Sample records for additional computational expense

  1. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5- and 10-year depreciation), annual costs for maintenance and software updates, as well as the accompanying costs per operation (consumables, additional operating time), were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 min, and a 10-year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, a volume of more than 50 procedures per year appears favourable. Cost-effectiveness could be established if long-term results show a reduction in revisions or a better clinical outcome.
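
    The cost structure described in this abstract amounts to simple per-case arithmetic: depreciated acquisition and maintenance costs are spread over the annual case volume and added to the per-case consumables and extra operating-room time. A minimal sketch follows; all input figures are illustrative assumptions, not values taken from the study.

```python
def incremental_cost_per_case(acquisition, depreciation_years, annual_maintenance,
                              consumables_per_case, extra_or_minutes, or_cost_per_minute,
                              cases_per_year):
    """Illustrative incremental cost of one navigated TKA (all inputs are assumed figures)."""
    fixed_per_case = (acquisition / depreciation_years + annual_maintenance) / cases_per_year
    variable_per_case = consumables_per_case + extra_or_minutes * or_cost_per_minute
    return fixed_per_case + variable_per_case

# Hypothetical system: 60,000 acquisition over 10 years, 5,000/year maintenance,
# 100 consumables per case, 14 extra minutes at 10 per operating-room minute.
for volume in (25, 50, 100, 200, 500):
    cost = incremental_cost_per_case(60000, 10, 5000, 100, 14, 10, volume)
    print(f"{volume:3d} cases/year -> incremental cost {cost:7.2f} per case")
```

    With figures of this kind the per-case fixed cost falls steeply between 50 and 100 cases per year, which is the pattern the study reports.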

  2. Alloy Design Data Generated for B2-Ordered Compounds

    NASA Technical Reports Server (NTRS)

    Noebe, Ronald D.; Bozzolo, Guillermo; Abel, Phillip B.

    2003-01-01

    Developing alloys based on ordered compounds is significantly more complicated than developing designs based on disordered materials. In ordered compounds, the major constituent elements reside on particular sublattices. Therefore, the addition of a ternary element to a binary-ordered compound is complicated by the manner in which the ternary addition is made (at the expense of which binary component). When ternary additions are substituted for the wrong constituent, the physical and mechanical properties usually degrade. In some cases the resulting degradation in properties can be quite severe. For example, adding alloying additions to NiAl in the wrong combination (i.e., alloying additions that prefer the Al sublattice but are added at the expense of Ni) will severely embrittle the alloy to the point that it can literally fall apart during processing on cooling from the molten state. Consequently, alloying additions that strongly prefer one sublattice over another should always be added at the expense of that component during alloy development. Elements that have a very weak preference for a sublattice can usually be safely added at the expense of either element and will accommodate any deviation from stoichiometry by filling in for the deficient component. Unfortunately, this type of information is not known beforehand for most ordered systems. Therefore, a computational survey study, using a recently developed quantum approximate method, was undertaken at the NASA Glenn Research Center to determine the preferred site occupancy of ternary alloying additions to 12 different B2-ordered compounds including NiAl, FeAl, CoAl, CoFe, CoHf, CoTi, FeTi, RuAl, RuSi, RuHf, RuTi, and RuZr. Some of these compounds are potential high temperature structural alloys; others are used in thin-film magnetic and other electronic applications. The results are summarized. The italicized elements represent the sum total of previously known alloying information and verify the computational method used to establish the table. Details of the computational procedures used to determine the preferred site occupancy can be found in reference 2. As further substantiation of the validity of the technique, and its extension to even more complicated systems, it was applied to two simultaneous alloying additions in an ordered alloy.

  3. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  4. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  5. High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems

    DTIC Science & Technology

    2017-05-01

    addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API...expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU...ERDC TR-17-2, Military Engineering Applied Research, High-Fidelity Simulations of Electromagnetic Propagation and RF Communication

  6. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and...was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion...the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  7. 24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...

  8. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  9. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  10. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  11. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  12. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  13. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  14. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
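
    The scaling argument can be made explicit with the standard discrete-adjoint identities (a generic formulation, not taken verbatim from the cited overview). For a converged flow state Q satisfying the residual equation R(Q, D) = 0 and an output J(Q, D), one adjoint solve replaces one flow solve per design variable:

```latex
\left[\frac{\partial R}{\partial Q}\right]^{T} \Lambda = -\left[\frac{\partial J}{\partial Q}\right]^{T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}D_m} = \frac{\partial J}{\partial D_m} + \Lambda^{T}\,\frac{\partial R}{\partial D_m},
\quad m = 1,\dots,N_D .
```

    Because the adjoint system does not depend on m, obtaining all N_D sensitivities costs one flow solution plus one adjoint solution, whereas finite differencing would require on the order of N_D additional flow solutions.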

  15. Self-energy matrices for electron transport calculations within the real-space finite-difference formalism

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji; Blügel, Stefan

    2017-03-01

    The self-energy term used in transport calculations, which describes the coupling between electrode and transition regions, can be evaluated from only a limited number of the propagating and evanescent waves of a bulk electrode. This in itself reduces the computational expense of transport calculations. In this paper, we present a mathematical formula for reducing the computational expenses further without using any approximation and without losing accuracy. So far, the self-energy term has been handled as a matrix with the same dimension as the Hamiltonian submatrix representing the interaction between an electrode and a transition region. In this work, through the singular-value decomposition of the submatrix, the self-energy matrix is handled as a smaller matrix, whose dimension is the rank of the Hamiltonian submatrix. This procedure is practical when pseudopotentials in a separable form are used, and the computational expenses for determining the self-energy matrix are reduced by 90% when employing a code based on the real-space finite-difference formalism and projector-augmented wave method. In addition, this technique is applicable to transport calculations using atomic or localized basis sets. Adopting the self-energy matrices obtained from this procedure, we present the calculation of the electron transport properties of C20 molecular junctions. The application demonstrates that the electron transmissions are sensitive to the orientation of the molecule with respect to the electrode surface. In addition, channel decomposition of the scattering wave functions reveals that some unoccupied C20 molecular orbitals mainly contribute to the electron conduction through the molecular junction.
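
    A minimal numpy sketch of the rank-reduction idea, under assumed names: B is the Hamiltonian coupling block between the transition region (dimension N) and the electrode surface block (dimension M), g is the corresponding electrode Green's function block, and the self-energy is Sigma = B g B†. This does not reproduce the paper's implementation; it only illustrates how the SVD confines the expensive electrode quantity to an r-dimensional subspace, r being the rank of B.

```python
import numpy as np

def reduced_self_energy(B, g, tol=1e-10):
    """Evaluate Sigma = B g B^dagger through the SVD of the coupling block B.

    With B = U S V^dagger of rank r, the inner product V^dagger g V is only r x r,
    so the surface Green's function is needed only in an r-dimensional subspace.
    """
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))              # numerical rank of the coupling block
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T
    core = Vr.conj().T @ g @ Vr                  # r x r block of the surface Green's function
    return Ur @ (sr[:, None] * core * sr[None, :]) @ Ur.conj().T

# Hypothetical sizes: 200 transition-region orbitals coupled to 50 electrode orbitals.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 50))   # rank <= 50
g = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
Sigma = reduced_self_energy(B, g)                # 200 x 200 self-energy matrix
```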

  16. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  17. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  18. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  19. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  20. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  1. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  2. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  3. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  4. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  5. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  6. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  7. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  8. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  9. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Juliane

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
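
    The loop that surrogate frameworks of this kind follow can be sketched generically: build a cheap model from the points evaluated so far, use it to propose the next expensive evaluation, and repeat until the budget is exhausted. The sketch below uses a simple Gaussian RBF surrogate and random candidate sampling; it is an illustration of the idea, not the MISO implementation, and all names and settings are assumptions.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant to the expensive-function samples (X, y)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * d) ** 2) + 1e-10 * np.eye(len(X)), y)

def rbf_eval(X, w, Xq, eps=1.0):
    d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

def expensive_objective(x):                          # stand-in for the black-box simulation
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(1)
dim, budget = 2, 30
X = rng.uniform(0, 1, (5, dim))                      # initial experimental design
y = np.array([expensive_objective(x) for x in X])
while len(X) < budget:
    w = rbf_fit(X, y)                                # cheap surrogate of the objective
    cand = rng.uniform(0, 1, (500, dim))             # candidate-point sampling strategy
    x_new = cand[np.argmin(rbf_eval(X, w, cand))]    # most promising candidate
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_objective(x_new))
print("best value found:", y.min())
```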

  11. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  12. 26 CFR 1.50B-1 - Definitions of WIN expenses and WIN employees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... employee. (c) Trade or business expenses. The term “WIN expenses” includes only salaries and wages which... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Definitions of WIN expenses and WIN employees. 1... INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-1 Definitions of...

  13. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
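
    A minimal sketch of the linear variant (principal component analysis of the high-dimensional outputs followed by radial-basis-function interpolation of the reduced coefficients) is given below; the toy data, mode count, and function names are assumptions, and the kernel-PCA variant is not shown. It uses scipy's RBFInterpolator (scipy >= 1.7).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_rosm(X_train, Y_train, n_modes=5):
    """Linear ROSM sketch: PCA of the high-dimensional outputs + RBF over the coefficients."""
    Y_mean = Y_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y_train - Y_mean, full_matrices=False)
    modes = Vt[:n_modes]                        # principal directions of the output space
    coeffs = (Y_train - Y_mean) @ modes.T       # reduced coordinates of each snapshot
    rbf = RBFInterpolator(X_train, coeffs)      # map design inputs -> reduced coordinates
    return lambda X_query: rbf(X_query) @ modes + Y_mean

# Toy data: 40 designs in 3 parameters, each producing a 1000-point "field" output.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (40, 3))
grid = np.linspace(0, 1, 1000)
Y = np.sin(2 * np.pi * grid[None, :] * (1 + X[:, :1])) * X[:, 1:2] + X[:, 2:3]
surrogate = fit_rosm(X, Y)
Y_pred = surrogate(X[:5])                       # approximate fields for the first 5 designs
```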

  14. 47 CFR 32.6112 - Motor vehicle expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Motor vehicle expense. 32.6112 Section 32.6112 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS.../or to other Plant Specific Operations Expense accounts. These amounts shall be computed on the basis...

  15. Advanced computational simulations of water waves interacting with wave energy converters

    NASA Astrophysics Data System (ADS)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.

  16. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  17. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  18. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  19. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  20. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  1. Thermochemical Modeling of Nonequilibrium Oxygen Flows

    NASA Astrophysics Data System (ADS)

    Neitzel, Kevin Joseph

    The development of hypersonic vehicles leans heavily on computational simulation due to the high enthalpy flow conditions that are expensive and technically challenging to replicate experimentally. The accuracy of the nonequilibrium modeling in the computer simulations dictates the design margin that is required for the thermal protection system and flight dynamics. Previous hypersonic vehicles, such as Apollo and the Space Shuttle, were primarily concerned with re-entry TPS design. The strong flow conditions of re-entry, involving Mach numbers of 25, quickly dissociate the oxygen molecules in air. Sustained-flight hypersonic vehicles will be designed to operate in Mach number ranges of 5 to 10. The oxygen molecules will not quickly dissociate and will play an important role in the flow field behavior. The development of nonequilibrium models of oxygen is crucial for limiting modeling uncertainty. Thermochemical nonequilibrium modeling is investigated for oxygen flows. Specifically, the vibrational relaxation and dissociation behavior that dominate the nonequilibrium physics in this flight regime are studied in detail. The widely used two-temperature (2T) approach is compared to the higher fidelity and more computationally expensive state-to-state (STS) approach. This dissertation utilizes a wide range of rate sources, including newly available STS rates, to conduct a comprehensive study of modeling approaches for hypersonic nonequilibrium thermochemical modeling. Additionally, the physical accuracy of the computational methods is assessed by comparing the numerical results with available experimental data. The numerical results and experimental measurements present strong nonequilibrium, and even non-Boltzmann behavior in the vibrational energy mode for the sustained hypersonic flight regime. The STS approach is able to better capture the behavior observed in the experimental data, especially for stronger nonequilibrium conditions. Additionally, a reduced order model (ROM) modification to the 2T model is developed to improve the capability of the 2T approach framework.

  2. 7 CFR 1484.53 - What are the requirements for documenting and reporting contributions?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contribution must be documented by the Cooperator, showing the method of computing non-cash contributions, salaries, and travel expenses. (b) Each Cooperator must keep records of the methods used to compute the value of non-cash contributions, and (1) Copies of invoices or receipts for expenses paid by the U.S...

  3. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... medical care includes the diagnosis, cure, mitigation, treatment, or prevention of disease. Expenses paid... taxable year for insurance that constitute expenses paid for medical care shall, for purposes of computing... care of the taxpayer, his spouse, or a dependent of the taxpayer and not be compensated for by...

  4. 26 CFR 1.556-2 - Adjustments to taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of deductions for trade or business expenses and depreciation which are allocable to the operation... computed without the deduction of the amount disallowed under section 556(b)(5), relating to expenses and... disallowed under section 556(b)(5), relating to expenses and depreciation applicable to property of the...

  5. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  6. Community Cloud Computing

    NASA Astrophysics Data System (ADS)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  7. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.

  8. Using computers to overcome math-phobia in an introductory course in musical acoustics

    NASA Astrophysics Data System (ADS)

    Piacsek, Andrew A.

    2002-11-01

    In recent years, the desktop computer has acquired the signal processing and visualization capabilities once obtained only with expensive specialized equipment. With the appropriate A/D card and software, a PC can behave like an oscilloscope, a real-time signal analyzer, a function generator, and a synthesizer, with both audio and visual outputs. In addition, the computer can be used to visualize specific wave behavior, such as superposition and standing waves, refraction, dispersion, etc. These capabilities make the computer an invaluable tool to teach basic acoustic principles to students with very poor math skills. In this paper I describe my approach to teaching the introductory-level Physics of Musical Sound at Central Washington University, in which very few science students enroll. Emphasis is placed on how visualization with computers can help students appreciate and apply quantitative methods for analyzing sound.

  9. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.

  10. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
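
    For orientation, the sketch below computes a cross-sectional AUC together with a DeLong-style placement-value variance, a classical resampling-free estimator that is closely related to (but simpler than) the cross-validated influence-curve estimator developed in this work; the data and helper names are hypothetical.

```python
import numpy as np

def auc_with_delong_variance(scores, labels):
    """AUC plus a DeLong-style variance estimate based on placement values."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    n1, n0 = len(pos), len(neg)
    # Placement of each positive among the negatives, and of each negative among the positives.
    v10 = np.array([np.mean((p > neg) + 0.5 * (p == neg)) for p in pos])
    v01 = np.array([np.mean((pos > q) + 0.5 * (pos == q)) for q in neg])
    auc = v10.mean()
    var = v10.var(ddof=1) / n1 + v01.var(ddof=1) / n0
    return auc, var

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 400)
scores = labels + rng.normal(0.0, 1.0, 400)          # imperfect classifier scores
auc, var = auc_with_delong_variance(scores, labels)
print(f"AUC = {auc:.3f}, 95% CI half-width = {1.96 * np.sqrt(var):.3f}")
```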

  11. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method to predict the shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, there is only one that has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
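
    Tridiagonal systems of the kind referred to above are conventionally solved with the Thomas algorithm; a plain serial version is sketched below for orientation (the massively parallel solver used in the paper is not reproduced), with hypothetical array names.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve on a small diagonally dominant system.
n = 6
rng = np.random.default_rng(4)
a, c = rng.random(n), rng.random(n)
a[0] = c[-1] = 0.0
b = 4.0 + rng.random(n)
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```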

  12. Efficient Semiparametric Inference Under Two-Phase Sampling, With Applications to Genetic Association Studies.

    PubMed

    Tao, Ran; Zeng, Donglin; Lin, Dan-Yu

    2017-01-01

    In modern epidemiological and clinical studies, the covariates of interest may involve genome sequencing, biomarker assay, or medical imaging and thus are prohibitively expensive to measure on a large number of subjects. A cost-effective solution is the two-phase design, under which the outcome and inexpensive covariates are observed for all subjects during the first phase and that information is used to select subjects for measurements of expensive covariates during the second phase. For example, subjects with extreme values of quantitative traits were selected for whole-exome sequencing in the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP). Herein, we consider general two-phase designs, where the outcome can be continuous or discrete, and inexpensive covariates can be continuous and correlated with expensive covariates. We propose a semiparametric approach to regression analysis by approximating the conditional density functions of expensive covariates given inexpensive covariates with B-spline sieves. We devise a computationally efficient and numerically stable EM-algorithm to maximize the sieve likelihood. In addition, we establish the consistency, asymptotic normality, and asymptotic efficiency of the estimators. Furthermore, we demonstrate the superiority of the proposed methods over existing ones through extensive simulation studies. Finally, we present applications to the aforementioned NHLBI ESP.

  13. Evaluating vortex generator jet experiments for turbulent flow separation control

    NASA Astrophysics Data System (ADS)

    von Stillfried, F.; Kékesi, T.; Wallin, S.; Johansson, A. V.

    2011-12-01

    Separating turbulent boundary-layers can be energized by streamwise vortices from vortex generators (VG) that increase the near wall momentum as well as the overall mixing of the flow so that flow separation can be delayed or even prevented. In general, two different types of VGs exist: passive vane VGs (VVG) and active VG jets (VGJ). Even though VGs are already successfully used in engineering applications, it is still time-consuming and computationally expensive to include them in a numerical analysis. Fully resolved VGs in a computational mesh lead to a very high number of grid points and thus, computational costs. In addition, computational parameter studies for such flow control devices take much time to set up. Therefore, much of the research work is still carried out experimentally. KTH Stockholm develops a novel VGJ model that makes it possible to only include the physical influence in terms of the additional stresses that originate from the VGJs without the need to locally refine the computational mesh. Such a modelling strategy enables fast VGJ parameter variations and makes optimization studies easily possible. For that, VGJ experiments are evaluated in this contribution and results are used for developing a statistical VGJ model.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
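
    To make the comparison concrete, the sketch below builds one fingerprint by convolving a synthetic spectrum with a first-derivative-of-Gaussian filter (the baseline method named above) and another from one level of Haar wavelet detail coefficients, then reports their Euclidean distance after normalisation. The spectrum, filter width, and wavelet choice are assumptions, not the HYDICE processing parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(5)
bands = np.linspace(0, 1, 256)
spectrum = np.exp(-((bands - 0.4) / 0.05) ** 2) + 0.5 * np.exp(-((bands - 0.7) / 0.02) ** 2)
spectrum += 0.01 * rng.standard_normal(bands.size)

# Baseline fingerprint: convolution with a first-derivative-of-Gaussian filter.
fp_gauss = gaussian_filter1d(spectrum, sigma=4.0, order=1)

# Wavelet-style fingerprint: one level of Haar detail coefficients, upsampled to the band grid.
pairs = spectrum.reshape(-1, 2)
detail = (pairs[:, 1] - pairs[:, 0]) / np.sqrt(2.0)
fp_haar = np.repeat(detail, 2)

# Compare the two fingerprints after normalising each to unit energy.
d = np.linalg.norm(fp_gauss / np.linalg.norm(fp_gauss) - fp_haar / np.linalg.norm(fp_haar))
print(f"Euclidean distance between normalised fingerprints: {d:.3f}")
```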

  15. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.

  16. CASL VMA Milestone Report FY16 (L3:VMA.VUQ.P13.08): Westinghouse Mixing with STAR-CCM+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilkey, Lindsay Noelle

    2016-09-30

    STAR-CCM+ (STAR) is a high-resolution computational fluid dynamics (CFD) code developed by CD-adapco. STAR includes validated physics models and a full suite of turbulence models including ones from the k-ε and k-ω families. STAR is currently being developed to handle two-phase flows, but the current focus of the software is single phase flow. STAR can use imported meshes or use the built-in meshing software to create computation domains for CFD. Since the solvers generally require a fine mesh for good computational results, the meshes used with STAR tend to number in the millions of cells, with that number growing with simulation and geometry complexity. The time required to model the flow of a full 5x5 Mixing Vane Grid Assembly (5x5MVG) in the current STAR configuration is on the order of hours, and can be very computationally expensive. COBRA-TF (CTF) is a low-resolution subchannel code that can be trained using high fidelity data from STAR. CTF does not have turbulence models and instead uses a turbulent mixing coefficient β. With a properly calibrated β, CTF can be used as a low-computational-cost alternative to expensive full CFD calculations performed with STAR. During the Hi2Lo work with CTF and STAR, STAR-CCM+ will be used to calibrate β and to provide high-resolution results that can be used in the place of and in addition to experimental results to reduce the uncertainty in the CTF results.

  17. The Computer Aided Aircraft-design Package (CAAP)

    NASA Technical Reports Server (NTRS)

    Yalif, Guy U.

    1994-01-01

    The preliminary design of an aircraft is a complex, labor-intensive, and creative process. Since the 1970s, many computer programs have been written to help automate preliminary airplane design. Time and resource analyses have identified 'a substantial decrease in project duration with the introduction of an automated design capability'. Proof-of-concept studies have been completed which establish 'a foundation for a computer-based airframe design capability'. Unfortunately, today's design codes exist in many different languages on many, often expensive, hardware platforms. Through the use of a module-based system architecture, the Computer aided Aircraft-design Package (CAAP) will eventually bring together many of the most useful features of existing programs. Through the use of an expert system, it will add an additional feature that could be described as indispensable to entry level engineers and students: the incorporation of 'expert' knowledge into the automated design process.

  18. RoboCal: An automated nondestructive assay system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staley, H.C.; Hollen, R.M.; Bonner, C.A.

    1990-01-01

    The manager of a facility handling special nuclear material (SNM) is caught in a squeeze between increased state and federal regulations and tighter funding. RoboCal uses a robot to manipulate canisters containing SNM to lower worker radiation exposure and to provide increased utilization of expensive assay equipment. In addition, it helps with accountability and material tracking. It consists of a hierarchical network of more than a dozen computers and provides a single point of contact for the user to accomplish multiple assays.

  19. Learning Reverse Engineering and Simulation with Design Visualization

    NASA Technical Reports Server (NTRS)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  20. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  1. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  2. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  3. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  4. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    NASA Astrophysics Data System (ADS)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of the computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems and other often expensive complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code. There is a need to adapt the program code for each configuration of the computer complex. The practical experience of the authors has shown that the creation of a special control system for computing with the possibility of remote use can greatly simplify the implementation of simulations and increase the performance of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.

  5. Aortic dissection simulation models for clinical support: fluid-structure interaction vs. rigid wall models.

    PubMed

    Alimohammadi, Mona; Sherwood, Joseph M; Karimpour, Morad; Agu, Obiekezie; Balabani, Stavroula; Díaz-Zuccarini, Vanessa

    2015-04-15

    The management and prognosis of aortic dissection (AD) is often challenging and the use of personalised computational models is being explored as a tool to improve clinical outcome. Including vessel wall motion in such simulations can provide more realistic and potentially accurate results, but requires significant additional computational resources, as well as expertise. With clinical translation as the final aim, trade-offs between complexity, speed and accuracy are inevitable. The present study explores whether modelling wall motion is worth the additional expense in the case of AD, by carrying out fluid-structure interaction (FSI) simulations based on a sample patient case. Patient-specific anatomical details were extracted from computed tomography images to provide the fluid domain, from which the vessel wall was extrapolated. Two-way fluid-structure interaction simulations were performed, with coupled Windkessel boundary conditions and hyperelastic wall properties. The blood was modelled using the Carreau-Yasuda viscosity model and turbulence was accounted for via a shear stress transport model. A simulation without wall motion (rigid wall) was carried out for comparison purposes. The displacement of the vessel wall was comparable to reports from imaging studies in terms of intimal flap motion and contraction of the true lumen. Analysis of the haemodynamics around the proximal and distal false lumen in the FSI model showed complex flow structures caused by the expansion and contraction of the vessel wall. These flow patterns led to significantly different predictions of wall shear stress, particularly its oscillatory component, which were not captured by the rigid wall model. Through comparison with imaging data, the results of the present study indicate that the fluid-structure interaction methodology employed herein is appropriate for simulations of aortic dissection. Regions of high wall shear stress were not significantly altered by the wall motion, however, certain collocated regions of low and oscillatory wall shear stress which may be critical for disease progression were only identified in the FSI simulation. We conclude that, if patient-tailored simulations of aortic dissection are to be used as an interventional planning tool, then the additional complexity, expertise and computational expense required to model wall motion is indeed justified.
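
    For reference, the Carreau-Yasuda law named above has the standard form below, where mu_0 and mu_inf are the zero- and infinite-shear viscosities, lambda a relaxation time, n the power-law index, and a the Yasuda exponent; the abstract does not report the parameter values used.

```latex
\mu(\dot{\gamma}) \;=\; \mu_{\infty} + \left(\mu_{0} - \mu_{\infty}\right)
\left[\,1 + (\lambda\dot{\gamma})^{a}\,\right]^{\frac{n-1}{a}}
```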

  6. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  7. 76 FR 9349 - Jim Woodruff Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... month. Southeastern would compute its purchased power obligation for each delivery point monthly... rates to include a pass-through of purchased power expenses. The capacity and energy charges to preference customers can be reduced because purchased power expenses will be recovered in a separate, pass...

  8. A microeconomic scheduler for parallel computers

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Abdel-Wahab, Hussein; Pothen, Alex

    1995-01-01

    We describe a scheduler based on the microeconomic paradigm for scheduling on-line a set of parallel jobs in a multiprocessor system. In addition to the classical objectives of increasing the system throughput and reducing the response time, we consider fairness in allocating system resources among the users, and providing the user with control over the relative performances of his jobs. We associate with every user a savings account in which he receives money at a constant rate. When a user wants to run a job, he creates an expense account for that job to which he transfers money from his savings account. The job uses the funds in its expense account to obtain the system resources it needs for execution. The share of the system resources allocated to the user is directly related to the rate at which the user receives money; the rate at which the user transfers money into a job expense account controls the job's performance. We prove that starvation is not possible in our model. Simulation results show that our scheduler improves both system and user performances in comparison with two different variable partitioning policies. It is also shown to be effective in guaranteeing fairness and providing control over the performance of jobs.
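    A minimal sketch of the funding model described above, under simplifying assumptions (it is not the authors' implementation): users earn money at a constant rate, transfer it into per-job expense accounts, and processors are allocated in proportion to each job's spending in the current step.

```python
class User:
    def __init__(self, name, income_rate):
        self.name = name
        self.income_rate = income_rate    # money received per time step
        self.savings = 0.0

class Job:
    def __init__(self, owner, funding_rate):
        self.owner = owner
        self.funding_rate = funding_rate  # transfer from savings per step
        self.expense = 0.0

def step(users, jobs, n_processors):
    for u in users:
        u.savings += u.income_rate              # 1) users earn income
    spent = {}
    for j in jobs:                              # 2) owners fund their jobs
        transfer = min(j.funding_rate, j.owner.savings)
        j.owner.savings -= transfer
        j.expense += transfer
        spent[j] = transfer
    total = sum(spent.values()) or 1.0          # 3) share proportional to spending
    return {j: n_processors * s / total for j, s in spent.items()}

alice, bob = User("alice", 10.0), User("bob", 5.0)
jobs = [Job(alice, 8.0), Job(bob, 5.0)]
for job, procs in step([alice, bob], jobs, n_processors=32).items():
    print(f"{job.owner.name}: {procs:.1f} processors")
```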

  9. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
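    The core idea, predicting receptor-activation curves with a regressor trained on previously generated Monte Carlo runs, can be sketched as follows. The "corpus" here is a synthetic stand-in (an invented alpha-function curve), since the actual MC simulations and the paper's five-stage pipeline are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def fake_mc_curve(n_receptors, cleft_width, t):
    """Placeholder for an expensive Monte Carlo run (alpha-function shape)."""
    tau = 0.5 + cleft_width                 # ms, invented relationship
    peak = min(1.0, n_receptors / 200.0)
    return peak * (t / tau) * np.exp(1.0 - t / tau)

# Build a training corpus over a range of structural parameters and times.
X, y = [], []
for _ in range(2000):
    n_rec = rng.integers(20, 200)
    width = rng.uniform(0.1, 0.5)
    t = rng.uniform(0.0, 5.0)
    X.append([n_rec, width, t])
    y.append(fake_mc_curve(n_rec, width, t))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict a full open-receptor time course without re-running Monte Carlo.
times = np.linspace(0.0, 5.0, 11)
pred = model.predict([[120, 0.3, t] for t in times])
print(np.round(pred, 3))
```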

  10. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    NASA Astrophysics Data System (ADS)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this kind of target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform the simulation study of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive. Thus, improvement of computational efficiency will be essential for the detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  11. Lensfree Computational Microscopy Tools and their Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Sencan, Ikbal

    Conventional microscopy has been a revolutionary tool for biomedical applications since its invention several centuries ago. The ability to non-destructively observe very fine details of biological objects in real time has made it possible to answer many important questions about their structures and functions. Unfortunately, most of these advanced microscopes are complex, bulky, expensive, and/or hard to operate, so they could not reach beyond the walls of well-equipped laboratories. Recent improvements in optoelectronic components and computational methods allow the creation of imaging systems that better fulfill the specific needs of clinical or research-related biomedical applications. In this respect, lensfree computational microscopy aims to replace bulky and expensive optical components with compact and cost-effective alternatives through the use of computation, which can be particularly useful for lab-on-a-chip platforms as well as imaging applications in low-resource settings. Several high-throughput on-chip platforms are built with this approach for applications including, but not limited to, cytometry, micro-array imaging, rare cell analysis, telemedicine, and water quality screening. The lack of optical complexity in these lensfree on-chip imaging platforms is compensated by using computational techniques. These computational methods are utilized for various purposes in coherent, incoherent and fluorescent on-chip imaging platforms, e.g. improving the spatial resolution, undoing light diffraction without using lenses, localizing objects in a large volume, and retrieving the phase or the color/spectral content of the objects. For instance, pixel super-resolution approaches based on source shifting are used in lensfree imaging platforms to prevent undersampling, Bayer-pattern, and aliasing artifacts. Another method, iterative phase retrieval, is utilized to compensate for the lack of lenses by undoing the diffraction and removing the twin-image noise of in-line holograms. This technique enables recovering the complex optical field from its intensity measurement(s) by using additional constraints in iterations, such as spatial boundaries and other known properties of objects. Another computational tool employed in lensfree imaging is compressive sensing (or decoding), which is a novel method taking advantage of the fact that natural signals/objects are mostly sparse or compressible in known bases. This inherent property of objects enables better signal recovery when the number of measurements is low, even below the Nyquist rate, and increases the additive-noise immunity of the system.

  12. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
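    A hedged sketch of the center-selection step: evaluated points are ranked on the two objectives named above and the first P points become perturbation centers. The domination-count ordering used below is a crude stand-in for full non-dominated sorting, and the tabu mechanism and surrogate model are omitted.

```python
import numpy as np

def select_centers(X, f, P):
    """X: (n, d) evaluated points, f: (n,) expensive values, P: number of centers."""
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    obj = np.column_stack([f, -dists.min(axis=1)])    # both objectives minimized
    dominated_by = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i]):
                dominated_by[i] += 1                  # point i dominated by point j
    order = np.argsort(dominated_by)                  # crude front ordering
    return X[order[:P]]

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 2))
f = (X ** 2).sum(axis=1)                              # toy expensive function
print(select_centers(X, f, P=4))
```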

  13. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Two reasons are presented that discourage the use of computer symbolic manipulation systems in school mathematics at present: the cost of computer laboratories or expensive pocket computers, and the impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  14. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much additional computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
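    The feature expansion can be sketched as follows, with a simple NumPy Hough accumulator sized to match the image so it can be stacked as an extra input channel; the binarisation step is a stand-in for the morphological contrasting used by the authors, and the network itself is omitted.

```python
import numpy as np

def hough_accumulator(binary_img):
    """Simple line Hough transform; accumulator shaped like the input image."""
    h, w = binary_img.shape
    thetas = np.linspace(0.0, np.pi, h, endpoint=False)
    diag = np.hypot(h, w)
    rhos = np.linspace(-diag, diag, w)
    acc = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.nonzero(binary_img)
    for y, x in zip(ys, xs):
        rho = x * np.cos(thetas) + y * np.sin(thetas)       # one rho per theta bin
        cols = np.searchsorted(rhos, rho).clip(0, w - 1)
        acc[np.arange(h), cols] += 1.0                       # cast one vote per theta
    return acc / max(acc.max(), 1.0)

img = np.zeros((32, 32), dtype=np.float32)
img[16, 4:28] = 1.0                          # a horizontal line segment
binary = (img > 0.5).astype(np.float32)      # stand-in for morphological contrasting
extra = hough_accumulator(binary)
cnn_input = np.stack([img, extra])           # shape (2, 32, 32): image + Hough channel
print(cnn_input.shape)
```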

  15. Multi-Scale Surface Descriptors

    PubMed Central

    Cipriano, Gregory; Phillips, George N.; Gleicher, Michael

    2010-01-01

    Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
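    A minimal sketch of the descriptor idea under an assumed local frame (central point at the origin, z along the estimated normal): fit a quadratic height field to the neighbourhood and read off curvature-like values from its Hessian.

```python
import numpy as np

def quadratic_descriptor(neigh_xyz):
    """neigh_xyz: (n, 3) neighbourhood points in a frame where the central
    point is at the origin and z is along the (estimated) surface normal."""
    x, y, z = neigh_xyz[:, 0], neigh_xyz[:, 1], neigh_xyz[:, 2]
    # Fit z = a x^2 + b xy + c y^2 + d x + e y + f by least squares.
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeffs[:3]
    # Eigenvalues of the quadratic form give principal-curvature-like values.
    k1, k2 = np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
    return k1, k2

rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(50, 2))
z = 0.4 * xy[:, 0] ** 2 - 0.1 * xy[:, 1] ** 2          # a saddle-like patch
print(np.round(quadratic_descriptor(np.column_stack([xy, z])), 3))
```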

  16. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

  17. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  18. 41 CFR 302-17.3 - Types of moving expenses or allowances covered and general limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-RELOCATION INCOME TAX (RIT) ALLOWANCE § 302-17.3 Types of moving expenses or allowances covered and general... law authorizes reimbursement of additional income taxes resulting from certain moving expenses... actually paid or incurred, and are not allowable as a moving expense deduction for tax purposes. The types...

  19. 78 FR 50374 - Proposed Information Collection; Comment Request; Information and Communication Technology Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... expenses (purchases; and operating leases and rental payments) for four types of information and communication technology equipment and software (computers and peripheral equipment; ICT equipment, excluding computers and peripherals; electromedical and electrotherapeutic apparatus; and computer software, including...

  20. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  1. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
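    The permutation-plus-retriangularisation step can be illustrated with a dense QR factorisation standing in for the paper's square-root-free fast Givens rotations; the check confirms that the permuted factor reproduces the permuted product R^T R.

```python
import numpy as np

def permute_sqrt_factor(R, perm):
    """Return an upper-triangular R_new with R_new^T R_new = P^T (R^T R) P."""
    Rp = R[:, perm]                        # column (state) permutation
    _, R_new = np.linalg.qr(Rp)            # re-triangularisation
    return R_new

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
R = np.linalg.qr(A)[1]                     # some upper-triangular factor
perm = [1, 2, 3, 4, 0]                     # cyclic permutation of the states
R_new = permute_sqrt_factor(R, perm)

info = R.T @ R
info_perm = info[np.ix_(perm, perm)]       # permuted information matrix
print(np.allclose(R_new.T @ R_new, info_perm))   # True
```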

  2. Computational methods in drug discovery

    PubMed Central

    Leelananda, Sumudu P

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341

  3. Computational methods in drug discovery.

    PubMed

    Leelananda, Sumudu P; Lindert, Steffen

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  4. Low-Cost Terminal Alternative for Learning Center Managers. Final Report.

    ERIC Educational Resources Information Center

    Nix, C. Jerome; And Others

    This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…

  5. EPEPT: A web service for enhanced P-value estimation in permutation tests

    PubMed Central

    2011-01-01

    Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/ PMID:22024252
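    For context, the standard empirical estimator that EPEPT improves upon looks like the sketch below (a difference-of-means statistic on toy data); the enhanced tail-approximation estimator itself is not reproduced here.

```python
import numpy as np

def empirical_pvalue(x, y, n_perm=10000, rng=None):
    """One-sided permutation P-value for mean(x) - mean(y) using (b+1)/(n+1)."""
    rng = rng or np.random.default_rng(0)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    b = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = pooled[: len(x)].mean() - pooled[len(x):].mean()
        b += stat >= observed
    return (b + 1) / (n_perm + 1)          # very small P-values need huge n_perm

rng = np.random.default_rng(0)
x = rng.normal(0.6, 1.0, size=30)          # toy "treatment" sample
y = rng.normal(0.0, 1.0, size=30)          # toy "control" sample
print(empirical_pvalue(x, y))
```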

  6. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  7. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We could show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and liver case up to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  8. Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program

    ERIC Educational Resources Information Center

    Perri, M. J.; Weber, S. H.

    2014-01-01

    A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.

  9. 77 FR 18704 - Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    ... is publishing a final rule establishing an additional fee for a particular service: Travel expenses... Copyright Office's schedule of fees by adding a fee for travel expenses in connection with participation by... travel expenses. As the office administering the nation's records of copyright ownership and as the...

  10. Printing soft matter in three dimensions.

    PubMed

    Truby, Ryan L; Lewis, Jennifer A

    2016-12-14

    Light- and ink-based three-dimensional (3D) printing methods allow the rapid design and fabrication of materials without the need for expensive tooling, dies or lithographic masks. They have led to an era of manufacturing in which computers can control the fabrication of soft matter that has tunable mechanical, electrical and other functional properties. The expanding range of printable materials, coupled with the ability to programmably control their composition and architecture across various length scales, is driving innovation in myriad applications. This is illustrated by examples of biologically inspired composites, shape-morphing systems, soft sensors and robotics that only additive manufacturing can produce.

  11. Printing soft matter in three dimensions

    NASA Astrophysics Data System (ADS)

    Truby, Ryan L.; Lewis, Jennifer A.

    2016-12-01

    Light- and ink-based three-dimensional (3D) printing methods allow the rapid design and fabrication of materials without the need for expensive tooling, dies or lithographic masks. They have led to an era of manufacturing in which computers can control the fabrication of soft matter that has tunable mechanical, electrical and other functional properties. The expanding range of printable materials, coupled with the ability to programmably control their composition and architecture across various length scales, is driving innovation in myriad applications. This is illustrated by examples of biologically inspired composites, shape-morphing systems, soft sensors and robotics that only additive manufacturing can produce.

  12. Simplex-stochastic collocation method with improved scalability

    NASA Astrophysics Data System (ADS)

    Edeling, W. N.; Dwight, R. P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks, and to improve upon this bad scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the Set-Covering problem, and we integrate the SSC method in the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.

  13. Computing Systems | High-Performance Computing | NREL

    Science.gov Websites

    investigate, build, and test models of complex phenomena or entire integrated systems that cannot be directly observed or manipulated in the lab, or would be too expensive or time consuming. Models and visualizations

  14. 41 CFR 301-10.301 - How do I compute my mileage reimbursement?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false How do I compute my...-TRANSPORTATION EXPENSES Privately Owned Vehicle (POV) § 301-10.301 How do I compute my mileage reimbursement? You compute mileage reimbursement by multiplying the distance traveled, determined under § 301-10.302 of this...

  15. Is Your School Y2K-OK?

    ERIC Educational Resources Information Center

    Bates, Martine G.

    1999-01-01

    The most vulnerable Y2K areas for schools are networked computers, free-standing personal computers, software, and embedded chips in utilities such as telephones and fire alarms. Expensive, time-consuming procedures and software have been developed for testing and bringing most computers into compliance. Districts need a triage prioritization…

  16. Understanding the Internet.

    ERIC Educational Resources Information Center

    Oblinger, Diana

    The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…

  17. "Mini", "Midi" and the Student.

    ERIC Educational Resources Information Center

    Edwards, Perry; Broadwell, Bruce

    Mini- and midi-computers have been introduced into the computer science program at Sierra College to afford students more direct contact with computers. The college's administration combined with the Science and Business departments to share the expense and utilization of the program. The National Cash Register Century 100 and the Data General…

  18. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...

  19. 41 CFR 301-11.521 - Must I file a claim to be reimbursed for the additional income taxes incurred?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... be reimbursed for the additional income taxes incurred? 301-11.521 Section 301-11.521 Public... ALLOWABLE TRAVEL EXPENSES 11-PER DIEM EXPENSES Income Tax Reimbursement Allowance (ITRA), Tax Years 1993 and 1994 Employee Responsibilities § 301-11.521 Must I file a claim to be reimbursed for the additional...

  20. 41 CFR 301-11.621 - Must I file a claim to be reimbursed for the additional income taxes incurred?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... be reimbursed for the additional income taxes incurred? 301-11.621 Section 301-11.621 Public... ALLOWABLE TRAVEL EXPENSES 11-PER DIEM EXPENSES Income Tax Reimbursement Allowance (ITRA), Tax Years 1995 and Thereafter Employee Responsibilities § 301-11.621 Must I file a claim to be reimbursed for the additional...

  1. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems to compute an accurate rendition of a scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  2. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. Regression with Small Data Sets: A Case Study using Code Surrogates in Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, C.; Fan, Y. J.

    There has been an increasing interest in recent years in the mining of massive data sets whose sizes are measured in terabytes. While it is easy to collect such large data sets in some application domains, there are others where collecting even a single data point can be very expensive, so the resulting data sets have only tens or hundreds of samples. For example, when complex computer simulations are used to understand a scientific phenomenon, we want to run the simulation for many different values of the input parameters and analyze the resulting output. The data set relating the simulation inputs and outputs is typically quite small, especially when each run of the simulation is expensive. However, regression techniques can still be used on such data sets to build an inexpensive "surrogate" that could provide an approximate output for a given set of inputs. A good surrogate can be very useful in sensitivity analysis, uncertainty analysis, and in designing experiments. In this paper, we compare different regression techniques to determine how well they predict melt-pool characteristics in the problem domain of additive manufacturing. Our analysis indicates that some of the commonly used regression methods do perform quite well even on small data sets.

  4. 26 CFR 1.861-10 - Special allocations of interest expense.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... In addition, assets which are the subject of qualified nonrecourse indebtedness or integrated... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Special allocations of interest expense. 1.861... § 1.861-10 Special allocations of interest expense. (a)-(d) [Reserved] (e) Treatment of certain...

  5. [Diagnostic possibilities of digital volume tomography].

    PubMed

    Lemkamp, Michael; Filippi, Andreas; Berndt, Dorothea; Lambrecht, J Thomas

    2006-01-01

    Cone beam computed tomography provides high-quality 3D images of craniofacial structures. Detail resolution is increased while x-ray exposure is reduced compared with conventional computed tomography. The volume is analysed in three orthogonal planes, which can be rotated independently without quality loss. Cone beam computed tomography appears to be a less expensive alternative to conventional computed tomography with lower x-ray exposure.

  6. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  7. Kinetic barriers in the isomerization of substituted ureas: implications for computer-aided drug design.

    PubMed

    Loeffler, Johannes R; Ehmki, Emanuel S R; Fuchs, Julian E; Liedl, Klaus R

    2016-05-01

    Urea derivatives are ubiquitously found in many chemical disciplines. N,N'-substituted ureas may show different conformational preferences depending on their substitution pattern. The high energy barrier for isomerization between the cis and trans states poses additional challenges for computational simulation techniques aiming to reproduce the biological properties of urea derivatives. Herein, we investigate the energetics of urea conformations and their interconversion using a broad spectrum of methodologies ranging from data mining, via quantum chemistry, to molecular dynamics simulation and free energy calculations. We find that the inversion of urea conformations is inherently slow and beyond the time scale of typical simulation protocols. Therefore, extra care needs to be taken by computational chemists to work with appropriate model systems. We find that both knowledge-driven approaches and physics-based methods may guide molecular modelers towards accurate starting structures for expensive calculations to ensure that conformations of urea derivatives are modeled as adequately as possible.

  8. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
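    A minimal, serial differential-evolution MCMC sketch of the kind SAChES builds on is shown below; the loosely coupled parallel chain ensembles, the Adaptive Metropolis stage and the fault-tolerance machinery are not reproduced, and the toy posterior is a standard Gaussian.

```python
import numpy as np

def log_post(theta):                       # toy posterior: standard 2-D Gaussian
    return -0.5 * np.sum(theta ** 2)

def de_mc(n_chains=10, n_steps=2000, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    chains = rng.normal(size=(n_chains, dim))
    gamma = 2.38 / np.sqrt(2 * dim)        # standard DE-MC scaling factor
    samples = []
    for _ in range(n_steps):
        for i in range(n_chains):
            # Propose with the difference of two other randomly chosen chains.
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            prop = chains[i] + gamma * (chains[a] - chains[b]) + 1e-4 * rng.normal(size=dim)
            if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop           # Metropolis acceptance
        samples.append(chains.copy())
    return np.concatenate(samples)

draws = de_mc()
print(draws.mean(axis=0), draws.std(axis=0))   # roughly [0, 0] and [1, 1]
```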

  9. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    NASA Astrophysics Data System (ADS)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme for the setting in which the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN requires neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  10. The AAHA Computer Program. American Animal Hospital Association.

    PubMed

    Albers, J W

    1986-07-01

    The American Animal Hospital Association Computer Program should benefit all small animal practitioners. Through the availability of well-researched and well-developed certified software, veterinarians will have increased confidence in their purchase decisions. With the expansion of computer applications to improve practice management efficiency, veterinary computer systems will further justify their initial expense. The development of the Association's veterinary computer network will provide a variety of important services to the profession.

  11. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  12. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
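    A hedged, single-objective simplification of the coupling: a neural-network response surface trained on all expensive evaluations so far pre-screens differential-evolution trial vectors, and only trials the surrogate predicts to improve are sent to the expensive function. The paper's Pareto-based multiobjective machinery is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive(x):                                   # stand-in for a costly CFD evaluation
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

rng = np.random.default_rng(0)
dim, n_pop = 4, 20
pop = rng.uniform(-1, 1, size=(n_pop, dim))
fit = np.array([expensive(x) for x in pop])
archive_X, archive_y = list(pop), list(fit)

for gen in range(15):
    # Rebuild the response surface on everything evaluated so far.
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(archive_X, archive_y)
    for i in range(n_pop):
        a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
        # DE mutation plus binomial crossover against the target vector.
        trial = np.where(rng.uniform(size=dim) < 0.9, a + 0.8 * (b - c), pop[i])
        # Only evaluate expensively if the surrogate predicts an improvement.
        if surrogate.predict([trial])[0] < fit[i]:
            y = expensive(trial)
            archive_X.append(trial)
            archive_y.append(y)
            if y < fit[i]:
                pop[i], fit[i] = trial, y

print("best:", fit.min(), pop[fit.argmin()].round(2))
```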

  13. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. Pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel based, fast semi-analytical, and known empirical models. Thus, we provide a measure of uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is an overall good agreement between solvers for idealized cross-section shape pipes. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability compared to pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% in computed values between the various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an additional step in practical workflows which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources. We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
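    One of the idealized references used for such benchmarking has a closed-form answer; the sketch below computes the Hagen-Poiseuille permeability of a single circular pipe (geometry and units are illustrative assumptions), which any numerical engine can be checked against.

```python
import math

def pipe_permeability(radius_m, sample_area_m2=None):
    """Darcy permeability of a single circular pipe embedded in a sample.

    Hagen-Poiseuille, Q = (pi R^4 / 8 mu) dP/L, and Darcy's law,
    Q = (k A / mu) dP/L, give k = pi R^4 / (8 A); with A = pi R^2
    (the pipe itself as the sample), k = R^2 / 8.
    """
    area = sample_area_m2 if sample_area_m2 is not None else math.pi * radius_m ** 2
    return math.pi * radius_m ** 4 / (8.0 * area)

R = 10e-6                                    # 10 micron pipe radius (illustrative)
k = pipe_permeability(R)                     # m^2
print(f"k = {k:.3e} m^2  ({k / 9.869e-13:.1f} darcy)")
```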

  14. 25 CFR 700.165 - Ineligible moving and related expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... expenses. A displaced person is not entitled to payment for— (a) The cost of moving any structure or other...) Physical changes at replacement location of business, farm or nonprofit organization, except as provided at § 700.157; or (g) Any additional expense of a business, farm, or nonprofit organization incurred because...

  15. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  16. Database Driven 6-DOF Trajectory Simulation for Debris Transport Analysis

    NASA Technical Reports Server (NTRS)

    West, Jeff

    2008-01-01

    Debris mitigation and risk assessment have been carried out by NASA and its contractors supporting Space Shuttle Return-To-Flight (RTF). As a part of this assessment, analysis of transport potential for debris that may be liberated from the vehicle or from pad facilities prior to tower clear (Lift-Off Debris) is being performed by MSFC. This class of debris includes plume driven and wind driven sources for which lift as well as drag are critical for the determination of the debris trajectory. As a result, NASA MSFC has a need for a debris transport or trajectory simulation that supports the computation of lift effect in addition to drag without the computational expense of fully coupled CFD with 6-DOF. A database driven 6-DOF simulation that uses aerodynamic force and moment coefficients for the debris shape that are interpolated from a database has been developed to meet this need. The design, implementation, and verification of the database driven six degree of freedom (6-DOF) simulation addition to the Lift-Off Debris Transport Analysis (LODTA) software are discussed in this paper.

  17. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
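    The additive-homomorphic property the protocol relies on can be illustrated with a toy Paillier-style scheme (tiny, insecure key sizes, for illustration only, and not the authors' implementation): ciphertexts are multiplied, and the decrypted result is the sum of the plaintexts.

```python
import math
import random

def keygen(p=293, q=433):                       # toy primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                        # valid because g = n + 1 is used
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

pub, priv = keygen()
scores = [3, 7, 11]                             # e.g. per-feature similarity terms
cts = [encrypt(pub, s) for s in scores]
c_sum = math.prod(cts) % (pub[0] ** 2)          # multiplying ciphertexts adds plaintexts
print(decrypt(priv, c_sum))                     # 21
```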

  18. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  19. 26 CFR 1.50B-3 - Estates and trusts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Estates and trusts. 1.50B-3 Section 1.50B-3... Computing Credit for Expenses of Work Incentive Programs § 1.50B-3 Estates and trusts. (a) General rule—(1) In general. In the case of an estate or trust, WIN expenses (as defined in paragraph (a) of § 1.50B-1...

  20. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
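    A sketch of the sequential idea under simplifying assumptions: an RBF metamodel is rebuilt each iteration and two points are added, the metamodel's extremum on a candidate grid and the candidate farthest from existing samples (a stand-in for the paper's density-function minimum).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive(x):                                     # toy expensive simulation
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.1 * (x ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(8, 2))                   # initial design
y = expensive(X)
cand = rng.uniform(-2, 2, size=(2000, 2))             # candidate pool

for it in range(10):
    rbf = RBFInterpolator(X, y)                       # rebuild the metamodel
    pred = rbf(cand)
    d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1).min(axis=1)
    picked = np.unique([pred.argmin(), d.argmax()])   # extremum + sparse-region point
    new_pts = cand[picked]
    cand = np.delete(cand, picked, axis=0)            # avoid re-selecting a point
    X = np.vstack([X, new_pts])
    y = np.concatenate([y, expensive(new_pts)])

print(len(X), "samples, best value so far:", y.min().round(3))
```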

  1. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  2. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  3. A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Visek & Maggs, Urbana, IL.

    This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…

  4. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey.

    PubMed

    Li, Qiang; Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-10-01

    The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Data were from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit.

  5. Reactive transport modeling in the subsurface environment with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, Wenkui; Beyer, Christof; Fleckenstein, Jan; Jang, Eunseon; Kalbacher, Thomas; Naumov, Dimitri; Shao, Haibing; Wang, Wenqing; Kolditz, Olaf

    2015-04-01

    Worldwide, sustainable water resource management is becoming an increasingly challenging task due to population growth and the extensive application of fertilizer in agriculture. Moreover, climate change places further stress on both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. This expense stems not only from numerically solving the highly non-linear initial boundary value problems of water flow in the unsaturated zone, which requires rather fine spatial and temporal discretization for correct mass balance and numerical stability, but also from the computationally intensive task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large-scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of the advantages of both codes: OGS provides a flexible choice of numerical approaches for simulating water flow in the vadose zone, such as the pressure-based or mixed forms of the Richards equation, whereas the IPhreeqc module simplifies data storage and its communication with OGS, which greatly facilitates coupling and code updating. Moreover, a parallelization scheme based on MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas efficient parallelization of the geochemical reactions is achieved by smart allocation of the computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application to a large-scale scenario in which the environmental fate of pesticides in a complex soil-aquifer system is studied.
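
    The mpi4py sketch below illustrates only the domain-decomposition idea described above: the cells of a concentration field are scattered across ranks, each rank performs its local reaction step, and the updated sub-domains are gathered again. The `react` function is a hypothetical stand-in for the PHREEQC-style chemistry; the real OGS-IPhreeqc coupling and its load balancing are not reproduced here.

    ```python
    # Run with e.g.:  mpiexec -n 4 python reactive_transport_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def react(concentrations, dt=1.0, k=0.1):
        """Placeholder first-order decay standing in for the geochemical solver."""
        return concentrations * np.exp(-k * dt)

    if rank == 0:
        # Root holds the global concentration field and splits it into chunks.
        global_field = np.linspace(1.0, 2.0, 1000)
        chunks = np.array_split(global_field, size)
    else:
        chunks = None

    local = comm.scatter(chunks, root=0)       # distribute the computational workload
    local = react(local)                       # each rank reacts only its own cells
    gathered = comm.gather(local, root=0)      # collect the updated sub-domains

    if rank == 0:
        print(np.concatenate(gathered).mean())
    ```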

  6. Reactive transport modeling in variably saturated porous media with OGS-IPhreeqc

    NASA Astrophysics Data System (ADS)

    He, W.; Beyer, C.; Fleckenstein, J. H.; Jang, E.; Kalbacher, T.; Shao, H.; Wang, W.; Kolditz, O.

    2014-12-01

    Worldwide, sustainable water resource management is becoming an increasingly challenging task due to population growth and the extensive application of fertilizer in agriculture. Moreover, climate change places further stress on both water quantity and quality. Reactive transport modeling in the coupled soil-aquifer system is a viable approach to assess the impacts of different land use and groundwater exploitation scenarios on water resources. However, the application of this approach is usually limited in spatial scale and to simplified geochemical systems due to the huge computational expense involved. This expense stems not only from numerically solving the highly non-linear initial boundary value problems of water flow in the unsaturated zone, which requires rather fine spatial and temporal discretization for correct mass balance and numerical stability, but also from the computationally intensive task of quantifying geochemical reactions. In the present study, a flexible and efficient tool for large-scale reactive transport modeling in variably saturated porous media and its applications are presented. The open source scientific software OpenGeoSys (OGS) is coupled with the IPhreeqc module of the geochemical solver PHREEQC. The new coupling approach makes full use of the advantages of both codes: OGS provides a flexible choice of numerical approaches for simulating water flow in the vadose zone, such as the pressure-based or mixed forms of the Richards equation, whereas the IPhreeqc module simplifies data storage and its communication with OGS, which greatly facilitates coupling and code updating. Moreover, a parallelization scheme based on MPI (Message Passing Interface) is applied, in which the computational task of water flow and mass transport is partitioned through domain decomposition, whereas efficient parallelization of the geochemical reactions is achieved by smart allocation of the computational workload over multiple compute nodes. The plausibility of the new coupling is verified by several benchmark tests. In addition, the efficiency of the new coupling approach is demonstrated by its application to a large-scale scenario in which the environmental fate of pesticides in a complex soil-aquifer system is studied.

  7. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    NASA Astrophysics Data System (ADS)

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-08-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, Eyegrade, a system for automatic grading of multiple choice exams is presented. While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition as well as optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. When compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data collection procedure. The tool has been validated with a set of experiments that show the ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems.

  8. Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.

    PubMed

    Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg

    2009-07-01

    In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
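
    As a rough sketch of the structure-tensor idea behind such shape-driven interpolation (not the paper's full 3-D sinogram algorithm), the snippet below estimates a per-pixel local orientation from smoothed products of image gradients; interpolation would then follow the direction perpendicular to the dominant eigenvector. The toy image and the smoothing scale are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor_orientation(image, sigma=2.0):
        gx = sobel(image, axis=1)              # horizontal gradient
        gy = sobel(image, axis=0)              # vertical gradient
        # Smoothed tensor components J = [[Jxx, Jxy], [Jxy, Jyy]]
        jxx = gaussian_filter(gx * gx, sigma)
        jxy = gaussian_filter(gx * gy, sigma)
        jyy = gaussian_filter(gy * gy, sigma)
        # Angle of the dominant eigenvector (direction of strongest intensity change);
        # the edge/interpolation direction is perpendicular to it.
        return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

    image = np.outer(np.hanning(64), np.hanning(64))   # toy image patch
    angles = structure_tensor_orientation(image)
    print(angles.shape, float(angles.mean()))
    ```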

  9. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  10. Arduino: a low-cost multipurpose lab equipment.

    PubMed

    D'Ausilio, Alessandro

    2012-06-01

    Typical experiments in psychological and neurophysiological settings often require the accurate control of multiple input and output signals. These signals are often generated or recorded via computer software and/or external dedicated hardware. Dedicated hardware is usually very expensive and requires additional software to control its behavior. In the present article, I present some accuracy tests on a low-cost and open-source I/O board (Arduino family) that may be useful in many lab environments. One of the strengths of Arduinos is the possibility they afford to load the experimental script on the board's memory and let it run without interfacing with computers or external software, thus granting complete independence, portability, and accuracy. Furthermore, a large community has arisen around the Arduino idea and offers many hardware add-ons and hundreds of free scripts for different projects. Accuracy tests show that Arduino boards may be an inexpensive tool for many psychological and neurophysiological labs.

  11. VAX CLuster upgrade: Report of a CPC task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, J.; Berry, H.; Kessler, P.

    The CSCF VAX cluster provides interactive computing for 100 users during prime time, plus a considerable amount of daytime and overnight batch processing. While this cluster represents less than 10% of the VAX computing power at BNL (6 MIPS out of 70), it has served as an important center for this larger network, supporting special hardware and software too expensive to maintain on every machine. In addition, it is the only unrestricted facility available to VAX/VMS users (other machines are typically dedicated to special projects). This committee's analysis shows that the CPUs on the CSCF cluster are currently badly oversaturated, frequently giving extremely poor interactive response. Short batch jobs (a necessary part of interactive work) typically take 3 to 4 times as long to execute as they would on an idle machine. There is also an immediate need for more scratch disk space and user permanent file space.

  12. Proceeding On : Parallelisation Of Critical Code Passages In PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Arkenberg, Mario; Wichert, Viktoria; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach here, by introducing especially adapted, parallel numerical methods and correspondingly parallelising time-critical code passages. In the following, we present our work on PHOENIX/3D. While parallelisation is generally worthwhile, it requires revision of time-consuming subroutines with respect to separability of localised data and variables in order to determine the optimal approach. Of course, the same applies to the code structure. The importance of this ongoing work can be showcased by recently derived benchmark results, which were generated utilising MPI and OpenMP. Furthermore, the need for a careful and thorough choice of an adequate, machine-dependent setup is discussed.

  13. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge using standard methods such as the Davidson algorithm in spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed that adaptively chooses the shift parameter to enforce convergence of states above a predefined energy threshold.

  14. Optimal pre-scheduling of problem remappings

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic is studied on one of the model problems, and it is shown to be effective and nearly optimal.

  15. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will readily move into the cloud to enhance their computational performance and make use of effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share their GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  16. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will readily move into the cloud to enhance their computational performance and make use of effectively unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on mechanisms for sharing GPUs among VMs, it cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications can achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share their GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  17. Public Response to Cost-Quality Tradeoffs in Clinical Decisions

    PubMed Central

    Beach, Mary Catherine; Asch, David A.; Jepson, Christopher; Hershey, John C.; Mohr, Tara; McMorrow, Stacey; Ubel, Peter A.

    2011-01-01

    Purpose: To explore public attitudes toward the incorporation of cost-effectiveness analysis into clinical decisions. Methods: The authors presented 781 jurors with a survey describing 1 of 6 clinical encounters in which a physician has to choose between cancer screening tests. They provided cost-effectiveness data for all tests, and in each scenario, the most effective test was more expensive. They instructed respondents to imagine that he or she was the physician in the scenario and asked them to choose which test to recommend and then explain their choice in an open-ended manner. The authors then qualitatively analyzed the responses by identifying themes and developed a coding scheme. Two authors separately coded the statements with high overall agreement (kappa = 0.76). Categories were not mutually exclusive. Results: Overall, 410 respondents (55%) chose the most expensive option, and 332 respondents (45%) chose a less expensive option. Explanatory comments were given by 82% of respondents. Respondents who chose the most expensive test focused on the increased benefit (without directly acknowledging the additional cost) (39%), a general belief that life is more important than money (22%), the significance of cancer risk for the patient in the scenario (20%), the belief that the benefit of the test was worth the additional cost (8%), and personal anecdotes/preferences (6%). Of the respondents who chose the less expensive test, 40% indicated that they did not believe that the patient in the scenario was at significant risk for cancer, 13% indicated that they thought the less expensive test was adequate or not meaningfully different from the more expensive test, 12% thought the cost of the test was not worth the additional benefit, 9% indicated that the test was too expensive (without mention of additional benefit), and 7% responded that resources were limited. Conclusions: Public response to cost-quality tradeoffs is mixed. Although some respondents justified their decision based on the cost-effectiveness information provided, many focused instead on specific features of the scenario or on general beliefs about whether cost should be incorporated into clinical decisions. PMID:14570295

  18. Efficient hyperspectral image segmentation using geometric active contour formulation

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.

    2014-10-01

    In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM), an unsupervised neural network, to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level set functional from a signed distance function, which helps to eliminate the re-initialization step that is computationally expensive. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the most significant spectral bands. Then the modified geometric level set functional based ACM is applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is capable of forcing the level set functional to remain close to a signed distance function, and therefore considerably reduces the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm on varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.
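
    The PCA band-reduction step described above can be illustrated with a short scikit-learn sketch on a hypothetical hyperspectral cube; the SOM training and the level set segmentation are not reproduced, and the cube construction and variance threshold are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Build a toy 64x64 scene with 120 bands as mixtures of 4 endmember spectra plus noise.
    abund = rng.random((64 * 64, 4))                     # per-pixel abundances
    spectra = rng.random((4, 120))                       # endmember signatures
    pixels = abund @ spectra + 0.01 * rng.standard_normal((64 * 64, 120))

    pca = PCA(n_components=0.99)                         # keep 99% of spectral variance
    reduced = pca.fit_transform(pixels)                  # (n_pixels, n_components)

    reduced_cube = reduced.reshape(64, 64, -1)
    print(reduced_cube.shape, pca.n_components_)         # only a handful of components remain
    ```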

  19. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  20. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  1. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  2. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.

  3. Simulation tools for robotics research and assessment

    NASA Astrophysics Data System (ADS)

    Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.

    2016-05-01

    The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the necessary simulation fidelity for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance due to the often cartoon nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real-time.

  4. Analysis of patients' willingness to be mobile, taking into account individual characteristics and two exemplary indications.

    PubMed

    Augustin, Jobst; Schäfer, Ines; Augustin, Matthias; Zander, Nicole

    2017-04-01

    With respect to health care planning, it is commonly assumed that patients consult the nearest physician. In reality, however, patients frequently accept greater efforts/expenses than necessary to see a physician. The objective of the present study was to determine under which circumstances patients were willing to accept additional efforts/expenses, and what role sociodemographic and clinical characteristics play in this regard. Data collection was carried out in the context of a multicenter cross-sectional study among office-based and hospital-affiliated (University Medical Center Hamburg-Eppendorf) dermatologists. Patients (n = 309) with psoriasis and chronic wounds were surveyed about their mobility patterns and disease severity. Data analysis was performed using descriptive and multivariate methods. The willingness to accept additional efforts/expenses is primarily determined by a physician's expertise and service portfolio. Comparing both diagnoses showed that psoriasis patients usually traveled longer distances than wound patients. Among psoriasis patients, one significant predictor for accepting additional efforts/expenses was the level of education. With regard to wound patients, key factors included wound size (severity). The present study revealed complex mobility patterns among patients, which are affected by numerous personal as well as clinical factors. Depending on the diagnosis and individual preferences, additional efforts/expenses can - among other things - be explained by disease severity. Further studies are required to obtain more conclusive data. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  5. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  6. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  7. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  8. Looking At Display Technologies

    ERIC Educational Resources Information Center

    Bull, Glen; Bull, Gina

    2005-01-01

    A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…

  9. Site Identification by Ligand Competitive Saturation (SILCS) simulations for fragment-based drug design.

    PubMed

    Faller, Christina E; Raman, E Prabhu; MacKerell, Alexander D; Guvench, Olgun

    2015-01-01

    Fragment-based drug design (FBDD) involves screening low molecular weight molecules ("fragments") that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind nonoverlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy.The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is "soaked" in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called "FragMaps" can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine "Grid Free Energies (GFEs)," which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities.
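
    To make the grid-based scoring idea concrete, the sketch below maps each atom of a hypothetical ligand to the nearest voxel of a toy 3D GFE grid and sums the per-atom contributions into an LGFE score. The grid values, spacing, and coordinates are invented for illustration and are not actual SILCS FragMaps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    gfe_grid = rng.normal(0.0, 0.5, size=(20, 20, 20))    # toy per-voxel GFE (kcal/mol)
    origin = np.zeros(3)                                   # grid origin (Angstrom)
    spacing = 1.0                                          # voxel edge length (Angstrom)

    def ligand_grid_free_energy(coords):
        """Sum per-atom GFE contributions for atom coordinates of shape (n_atoms, 3)."""
        idx = np.round((coords - origin) / spacing).astype(int)
        idx = np.clip(idx, 0, np.array(gfe_grid.shape) - 1)   # keep atoms on the grid
        return float(gfe_grid[idx[:, 0], idx[:, 1], idx[:, 2]].sum())

    ligand = rng.uniform(0.0, 19.0, size=(25, 3))          # hypothetical 25-atom ligand
    print(ligand_grid_free_energy(ligand))                 # lower score = better ranking
    ```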

  10. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    PubMed

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  11. Benchmarking Undedicated Cloud Computing Providers for Analysis of Genomic Datasets

    PubMed Central

    Yazar, Seyhan; Gooden, George E. C.; Mackey, David A.; Hewitt, Alex W.

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E.coli and 53.5% (95% CI: 34.4–72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298

  12. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.

  13. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
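
    The abstract notes that RBackprop forms Hessian-vector products without ever building the Hessian. As a generic numerical stand-in (not the paper's algorithm), the sketch below approximates H v with a central finite difference of gradients on a toy quadratic loss.

    ```python
    import numpy as np

    def loss_grad(w):
        """Gradient of a toy quadratic loss 0.5 * w^T A w with symmetric A."""
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        return A @ w

    def hessian_vector_product(grad_fn, w, v, eps=1e-6):
        # H v ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps): two gradient calls,
        # no explicit Hessian, which is the point of R-operator-style techniques.
        return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2.0 * eps)

    w = np.array([0.5, -1.0])
    v = np.array([1.0, 0.0])
    print(hessian_vector_product(loss_grad, w, v))   # ~ A @ v = [3.0, 1.0]
    ```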

  14. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey

    PubMed Central

    Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-01-01

    Objective: The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Methods: Data were from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Results: Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Conclusions: Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit. PMID:20935199

  15. The Advantages of Using Technology in Second Language Education: Technology Integration in Foreign Language Teaching Demonstrates the Shift from a Behavioral to a Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Wang, Li

    2005-01-01

    With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…

  16. Automatic segmentation of relevant structures in DCE MR mammograms

    NASA Astrophysics Data System (ADS)

    Koenig, Matthias; Laue, Hendrik; Boehler, Tobias; Peitgen, Heinz-Otto

    2007-03-01

    The automatic segmentation of relevant structures such as the skin edge, chest wall, or nipple in dynamic contrast-enhanced MR imaging (DCE MRI) of the breast provides additional information for computer-aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of those structures. Lesion positions can be automatically described relative to such reference structures for reporting purposes. Furthermore, this information can support data reduction for computationally expensive preprocessing such as registration, or visualization of only the segments of current interest. In this paper, a novel automatic method for determining the air-breast boundary (skin edge), approximating the chest wall, and locating the nipples is presented. The method consists of several steps that build on one another. Automatic threshold computation yields the air-breast boundary, which is then analyzed to determine the location of the nipple. Finally, the results of both steps are the starting point for approximation of the chest wall. The proposed process was evaluated on a large data set of DCE MRI recorded with T1 sequences and yielded reasonable results in all cases.
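
    The automatic threshold step that separates breast tissue from air can be illustrated with a short scikit-image sketch on a synthetic slice. The paper does not name its thresholding algorithm, so Otsu's method is used here purely as an assumed example, and the nipple and chest-wall steps are omitted.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(0)
    slice_2d = rng.normal(50.0, 5.0, size=(128, 128))    # dark "air" background
    slice_2d[:, 40:] += 150.0                            # brighter "breast" region

    threshold = threshold_otsu(slice_2d)                 # automatic threshold
    breast_mask = slice_2d > threshold                   # its edge approximates the skin line
    print(threshold, int(breast_mask.sum()))
    ```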

  17. Multidisciplinary Shape Optimization of a Composite Blended Wing Body Aircraft

    NASA Astrophysics Data System (ADS)

    Boozer, Charles Maxwell

    A multidisciplinary shape optimization tool coupling aerodynamics, structure, and performance was developed for battery-powered aircraft. Utilizing high-fidelity computational fluid dynamics analysis tools and a structural wing weight tool, coupled under the multidisciplinary feasible optimization architecture, the aircraft geometry is modified to optimize the aircraft's range or endurance. The developed tool is applied to three geometries: a hybrid blended wing body delta-wing UAS, the ONERA M6 wing, and a modified ONERA M6 wing. First, the optimization problem is presented with the objective function, constraints, and design vector. Next, the tool's architecture and the analysis tools that are utilized are described. Finally, various optimizations are described and their results analyzed for all test subjects. Results show that less computationally expensive inviscid optimizations yield positive performance improvements using planform, airfoil, and three-dimensional degrees of freedom. From the results obtained through a series of optimizations, it is concluded that the newly developed tool is effective at improving performance and serves as a platform ready to receive additional performance modules, further improving its computational design support potential.

  18. Deep learning beyond Lefschetz thimbles

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Bedaque, Paulo F.; Lamm, Henry; Lawrence, Scott

    2017-11-01

    The generalized thimble method to treat field theories with sign problems requires repeatedly solving the computationally expensive holomorphic flow equations. We present a machine learning technique to bypass this problem. The central idea is to obtain a few field configurations via the flow equations to train a feed-forward neural network. The trained network defines a new manifold of integration which reduces the sign problem and can be rapidly sampled. We present results for the 1 +1 dimensional Thirring model with Wilson fermions on sizable lattices. In addition to the gain in speed, the parametrization of the integration manifold we use avoids the "trapping" of Monte Carlo chains which plagues large-flow calculations, a considerable shortcoming of the previous attempts.

  19. Effect of solutes on the lattice parameters and elastic stiffness coefficients of body-centered tetragonal Fe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.

    In this study, we compute changes in the lattice parameters and elastic stiffness coefficients C_ij of body-centered tetragonal (bct) Fe due to Al, B, C, Cu, Mn, Si, and N solutes. Solute strain misfit tensors determine the changes in the lattice parameters as well as the strain contributions to the changes in the C_ij. We also compute chemical contributions to the changes in the C_ij, and show that the sum of the strain and chemical contributions agrees with more computationally expensive direct calculations that simultaneously incorporate both contributions. Octahedral interstitial solutes, with C being the most important addition in steels, must be present to stabilize the bct phase over the body-centered cubic phase. We therefore compute the effects of interactions between interstitial C solutes and substitutional solutes on the bct lattice parameters and C_ij for all possible solute configurations in the dilute limit, and thermally average the results to obtain effective changes in properties due to each solute. Finally, the computed data can be used to estimate solute-induced changes in mechanical properties such as strength and ductility, and can be directly incorporated into mesoscale simulations of multiphase steels to model solute effects on the bct martensite phase.

  20. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points at which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
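
    The two-objective selection step described above can be sketched in a few lines: previously evaluated points are ranked by non-dominated sorting on (expensive function value, minimum distance to the other evaluated points), and the resulting front supplies the centers for perturbation. The toy function, point set, and single-front selection are assumptions; the candidate generation, tabu tenure, and parallel evaluation of SOP are not reproduced.

    ```python
    import numpy as np

    def pareto_front(values, distances):
        """Indices of points not dominated in (low function value, high distance)."""
        n = len(values)
        front = []
        for i in range(n):
            dominated = any(
                values[j] <= values[i] and distances[j] >= distances[i]
                and (values[j] < values[i] or distances[j] > distances[i])
                for j in range(n) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    rng = np.random.default_rng(0)
    X = rng.random((30, 2))                        # previously evaluated points
    f = np.sum((X - 0.5) ** 2, axis=1)             # their expensive function values
    dist = np.array([np.min(np.delete(np.linalg.norm(X - x, axis=1), i))
                     for i, x in enumerate(X)])    # distance to the nearest other point

    centers = pareto_front(f, dist)                # candidate centers for perturbation
    print(centers)
    ```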

  2. The Chemical Engineer's Toolbox: A Glass Box Approach to Numerical Problem Solving

    ERIC Educational Resources Information Center

    Coronell, Daniel G.; Hariri, M. Hossein

    2009-01-01

    Computer programming in undergraduate engineering education all too often begins and ends with the freshman programming course. Improvements in computer technology and curriculum revision have improved this situation, but often at the expense of the students' learning due to the use of commercial "black box" software. This paper describes the…

  3. Superintendents' Perceptions of 1:1 Initiative Implementation and Sustainability

    ERIC Educational Resources Information Center

    Cole, Bobby Virgil, Jr.; Sauers, Nicholas J.

    2018-01-01

    One of the fastest growing, most discussed, and most expensive technology initiatives over the last decade has been one-to-one (1:1) computing initiatives. The purpose of this study was to examine key factors that influenced implementing and sustaining 1:1 computing initiatives from the perspective of school superintendents. Nine superintendents…

  4. Data Bases at a State Institution--Costs, Uses and Needs. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.

    The cost-benefit of administrative data at a state college is placed in perspective relative to the institutional involvement in computer use. The costs of computer operations, personnel, and peripheral equipment expenses related to instruction are analyzed. Data bases and systems support institutional activities, such as registration, and aid…

  5. Film Library Information Management System.

    ERIC Educational Resources Information Center

    Minnella, C. Vincent; And Others

    The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…

  6. A new application for food customization with additive manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Serenó, L.; Vallicrosa, G.; Delgado, J.; Ciurana, J.

    2012-04-01

    Additive Manufacturing (AM) technologies have emerged as a freeform approach capable of producing almost any complete three-dimensional (3D) object from computer-aided design (CAD) data by successively adding material layer by layer. Despite the broad range of possibilities, commercial AM technologies remain complex and expensive, making them suitable only for niche applications. The development of the Fab@Home system as an open AM technology opened up a new range of possibilities for processing different materials, such as edible products. The main objective of this work is to analyze and optimize the manufacturing capacity of this system when producing 3D edible objects. A new heated syringe deposition tool was developed and several process parameters were optimized to adapt this technology to consumers' needs. The results of this study show the potential of this system to produce customized edible objects without requiring specialized operator knowledge, thereby saving manufacturing costs compared to traditional technologies.

  7. Good Practices in Free-energy Calculations

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, can markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
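
    One of the practices named above, running a perturbation calculation in both directions and comparing the estimates, is sketched below with the exponential (Zwanzig) formula on synthetic Gaussian energy differences. The temperature, widths, and sample sizes are invented; in this Gaussian toy model the state-1 distribution of U1 - U0 is the state-0 distribution shifted by -sigma^2/kT, which makes the two estimators consistent, and a large gap between them in real data would signal poor overlap between the states.

    ```python
    import numpy as np

    kT = 0.593                                    # kcal/mol at ~298 K
    rng = np.random.default_rng(0)
    mu, sigma = 1.0, 0.8                          # toy parameters for U1 - U0

    dU_fwd = rng.normal(mu, sigma, size=5000)                  # sampled in state 0
    dU_rev = rng.normal(mu - sigma**2 / kT, sigma, size=5000)  # sampled in state 1

    # Forward: dF = -kT ln < exp(-dU/kT) >_0 ;  reverse: dF = +kT ln < exp(+dU/kT) >_1
    dF_forward = -kT * np.log(np.mean(np.exp(-dU_fwd / kT)))
    dF_reverse = +kT * np.log(np.mean(np.exp(+dU_rev / kT)))

    # Both should agree at roughly mu - sigma**2 / (2 * kT) ~ 0.46 kcal/mol here.
    print(f"forward {dF_forward:.3f}   reverse {dF_reverse:.3f}  kcal/mol")
    ```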

  8. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations once thought to require expensive hardware designs and/or complex, special-purpose programming may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose "latency-tolerant" programming framework. One important application developed from the ideas underlying this framework is graph database technology supporting social network pattern matching used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy Laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.

  9. Development of a Pressure Switched Microfluidic Cell Sorter

    NASA Astrophysics Data System (ADS)

    Ozbay, Baris; Jones, Alex; Gibson, Emily

    2009-10-01

    Lab on a chip technology allows for the replacement of traditional cell sorters with microfluidic devices which can be produced less expensively and are more compact. Additionally, the compact nature of microfluidic cell sorters may lead to the realization of their application in point-of-care medical devices. Though techniques have been demonstrated previously for sorting in microfluidic devices with optical or electro-osmotic switching, both of these techniques are expensive and more difficult to implement than pressure switching. This microfluidic cell sorter design also allows for easy integration with optical spectroscopy for identification of cell type. Our current microfluidic device was fabricated with polydimethylsiloxane (PDMS), a polymer that houses the channels, which is then chemically bonded to a glass slide. The flow of fluid through the device is controlled by pressure controllers, and the switching of the cells is accomplished with the use of a high performance pressure controller interfaced with a computer. The cells are fed through the channels with the use of hydrodynamic focusing techniques. Once the experimental setup is fully functional the objective will be to determine switching rates, explore techniques to optimize these rates, and experiment with sorting of other biomolecules including DNA.

  10. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high-accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.
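
    The many-body expansion underlying CEEMBE can be sketched generically: the total energy is assembled from fragment energies and low-order increments, with the expensive electronic-structure call replaced here by a toy energy function (a hedged illustration of the MBE idea only, not the CEEMBE code or its extrapolation step).

```python
import itertools

def mbe_energy(n_fragments, energy_fn, max_order=2):
    """Truncated many-body expansion of a total energy:
        E ~ sum_i E(i) + sum_{i<j} dE(i,j) + sum_{i<j<k} dE(i,j,k) + ...
    where each increment dE removes all lower-order contributions.
    energy_fn stands in for an expensive calculation on a fragment subset."""
    cache = {}

    def energy(subset):
        if subset not in cache:
            cache[subset] = energy_fn(subset)
        return cache[subset]

    def increment(subset):
        value = energy(subset)
        for k in range(1, len(subset)):
            for sub in itertools.combinations(subset, k):
                value -= increment(sub)
        return value

    total = 0.0
    for order in range(1, max_order + 1):
        for combo in itertools.combinations(range(n_fragments), order):
            total += increment(combo)
    return total

# toy check: for a strictly pairwise-additive "energy" the 2-body truncation is exact
pair = {(0, 1): -1.0, (0, 2): -0.5, (1, 2): -0.2}
def toy_energy(subset):
    singles = -2.0 * len(subset)
    pairs = sum(v for k, v in pair.items() if set(k) <= set(subset))
    return singles + pairs

print(mbe_energy(3, toy_energy, max_order=2))   # equals toy_energy((0, 1, 2))
print(toy_energy((0, 1, 2)))
```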

  11. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high-accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly fewer computational resources.

  12. 45 CFR 2507.5 - How does the Corporation process requests for records?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... compelled to create new records or do statistical computations. For example, the Corporation is not required... feasible way to respond to a request. The Corporation is not required to perform any research for the... duplicating all of them. For example, if it requires less time and expense to provide a computer record as a...

  13. 26 CFR 1.179-5 - Time and manner of making election.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...

  14. Innovative Leaders Take the Phone and Run: Profiles of Four Trailblazing Programs

    ERIC Educational Resources Information Center

    Norris, Cathleen; Soloway, Elliot; Menchhofer, Kyle; Bauman, Billie Diane; Dickerson, Mindy; Schad, Lenny; Tomko, Sue

    2010-01-01

    While the Internet changed everything, mobile will change everything squared. The Internet is just a roadway, and computers--the equivalent of cars for the Internet--have been expensive. The keepers of the information roadway--the telecommunication companies--will give one a "computer," such as cell phone, mobile learning device, or MLD,…

  15. 75 FR 25161 - Defense Federal Acquisition Regulation Supplement; Presumption of Development at Private Expense

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... asserted restrictions on technical data and computer software. DATES: Comments on the proposed rule should... restrictions on technical data and computer software. More specifically, the proposed rule affects these...) items (as defined at 41 U.S.C. 431(c)). Since COTS items are a subtype of commercial items, this change...

  16. 17 CFR 240.17a-3 - Records to be made by certain exchange members, brokers and dealers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... records) reflecting all assets and liabilities, income and expense and capital accounts. (3) Ledger..., and a record of the computation of aggregate indebtedness and net capital, as of the trial balance...) thereof shall make a record of the computation of aggregate indebtedness and net capital as of the trial...

  17. Application of Sequence Comparison Methods to Multisensor Data Fusion and Target Recognition

    DTIC Science & Technology

    1993-06-18

    linear comparison). A particularly attractive aspect of the proposed fusion scheme is that it has the potential to work for any object with (1...radar sensing is a historical custom; however, the reader should keep in mind that the fundamental issue in this research is to explore and exploit...reduce the computationally expensive need to compute partial derivatives. In usual practice, the computationally more attractive filter design is

  18. Site Identification by Ligand Competitive Saturation (SILCS) Simulations for Fragment-Based Drug Design

    PubMed Central

    Faller, Christina E.; Raman, E. Prabhu; MacKerell, Alexander D.; Guvench, Olgun

    2015-01-01

    Fragment-based drug design (FBDD) involves screening low molecular weight molecules (“fragments”) that correspond to functional groups found in larger drug-like molecules to determine their binding to target proteins or nucleic acids. Based on the principle of thermodynamic additivity, two fragments that bind non-overlapping nearby sites on the target can be combined to yield a new molecule whose binding free energy is the sum of those of the fragments. Experimental FBDD approaches, like NMR and X-ray crystallography, have proven very useful but can be expensive in terms of time, materials, and labor. Accordingly, a variety of computational FBDD approaches have been developed that provide different levels of detail and accuracy. The Site Identification by Ligand Competitive Saturation (SILCS) method of computational FBDD uses all-atom explicit-solvent molecular dynamics (MD) simulations to identify fragment binding. The target is “soaked” in an aqueous solution with multiple fragments having different identities. The resulting computational competition assay reveals what small molecule types are most likely to bind which regions of the target. From SILCS simulations, 3D probability maps of fragment binding called “FragMaps” can be produced. Based on the probabilities relative to bulk, SILCS FragMaps can be used to determine “Grid Free Energies (GFEs),” which provide per-atom contributions to fragment binding affinities. For essentially no additional computational overhead relative to the production of the FragMaps, GFEs can be used to compute Ligand Grid Free Energies (LGFEs) for arbitrarily complex molecules, and these LGFEs can be used to rank-order the molecules in accordance with binding affinities. PMID:25709034
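
    A minimal sketch of the FragMap-to-GFE conversion and the LGFE sum described above is given below (NumPy only; the Boltzmann inversion GFE = -kT ln(occupancy/bulk) follows the description in the abstract, but the capping value, temperature constant, and map handling are illustrative assumptions, not the SILCS implementation).

```python
import numpy as np

KT = 0.593  # kcal/mol near 300 K (assumed value for this sketch)

def fragmap_to_gfe(counts, bulk_count, cap=3.0):
    """Grid Free Energy map from a fragment-occupancy histogram:
        GFE = -kT * ln(voxel count / expected bulk count),
    capped so that empty or extremely favourable voxels stay finite."""
    with np.errstate(divide="ignore"):
        gfe = -KT * np.log(counts / bulk_count)
    return np.clip(np.nan_to_num(gfe, nan=cap, posinf=cap, neginf=-cap), -cap, cap)

def ligand_gfe(atom_coords, atom_types, gfe_maps, origin, spacing):
    """Ligand Grid Free Energy: sum per-atom GFE values, each looked up in the
    map matching the atom's fragment type at the nearest voxel."""
    total = 0.0
    for xyz, atom_type in zip(atom_coords, atom_types):
        idx = tuple(np.round((np.asarray(xyz) - origin) / spacing).astype(int))
        total += gfe_maps[atom_type][idx]
    return total

# toy usage on random occupancy maps (illustrative only)
rng = np.random.default_rng(0)
maps = {t: fragmap_to_gfe(rng.poisson(5.0, size=(20, 20, 20)), bulk_count=5.0)
        for t in ("apolar", "hbond_donor")}
score = ligand_gfe([(1.0, 2.0, 3.0), (2.5, 2.0, 3.0)], ["apolar", "hbond_donor"],
                   maps, origin=np.zeros(3), spacing=0.5)
print(score)
```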

  19. Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions

    NASA Astrophysics Data System (ADS)

    Lopez-Crespo, Pablo; Pommier, Sylvie

    The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, residual stresses developed as a consequence of the plasticity being constrained around the crack tip have a significant role in both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed in order to model plasticity. However, if millions of cycles need to be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enable us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need for additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loeve algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done so as to distinguish elastic and plastic components. Each of these is further divided into opening-mode and shear-mode parts. The plastic flow direction was determined with the above approach on a centre-cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.
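
    The partitioning of displacement snapshots into intensity factors and fixed spatial fields can be illustrated with a plain singular value decomposition, the discrete form of a Karhunen-Loeve expansion (a sketch on synthetic one-dimensional fields; the elastic/plastic and opening/shear separation used by the authors is not reproduced here).

```python
import numpy as np

def karhunen_loeve(snapshots, n_modes=2):
    """Decompose displacement snapshots U (n_snapshots x n_dof) into spatial
    modes phi_k (fixed fields) and per-snapshot intensity factors a_k:
        U_i(x) ~ mean(x) + sum_k a_k(i) * phi_k(x)
    """
    U = np.asarray(snapshots, dtype=float)
    mean = U.mean(axis=0)
    # SVD of the fluctuation matrix = discrete Karhunen-Loeve expansion
    left, sing, right = np.linalg.svd(U - mean, full_matrices=False)
    modes = right[:n_modes]                              # spatial reference fields
    intensities = left[:, :n_modes] * sing[:n_modes]     # "intensity factors"
    return mean, modes, intensities

# toy usage: snapshots built from two known fields with varying amplitudes
x = np.linspace(0.0, 1.0, 200)
phi_a, phi_b = np.sqrt(x), x * (1 - x)
amps = np.random.default_rng(0).uniform(-1, 1, size=(50, 2))
snaps = amps @ np.vstack([phi_a, phi_b])
mean, modes, intens = karhunen_loeve(snaps, n_modes=2)
print(modes.shape, intens.shape)
```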

  20. 32 CFR 705.7 - Radio and television.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... extra expense involved. A strict accounting of the additional expenses incurred and charged to the production company must be maintained by the designated project officer. A copy of this accounting will be...) Not in competition with the regular employment of professional performers. (2) The public affairs...

  1. 32 CFR 705.7 - Radio and television.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... extra expense involved. A strict accounting of the additional expenses incurred and charged to the production company must be maintained by the designated project officer. A copy of this accounting will be...) Not in competition with the regular employment of professional performers. (2) The public affairs...

  2. 7 CFR 235.6 - Use of funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.6 Use of funds. (a) Funds allocated... of Management and Budget Circular A-87. (c) In addition to State Administrative Expense funds made...

  3. 7 CFR 235.6 - Use of funds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.6 Use of funds. (a) Funds allocated... of Management and Budget Circular A-87. (c) In addition to State Administrative Expense funds made...

  4. 7 CFR 235.6 - Use of funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.6 Use of funds. (a) Funds allocated... of Management and Budget Circular A-87. (c) In addition to State Administrative Expense funds made...

  5. 7 CFR 235.6 - Use of funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.6 Use of funds. (a) Funds allocated... of Management and Budget Circular A-87. (c) In addition to State Administrative Expense funds made...

  6. 7 CFR 235.6 - Use of funds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS STATE ADMINISTRATIVE EXPENSE FUNDS § 235.6 Use of funds. (a) Funds allocated... of Management and Budget Circular A-87. (c) In addition to State Administrative Expense funds made...

  7. Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack

    Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.

  8. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  9. Emissive flat panel displays: A challenge to the AMLCD

    NASA Astrophysics Data System (ADS)

    Walko, R. J.

    According to some sources, flat panel displays (FPDs) for computers will represent a 20-40 billion dollar industry by the end of the decade and could leverage up to 100-200 billion dollars in computer sales. Control of the flat panel display industry could be a significant factor in the global economy if FPDs manage to tap into the enormous audio/visual consumer market. Japan presently leads the world in active matrix liquid crystal display (AMLCD) manufacturing, the current leading FPD technology. The AMLCD is basically a light shutter which does not emit light on its own but modulates the intensity of a separate backlight. However, other technologies, based on light-emitting phosphors, could eventually challenge the AMLCD's lead position. These light-emissive technologies do not have the size, temperature, and viewing-angle limitations of AMLCDs. In addition, they could also be less expensive to manufacture and require a smaller capital outlay for a manufacturing plant. An overview of these alternative technologies is presented.

  10. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  11. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  12. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  13. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  14. Application of Phase-Field Techniques to Hydraulically- and Deformation-Induced Fracture.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culp, David; Miller, Nathan; Schweizer, Laura

    Phase-field techniques provide an alternative approach to fracture problems which mitigates some of the computational expense associated with tracking the crack interface and the coalescence of individual fractures. The technique is extended to apply to hydraulically driven fracture such as would occur during fracking or CO2 sequestration. Additionally, the technique is applied to a stainless steel specimen used in the Sandia Fracture Challenge. It was found that the phase-field model performs very well, at least qualitatively, in both deformation-induced fracture and hydraulically-induced fracture, though spurious hourglassing modes were observed during coupled hydraulically-induced fracture. Future work would include performing additional quantitative benchmark tests and updating the model as needed.

  15. 3D RISM theory with fast reciprocal-space electrostatics.

    PubMed

    Heil, Jochen; Kast, Stefan M

    2015-03-21

    The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodical enhancements allow for both a significant speedup, particularly for large solute systems, and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.

  16. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    PubMed

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
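
    The estimation idea above, predicting a layout's aesthetic metric for a new graph from topologically similar graphs whose layouts have already been computed, can be sketched with a deliberately simple degree-histogram similarity standing in for the paper's graph kernels (illustrative only; assumes the networkx package is available).

```python
import numpy as np
import networkx as nx

def degree_histogram_feature(g, max_degree=20):
    """Very simple topological signature: normalized degree histogram
    (a stand-in for the graph kernels developed in the paper)."""
    hist = np.zeros(max_degree + 1)
    for _, d in g.degree():
        hist[min(d, max_degree)] += 1
    return hist / hist.sum()

def kernel(g1, g2):
    """Similarity between graphs = dot product of their feature vectors."""
    return float(degree_histogram_feature(g1) @ degree_histogram_feature(g2))

def estimate_metric(new_graph, training_graphs, training_metrics, k=3):
    """Kernel-weighted k-nearest-neighbour estimate of an aesthetic metric
    (e.g. edge crossings of a given layout) without computing the layout."""
    sims = np.array([kernel(new_graph, g) for g in training_graphs])
    nearest = np.argsort(sims)[-k:]
    weights = sims[nearest] / sims[nearest].sum()
    return float(weights @ np.asarray(training_metrics)[nearest])

# toy usage: estimate a metric for a cycle graph from path-, star-, and cycle-graph examples
train = [nx.path_graph(30), nx.star_graph(29), nx.cycle_graph(30)]
metrics = [10.0, 50.0, 12.0]          # hypothetical aesthetic-metric values
print(estimate_metric(nx.cycle_graph(40), train, metrics, k=2))
```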

  17. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
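
    A hedged sketch of the moment-matching step described above: PSA samples of the incremental net benefit are rescaled about their mean so that their variance matches a separately estimated variance of the preposterior mean, and the EVSI follows from averaging the resulting decision values (the variance estimate, which in the paper comes from a small number of nested runs, is simply assumed here).

```python
import numpy as np

def evsi_moment_matching(inb_psa, preposterior_var):
    """EVSI by moment matching: rescale PSA samples of the incremental net
    benefit about their mean so that their variance equals the estimated
    variance of the preposterior mean, then compare the expected value of the
    decision made with the future sample against the current decision."""
    inb = np.asarray(inb_psa, dtype=float)
    mean = inb.mean()
    scale = np.sqrt(preposterior_var / inb.var())
    # approximate samples of E[INB | future data]
    preposterior_mean = mean + scale * (inb - mean)
    return np.mean(np.maximum(preposterior_mean, 0.0)) - max(mean, 0.0)

# toy usage: PSA draws of incremental net benefit; the preposterior variance
# would in practice be estimated from a handful of nested simulations
rng = np.random.default_rng(0)
inb = rng.normal(200.0, 1500.0, size=10_000)
print(evsi_moment_matching(inb, preposterior_var=0.6 * inb.var()))
```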

  18. Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.

    PubMed

    Trudgian, David C; Mirzaei, Hamid

    2012-12-07

    We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.

  19. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.

  20. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  1. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
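
    The left/right-threshold filter and its genetic optimization can be sketched as follows (synthetic 50-dimensional data, a deliberately small generic GA, and an arbitrary fitness weighting; not the JPL cluster implementation).

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "soundings": 50 features, label True = retrieval would succeed
X = rng.normal(size=(2000, 50))
y = (X[:, 0] > -0.5) & (X[:, 3] < 1.0)     # hypothetical success condition

def fitness(thresholds):
    """thresholds: (50, 2) array of [left, right] cut-offs per dimension.
    Keep a sounding only if every feature lies inside its interval; reward
    rejecting failures and penalize rejecting successes."""
    keep = np.all((X >= thresholds[:, 0]) & (X <= thresholds[:, 1]), axis=1)
    return np.sum(~keep & ~y) - 5.0 * np.sum(~keep & y)

def evolve(pop_size=40, generations=60):
    pop = np.stack([np.sort(rng.normal(0, 3, size=(50, 2)), axis=1)
                    for _ in range(pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                 # selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0, 0.1, size=children.shape)      # mutation
        children = np.sort(children, axis=2)                               # keep left <= right
        pop = np.concatenate([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = evolve()
kept = np.all((X >= best[:, 0]) & (X <= best[:, 1]), axis=1)
print("fraction kept:", kept.mean(), "fraction of kept that succeed:", y[kept].mean())
```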

  2. Committee-Based Active Learning for Surrogate-Assisted Particle Swarm Optimization of Expensive Problems.

    PubMed

    Wang, Handing; Jin, Yaochu; Doherty, John

    2017-09-01

    Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired by committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired by CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on both benchmark problems with up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
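
    The committee-based model management loop, querying both the ensemble's best-predicted point and its most disputed point with the expensive function, can be sketched compactly; here a random candidate search and bootstrap quadratic surrogates stand in for the paper's PSO search and surrogate ensemble (illustrative assumptions throughout).

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive(x):                      # stand-in for a costly simulation
    return float(np.sum((x - 0.3) ** 2))

def fit_ensemble(X, y, n_members=5):
    """Committee of simple quadratic surrogates, each fit to a bootstrap sample."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(len(X), size=len(X))
        A = np.hstack([X[idx] ** 2, X[idx], np.ones((len(idx), 1))])
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        members.append(coef)
    return members

def predict(members, X):
    A = np.hstack([X ** 2, X, np.ones((len(X), 1))])
    preds = np.stack([A @ c for c in members])
    return preds.mean(axis=0), preds.std(axis=0)

dim, budget = 2, 30
X = rng.uniform(-1, 1, size=(8, dim))            # initial expensive evaluations
y = np.array([expensive(x) for x in X])
while len(X) < budget:
    members = fit_ensemble(X, y)
    cand = rng.uniform(-1, 1, size=(500, dim))   # random search stands in for PSO
    mean, std = predict(members, cand)
    for pick in (cand[np.argmin(mean)], cand[np.argmax(std)]):   # best + most uncertain
        X = np.vstack([X, pick])
        y = np.append(y, expensive(pick))
print(X[np.argmin(y)], y.min())
```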

  3. Computing LORAN time differences with an HP-25 hand calculator

    NASA Technical Reports Server (NTRS)

    Jones, E. D.

    1978-01-01

    A program for an HP-25 or HP-25C hand calculator that will calculate accurate LORAN-C time differences is described and presented. The program is most useful when checking the accuracy of a LORAN-C receiver at a known latitude and longitude without the aid of an expensive computer. It can thus be used to compute time differences for known landmarks or waypoints to predict in advance the approximate readings during a navigation mission.

  4. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area, and power consumption. Employing special SRAM cells and error-correcting codes is often too expensive in relation to the performance needed. We discuss the possibility of finding optimal solutions for a modern onboard computer for scientific apparatus, focusing on multi-level cache memory design.

  5. Current And Future Directions Of Lens Design Software

    NASA Astrophysics Data System (ADS)

    Gustafson, Darryl E.

    1983-10-01

    The most effective environment for doing lens design continues to evolve as new computer hardware and software tools become available. Important recent hardware developments include: Low-cost but powerful interactive multi-user 32-bit computers with virtual memory that are totally software-compatible with prior larger and more expensive members of the family. A rapidly growing variety of graphics devices for both hard-copy and screen graphics, including many with color capability. In addition, with optical design software readily accessible in many forms, optical design has become a part-time activity for a large number of engineers instead of being restricted to a small number of full-time specialists. A designer interface that is friendly for the part-time user while remaining efficient for the full-time designer is thus becoming more important as well as more practical. Along with these developments, software tools in other scientific and engineering disciplines are proliferating. Thus, the optical designer is less and less unique in his use of computer-aided techniques and faces the challenge and opportunity of efficiently communicating his designs to other computer-aided-design (CAD), computer-aided-manufacturing (CAM), structural, thermal, and mechanical software tools. This paper will address the impact of these developments on the current and future directions of the CODE V™ optical design software package, its implementation, and the resulting lens design environment.

  6. Sustaining Moore's law with 3D chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  7. Sustaining Moore's law with 3D chips

    DOE PAGES

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An; ...

    2017-08-01

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  8. 46 CFR 404.5 - Guidelines for the recognition of expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to the extent that they conform to depreciation plus an allowance for return on investment (computed... ratemaking purposes. The Director reviews non-pilotage activities to determine if any adversely impact the...

  9. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  10. Towards Wearable Cognitive Assistance

    DTIC Science & Technology

    2013-12-01

    Keywords: mobile computing, cloud...It presents a multi-tiered mobile system architecture that offers tight end-to-end latency bounds on compute-intensive cognitive assistance...to an entire neighborhood or an entire city is extremely expensive and time-consuming. Physical infrastructure in public spaces tends to evolve very

  11. Behavior-Based Fault Monitoring

    DTIC Science & Technology

    1990-12-03

    processor targeted for avionics and space applications. It appears that the signature monitoring technique can be extended to detect computer viruses as...most common approach is structural duplication. Although effective, duplication is too expensive for all but a few applications. Redundancy can also be..."Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989. 7. K.D. Wilken and J.P. Shen

  12. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence (AI) methods in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  13. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage-tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the predicted and simulated B-scans for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ˜15%, with an estimated reduction in computational runtime of ˜95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
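
    A minimal Gaussian-process regression sketch of the surrogate idea: map delamination parameters to a single chirplet coefficient with an RBF-kernel GP (NumPy only; the parameterization, kernel settings, and data values are invented for illustration and are not the authors' model).

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of parameter vectors."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_new, noise=1e-6, **kw):
    """Posterior mean and variance of a GP surrogate at new parameter values."""
    K = rbf_kernel(X_train, X_train, **kw) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_new, X_train, **kw)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf_kernel(X_new, X_new, **kw).diagonal() \
          - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# toy usage: surrogate for one chirplet amplitude as a function of
# (delamination depth, lateral position); values are purely illustrative
X = np.array([[0.5, 0.0], [1.0, 0.0], [1.5, 0.5], [2.0, 1.0]])
y = np.array([0.9, 0.7, 0.5, 0.3])
mu, var = gp_predict(X, y, np.array([[1.2, 0.3]]), length_scale=0.8)
print(mu, var)
```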

  14. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
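
    One common form of such a merit function scores a candidate by its surrogate prediction minus a multiple of its distance to the nearest already-evaluated point, so that a small weight exploits the current approximation while a larger weight pushes new evaluations into unexplored regions and thereby improves the approximation itself (a hedged sketch of that idea, not necessarily the exact merit functions used in the paper).

```python
import numpy as np

def merit(x, surrogate, evaluated_points, rho):
    """Merit = surrogate prediction minus a reward for distance from existing
    data: rho = 0 purely exploits the approximation, larger rho favours points
    that will most improve the approximation when evaluated."""
    dist = min(np.linalg.norm(np.asarray(x) - np.asarray(p)) for p in evaluated_points)
    return surrogate(x) - rho * dist

# toy usage with a 1-D quadratic surrogate of some expensive objective
surrogate = lambda x: (x[0] - 0.3) ** 2
evaluated = [np.array([0.0]), np.array([1.0])]
candidates = [np.array([v]) for v in np.linspace(0.0, 1.0, 101)]
for rho in (0.0, 0.5):
    best = min(candidates, key=lambda c: merit(c, surrogate, evaluated, rho))
    print(rho, best)   # rho=0 picks ~0.3; rho=0.5 drifts toward the gap between samples
```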

  15. 48 CFR 1433.104 - Protests to GAO.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... report as required by FAR 33.104(a)(3). (2) The SOL will furnish promptly GAO's written notice of the... expense or difficulty in performance. If appropriate, the report shall contain a statement regarding any... difficulties or additional expense to the Government. The contracting activity shall submit the CO's report to...

  16. 48 CFR 1433.104 - Protests to GAO.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... report as required by FAR 33.104(a)(3). (2) The SOL will furnish promptly GAO's written notice of the... expense or difficulty in performance. If appropriate, the report shall contain a statement regarding any... difficulties or additional expense to the Government. The contracting activity shall submit the CO's report to...

  17. 48 CFR 1433.104 - Protests to GAO.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... report as required by FAR 33.104(a)(3). (2) The SOL will furnish promptly GAO's written notice of the... expense or difficulty in performance. If appropriate, the report shall contain a statement regarding any... difficulties or additional expense to the Government. The contracting activity shall submit the CO's report to...

  18. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 3 2012-04-01 2012-04-01 false Medical, dental, etc., expenses. 1.213-1 Section... (CONTINUED) INCOME TAXES (CONTINUED) Additional Itemized Deductions for Individuals § 1.213-1 Medical, dental... (including nurses' board where paid by the taxpayer), medical, laboratory, surgical, dental and other...

  19. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 3 2014-04-01 2014-04-01 false Medical, dental, etc., expenses. 1.213-1 Section... (CONTINUED) INCOME TAXES (CONTINUED) Additional Itemized Deductions for Individuals § 1.213-1 Medical, dental... (including nurses' board where paid by the taxpayer), medical, laboratory, surgical, dental and other...

  20. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 3 2013-04-01 2013-04-01 false Medical, dental, etc., expenses. 1.213-1 Section... (CONTINUED) INCOME TAXES (CONTINUED) Additional Itemized Deductions for Individuals § 1.213-1 Medical, dental... (including nurses' board where paid by the taxpayer), medical, laboratory, surgical, dental and other...

  1. 18 CFR 35.24 - Tax normalization for public utilities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the Natural Gas Act. (7) Tax effect means the tax reduction or addition associated with a specific... purposes; (v) Differences that arise from recognition of research, development, and demonstration... purchased gas costs as a current expense for tax purposes but as a deferred expense for book purposes. (See...

  2. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services being introduced will gradually replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper attempts to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  3. (Re)engineering Earth System Models to Expose Greater Concurrency for Ultrascale Computing: Practice, Experience, and Musings

    NASA Astrophysics Data System (ADS)

    Mills, R. T.

    2014-12-01

    As the high performance computing (HPC) community pushes towards the exascale horizon, the importance and prevalence of fine-grained parallelism in new computer architectures is increasing. This is perhaps most apparent in the proliferation of so-called "accelerators" such as the Intel Xeon Phi or NVIDIA GPGPUs, but the trend also holds for CPUs, where serial performance has grown slowly and effective use of hardware threads and vector units are becoming increasingly important to realizing high performance. This has significant implications for weather, climate, and Earth system modeling codes, many of which display impressive scalability across MPI ranks but take relatively little advantage of threading and vector processing. In addition to increasing parallelism, next generation codes will also need to address increasingly deep hierarchies for data movement: NUMA/cache levels, on node vs. off node, local vs. wide neighborhoods on the interconnect, and even in the I/O system. We will discuss some approaches (grounded in experiences with the Intel Xeon Phi architecture) for restructuring Earth science codes to maximize concurrency across multiple levels (vectors, threads, MPI ranks), and also discuss some novel approaches for minimizing expensive data movement/communication.

  4. Can price controls reduce pharmaceutical expenses? A case study of antibacterial expenditures in 12 Chinese hospitals from 1996 to 2005.

    PubMed

    Han, Sheng; Liang, Huigang; Su, Weiping; Xue, Yajiong; Shi, Luwen

    2013-01-01

    The objective of this article is to investigate whether the Chinese government's pricing policies have reduced pharmaceutical expenses. The purchasing records for systemic antibacterial drugs of 12 hospitals in Beijing from 1996 to 2005 were analyzed by separating the expenditure growth into three components: the price change, the volume change, and the structure change. Our results reveal that the structure change is the dominant determinant of drug expenditure growth. Despite lowered prices, antibacterial drug expenditure rose because more expensive drugs in the same therapeutic category were prescribed. It is insufficient to rely only on pricing policies to reduce drug expenses, given that physicians can circumvent the policy by prescribing more expensive drugs. In addition, physician behaviors need to be regulated to eliminate unnecessary overprescribing.

  5. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  6. Reliable but Timesaving: In Search of an Efficient Quantum-chemical Method for the Description of Functional Fullerenes.

    PubMed

    Reis, H; Rasulev, B; Papadopoulos, M G; Leszczynski, J

    2015-01-01

    Fullerene and its derivatives are currently among the most intensively investigated species in the areas of nanomedicine and nanochemistry. Various unique properties of fullerenes are responsible for their wide range of applications in industry, biology and medicine. A large pool of functionalized C60 and C70 fullerenes is investigated theoretically at different levels of quantum-mechanical theory. The semiempirical PM6 method, density functional theory with the B3LYP functional, and the correlated ab initio MP2 method are employed to compute the optimized structures and an array of properties for the considered species. In addition to the calculations for isolated molecules, the results of solution calculations are also reported at the DFT level, using the polarizable continuum model (PCM). Ionization potentials (IPs) and electron affinities (EAs) are computed by means of Koopmans' theorem as well as with the more accurate but computationally expensive ΔSCF method. Both procedures yield comparable values, while comparison of IPs and EAs computed with different quantum-mechanical methods shows surprisingly large differences. Harmonic vibrational frequencies are computed at the PM6 and B3LYP levels of theory and compared with each other. A possible application of the frequencies as 3D descriptors in the EVA (EigenVAlues) method is shown. All the computed data are made available, and may be used to replace experimental data in routine applications where large amounts of data are required, e.g. in structure-activity relationship studies of the toxicity of fullerene derivatives.

  7. Extension of market exclusivity and its impact on the accessibility to essential medicines, and drug expense in Thailand: analysis of the effect of TRIPs-Plus proposal.

    PubMed

    Akaleephan, Chutima; Wibulpolprasert, Suwit; Sakulbumrungsil, Rungpetch; Luangruangrong, Paithip; Jitraknathee, Anchalee; Aeksaengsri, Achara; Udomaksorn, Siripa; Tangcharoensathien, Viroj; Tantivess, Sripen

    2009-07-01

    In the free trade agreement (FTA) negotiations between Thailand and the US, 'TRIPs-Plus' is one of the US proposals; it would result in an extension of market exclusivity for innovative drugs. In addition, it would foreseeably lead to high and unaffordable medicine prices and inaccessibility to essential medicines. The objective was to quantify the impact on medicine expense and medicine accessibility. Using 2000 to 2003 data from the Thai Food and Drug Administration (FDA) and the Drug & Medical Supply Information Center (DMSIC), costs and accessibility were estimated from the prices and quantities of innovative drugs and their generics, together with parameters describing their competitive behaviour. Thereafter, we simulated the 10-year potential additional expense based on the 2003 unit prices of the patented and monopolized non-patented medicines. In 2003, the availability of generics helped to save 104.5% of actual expense, and accessibility would increase by 53.6%. With an extension of market exclusivity, given that there were 60 new items approved annually, the cumulative potential expense was projected to be $US 6.2 million in the first year, rising to $US 5215.8 million in the tenth year. The TRIPs-Plus proposal would result in a significant increase in medicine expense and a delay in the increase in drug accessibility via generics. Several options as well as other related mechanisms to help reduce the negative impact are proposed.

  8. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads

    PubMed Central

    Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.

    2010-01-01

    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
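
    The non-uniformly distributed nodal points mentioned above can be generated directly: the Gauss-Lobatto-Legendre nodes of a given polynomial order are the interval endpoints plus the roots of the derivative of the corresponding Legendre polynomial (a small NumPy sketch; node computation only, no element assembly).

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order):
    """Gauss-Lobatto-Legendre nodes on [-1, 1] for a spectral element of the
    given polynomial order: the endpoints plus the roots of P_order'(x)."""
    coeffs = np.zeros(order + 1)
    coeffs[-1] = 1.0                       # Legendre polynomial P_order in the Legendre basis
    interior = legendre.legroots(legendre.legder(coeffs))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gll_nodes(4))   # 5 nodes; interior points cluster toward the element edges
```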

  9. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  10. Switching from computer to microcomputer architecture education

    NASA Astrophysics Data System (ADS)

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-03-01

    In the last decades, the technological and scientific evolution of the computing discipline has been widely affecting research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to microcomputer architecture. The authors present their strategies towards a successful crossing of boundaries between engineering disciplines. This communication aims at providing a different aspect on professional courses that are, nowadays, addressed at the expense of traditional courses.

  11. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive.
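    The following sketch illustrates only the general pattern described above, distributing many expensive model evaluations over worker processes and keeping the best-fitting parameter set; it is not BioNetFit's actual interface or algorithm, and the cost function and synthetic data are placeholders for a real BioNetGen/NFsim simulation.

    ```python
    # Sketch of distributed fitting: evaluate many candidate parameter sets in
    # parallel and keep the one with the lowest sum-of-squares error. The model
    # below is a cheap stand-in for an expensive rule-based simulation.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    T_OBS = np.linspace(0.0, 10.0, 50)
    Y_OBS = 2.0 * np.exp(-0.5 * T_OBS)           # synthetic "data"

    def cost(params):
        k0, k1 = params
        y_sim = k0 * np.exp(-k1 * T_OBS)         # placeholder for a real simulation
        return float(np.sum((y_sim - Y_OBS) ** 2)), params

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        candidates = rng.uniform([0.1, 0.01], [5.0, 2.0], size=(64, 2))
        with ProcessPoolExecutor() as pool:      # distribute the evaluations
            results = list(pool.map(cost, candidates))
        best_err, best_params = min(results, key=lambda r: r[0])
        print("best parameters:", best_params, "error:", best_err)
    ```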

  12. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In this paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrix calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally, we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
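    A minimal sketch of the damped-normal-equation step is given below, assuming a dense Jacobian and using conjugate gradients as the Krylov solver; the subspace recycling across damping parameters, which is the paper's main computational contribution, is not reproduced here. The function name lm_trial_steps is our own.

    ```python
    # Sketch: one Levenberg-Marquardt step in which the damped normal equations
    # (J^T J + lambda I) dx = J^T r are solved with conjugate gradients for a
    # list of damping parameters.
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    def lm_trial_steps(J, r, dampings):
        """Candidate LM updates dx solving (J^T J + lam I) dx = J^T r for each lam."""
        n = J.shape[1]
        JtJ = J.T @ J
        Jtr = J.T @ r
        steps = []
        for lam in dampings:
            A = LinearOperator((n, n), matvec=lambda v, lam=lam: JtJ @ v + lam * v)
            dx, info = cg(A, Jtr, maxiter=500)   # Krylov (CG) solve of the damped system
            steps.append(dx)
        return steps

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        J = rng.standard_normal((200, 20))       # Jacobian of the residuals
        r = rng.standard_normal(200)             # current residual vector
        for lam, dx in zip([1e-2, 1e-1, 1.0], lm_trial_steps(J, r, [1e-2, 1e-1, 1.0])):
            print(f"lambda={lam:g}  |dx|={np.linalg.norm(dx):.4f}")
    ```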

  14. Virtual planning for craniomaxillofacial surgery--7 years of experience.

    PubMed

    Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo

    2014-07-01

    Contemporary computer-assisted surgery systems increasingly allow for virtual simulation of even complex surgical procedures with ever more realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions with surgical results. Since 2006, virtual simulation has been performed in selected patients affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical results, and soft-tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved the planning and transfer of craniomaxillofacial corrections. The additional work and expense may be justified by the increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. 3-D and quasi-2-D discrete element modeling of grain commingling in a bucket elevator boot system

    USDA-ARS?s Scientific Manuscript database

    Unwanted grain commingling impedes new quality-based grain handling systems and has proven to be an expensive and time consuming issue to study experimentally. Experimentally validated models may reduce the time and expense of studying grain commingling while providing additional insight into detail...

  16. Boeing’s Integrated Defense Systems Restructuring: Significant and Preventable Cost Impacts to Army Aviation Programs

    DTIC Science & Technology

    2005-03-18

    IDS, the treatment and handling of Boeing World Headquarters (BWHQ) costs, common or shared systems costs, Shared Services Group costs, fringe...these expenses. One such example is the addition of the Shared Services Group (SSG) expense to the Mesa and Philadelphia accounting ledgers. Under

  17. 24 CFR 990.190 - Other formula expenses (add-ons).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... operating subsidy is determined to be zero based on the formula is still eligible to receive operating... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's... receive an amount for PILOT in accordance with section 6(d) of the 1937 Act, based on its cooperation...

  18. 24 CFR 990.190 - Other formula expenses (add-ons).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... operating subsidy is determined to be zero based on the formula is still eligible to receive operating... formula expenses (add-ons). In addition to calculating operating subsidy based on the PEL and UEL, a PHA's... receive an amount for PILOT in accordance with section 6(d) of the 1937 Act, based on its cooperation...

  19. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  20. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
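    A rough sketch of the parallelization idea is shown below: only the angle loop of an unfiltered parallel-beam backprojection is split across CPU cores. It omits the filtering step and the geometric-symmetry, SIMD, and compiler optimizations that the paper combines, and all array sizes and data are arbitrary placeholders.

    ```python
    # Sketch: parallel-beam backprojection with the angle loop split across
    # CPU cores. Filtering of the sinogram (the "F" in FBP) is omitted.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    N = 128                                    # reconstruction grid size
    ANGLES = np.linspace(0.0, np.pi, 180, endpoint=False)
    xs = np.linspace(-1.0, 1.0, N)
    XX, YY = np.meshgrid(xs, xs)

    def backproject_chunk(args):
        sino_chunk, angle_chunk = args
        det = np.linspace(-1.0, 1.0, sino_chunk.shape[1])
        img = np.zeros((N, N))
        for row, theta in zip(sino_chunk, angle_chunk):
            t = XX * np.cos(theta) + YY * np.sin(theta)   # detector coordinate per pixel
            img += np.interp(t, det, row)                 # smear the projection back
        return img

    if __name__ == "__main__":
        sinogram = np.random.rand(len(ANGLES), N)         # placeholder projection data
        chunks = [(sinogram[i::4], ANGLES[i::4]) for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            image = sum(pool.map(backproject_chunk, chunks))
        print(image.shape)
    ```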

  1. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731

  2. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  3. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  4. SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...

  5. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  6. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  7. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into the active and secondary space. The first-order wave function and the second- and third-order energies are easily computed by using one step Davidson iteration. Our formulation has several advantages including (i) keeping a balance between the efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small, when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.

  8. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
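    The sketch below illustrates the multi-chain idea on a toy problem: several independent random-walk Metropolis chains are run from dispersed starting points and monitored with the Gelman-Rubin statistic. The expensive forward model of the abstract is replaced by a cheap analytic log-density, and all tuning values are arbitrary.

    ```python
    # Sketch: random-walk Metropolis chains on a toy 1-D Gaussian target, with
    # the Gelman-Rubin R-hat statistic used to judge ensemble convergence.
    import numpy as np

    def log_target(x):
        return -0.5 * (x - 3.0) ** 2            # unnormalized N(3, 1)

    def run_chain(n_steps, x0, step=0.8, seed=0):
        rng = np.random.default_rng(seed)
        x, samples = x0, []
        for _ in range(n_steps):
            prop = x + step * rng.standard_normal()
            if np.log(rng.random()) < log_target(prop) - log_target(x):
                x = prop                         # accept the proposal
            samples.append(x)
        return np.array(samples)

    def gelman_rubin(chains):
        """Potential scale reduction factor for equal-length chains."""
        m, n = len(chains), len(chains[0])
        means = np.array([c.mean() for c in chains])
        variances = np.array([c.var(ddof=1) for c in chains])
        B = n * means.var(ddof=1)                # between-chain variance
        W = variances.mean()                     # within-chain variance
        var_hat = (n - 1) / n * W + B / n
        return np.sqrt(var_hat / W)

    if __name__ == "__main__":
        starts = [-10.0, 0.0, 10.0, 20.0]
        chains = [run_chain(5000, x0, seed=i) for i, x0 in enumerate(starts)]
        print("R-hat:", gelman_rubin([c[1000:] for c in chains]))  # discard burn-in
    ```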

  9. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
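    As a toy illustration of the surrogate-assisted pattern (not the paper's method), the sketch below pre-screens candidates with a cheap nearest-neighbour surrogate built on numeric characterizations and sends only the most promising ones to the expensive evaluation; the phenotypic characterization of GP trees used in the paper is replaced here by a plain feature vector.

    ```python
    # Sketch of surrogate-assisted pre-screening: a 1-nearest-neighbour surrogate,
    # built from already-evaluated individuals, predicts fitness cheaply; only the
    # top-ranked candidates receive the expensive evaluation.
    import numpy as np

    def expensive_fitness(features):
        return float(np.sum((features - 0.5) ** 2))   # placeholder for a simulation

    class NearestNeighbourSurrogate:
        def __init__(self):
            self.X, self.y = [], []

        def add(self, features, fitness):
            self.X.append(features)
            self.y.append(fitness)

        def predict(self, features):
            d = np.linalg.norm(np.array(self.X) - features, axis=1)
            return self.y[int(np.argmin(d))]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        surrogate = NearestNeighbourSurrogate()
        for _ in range(20):                                # archive of evaluated points
            f = rng.random(5)
            surrogate.add(f, expensive_fitness(f))
        candidates = rng.random((100, 5))                  # new candidate characterizations
        ranked = sorted(candidates, key=surrogate.predict) # cheap pre-screening
        best = min(ranked[:10], key=expensive_fitness)     # expensive evals for top 10 only
        print("selected candidate:", best)
    ```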

  10. Medicine expenses and obesity in Brazil: an analysis based on the household budget survey.

    PubMed

    Canella, Daniela S; Novaes, Hillegonda M D; Levy, Renata B

    2016-01-20

    Obesity can be considered a global public health problem that affects virtually all countries worldwide and results in greater use of healthcare services and higher healthcare costs. We aimed to describe average monthly household medicine expenses according to source of funding, public or private, and to estimate the influence of the presence of obese residents in households on total medicine expenses. This study was based on data from the 2008-2009 Brazilian Household Budget Survey, with a representative population sample of 55,970 households as study units. Information on nutritional status and on the medicines acquired and their cost in the past 30 days was analyzed. A two-part model was employed to assess the influence of obesity on medicine expenses, with monthly household medicine expenses per capita as the outcome, the presence of obese residents in the household as the explanatory variable, and adjustment for confounding variables. Out-of-pocket expenses on medicines were always higher than the cost of medicines obtained through the public sector, and 32% of households had at least one obese resident. Monthly household expenses on medicines per capita in households with obese residents were US$ 20.40, 16% higher than in households without obese residents. An adjusted model confirmed that the presence of obese residents in the household increased medicine expenses. Obesity is associated with additional medicine expenses, increasing the negative impact on household budgets and public expenditure.

  11. How to create a very-low-cost, very-low-power, credit-card-sized and real-time-ready datalogger

    NASA Astrophysics Data System (ADS)

    Bès de Berc, M.; Grunberg, M.; Engels, F.

    2015-03-01

    In order to improve an existing network, a field seismologist may have to add extra sensors to a remote station. However, additional ADCs (analogue-to-digital converters) are not always implemented on commercial dataloggers, or, if they are, they may already be in use. Installing additional ADCs often implies an expensive development or the purchase of a new datalogger. We present here a simple method to take advantage of the ADCs of an embedded computer in order to create data in a seismological standard format and integrate them into the real-time data stream from the station. Our first goal is to plug temperature and pressure sensors into the ADCs, read the data, record them in mini-seed format (SEED stands for Standard for the Exchange of Earthquake Data), and eventually transfer them to a central server together with the seismic data using seedlink, since mini-seed and seedlink are standards in seismology.
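    A possible realization of the recording step, assuming the ObsPy library is available, is sketched below: a block of ADC samples is wrapped in a Trace and written as miniSEED. The function read_adc_block and the network/station/channel codes are illustrative placeholders, not the authors' actual configuration.

    ```python
    # Sketch: package one block of ADC samples (here simulated) as an ObsPy Trace
    # and write it out as miniSEED for transfer alongside the seismic data.
    import numpy as np
    from obspy import Trace, Stream, UTCDateTime

    def read_adc_block(n_samples):
        # Placeholder for the embedded computer's ADC read (e.g., a temperature sensor).
        return np.random.randint(-2048, 2048, n_samples).astype(np.int32)

    if __name__ == "__main__":
        sps = 1.0                                  # 1 sample per second
        data = read_adc_block(3600)                # one hour of samples
        tr = Trace(data=data, header={
            "network": "XX", "station": "TEST", "location": "", "channel": "LKO",
            "sampling_rate": sps, "starttime": UTCDateTime(),
        })
        Stream([tr]).write("temperature.mseed", format="MSEED")
    ```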

  12. A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure

    NASA Astrophysics Data System (ADS)

    Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.

    2016-08-01

    Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures and establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, with one signal assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse, it is possible to accurately compute the phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is established simultaneously for a set of 12 increments by a fourth-degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
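    The sketch below shows the two-signal (2-D) version of the idea under simplifying assumptions: a general conic is fitted to the Lissajous figure traced by two phase-shifted sinusoids, and the phase shift is recovered from the conic coefficients. The N-dimensional construction and the fourth-degree voltage-to-phase polynomial of the paper are not reproduced; the function name phase_from_lissajous is our own.

    ```python
    # Sketch: recover the phase shift between two sinusoidal signals from the
    # ellipse (2-D Lissajous figure) they trace, via a linear least-squares conic
    # fit. Offsets and amplitudes do not need to be known in advance.
    import numpy as np

    def phase_from_lissajous(x, y):
        """Return |phase shift| (rad) between x(t) and y(t) from their ellipse."""
        # Fit c1*x^2 + c2*x*y + c3*y^2 + c4*x + c5*y = 1 in the least-squares sense.
        M = np.column_stack([x * x, x * y, y * y, x, y])
        c, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
        # For x = A cos(phi), y = B cos(phi + delta): cos(delta) = -c2 / (2 sqrt(c1 c3)).
        cos_delta = -c[1] * np.sign(c[0]) / (2.0 * np.sqrt(c[0] * c[2]))
        return np.arccos(np.clip(cos_delta, -1.0, 1.0))

    if __name__ == "__main__":
        phi = np.linspace(0.0, 2.0 * np.pi, 400)
        x = 0.3 + 1.7 * np.cos(phi)
        y = -0.1 + 0.9 * np.cos(phi + np.deg2rad(72.0))
        print(np.rad2deg(phase_from_lissajous(x, y)))   # ~72 degrees
    ```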

  13. Temperature scaling method for Markov chains.

    PubMed

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.

  14. Blind detection of isolated astrophysical pulses in the spatial Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Schmid, Natalia A.; Prestage, Richard M.

    2018-07-01

    We present a novel approach for the detection of isolated transients in pulsar surveys and fast radio transient observations. Rather than the conventional approach of performing a computationally expensive blind dispersion measure search, we take the spatial Fourier transform (SFT) of short (˜ few seconds) sections of data. A transient will have a characteristic signature in the SFT domain, and we present a blind statistic which may be used to detect this signature at an empirical zero false alarm rate. The method has been evaluated using simulations, and also applied to two fast radio burst observations. In addition to its use for current observations, we expect this method will be extremely beneficial for future multibeam observations made by telescopes equipped with phased array feeds.

  15. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepares this method for additional research. The potential for this method in more complicated flows is described.

  16. Blind detection of isolated astrophysical pulses in the spatial Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Schmid, Natalia A.; Prestage, Richard M.

    2018-04-01

    We present a novel approach for the detection of isolated transients in pulsar surveys and fast radio transient observations. Rather than the conventional approach of performing a computationally expensive blind DM search, we take the spatial Fourier transform (SFT) of short (˜ few seconds) sections of data. A transient will have a characteristic signature in the SFT domain, and we present a blind statistic which may be used to detect this signature at an empirical zero False Alarm Rate (FAR). The method has been evaluated using simulations, and also applied to two fast radio burst observations. In addition to its use for current observations, we expect this method will be extremely beneficial for future multi-beam observations made by telescopes equipped with phased array feeds.

  17. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, based on a modified Adomian decomposition method (ADM) discretized by the finite difference method, is proposed. To our knowledge, few studies of fish schools have been documented, owing to the expense of numerical computation and the tedium of three-dimensional data analysis. Here, we propose a simple model relying on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for the method is presented.

  18. Multitarget drug discovery projects in CNS diseases: quantitative systems pharmacology as a possible path forward.

    PubMed

    Geerts, Hugo; Kennis, Ludo

    2014-01-01

    Clinical development in brain diseases has one of the lowest success rates in the pharmaceutical industry, and many promising rationally designed single-target R&D projects fail in expensive Phase III trials. By contrast, successful older CNS drugs do have a rich pharmacology. This article provides arguments suggesting that highly selective single-target drugs are not sufficiently powerful to restore complex neuronal circuit homeostasis. A rationally designed multitarget project can be derisked by dialing in an additional symptomatic treatment effect on top of a disease-modification target. Alternatively, we expand upon a hypothetical workflow example using a humanized computer-based quantitative systems pharmacology platform. The hope is that incorporating rational multitarget pharmacology into drug discovery could lead to more impactful polypharmacy drugs.

  19. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming and therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given for a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
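    For a concrete flavour of a space-filling design, the sketch below builds a simple maximin Latin hypercube by picking the best of many random candidates; note that the maximum projection design emphasized in the review uses a different criterion, which is not implemented here.

    ```python
    # Sketch: a simple space-filling design. From many random Latin hypercube
    # samples, keep the one that maximizes the minimum pairwise distance
    # (a "maximin" Latin hypercube design).
    import numpy as np
    from scipy.spatial.distance import pdist

    def random_lhd(n, d, rng):
        """One random Latin hypercube design with n points in [0, 1]^d."""
        perm = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (perm + rng.random((n, d))) / n

    def maximin_lhd(n, d, n_candidates=200, seed=0):
        rng = np.random.default_rng(seed)
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            cand = random_lhd(n, d, rng)
            score = pdist(cand).min()          # smallest inter-point distance
            if score > best_score:
                best, best_score = cand, score
        return best

    if __name__ == "__main__":
        design = maximin_lhd(n=20, d=3)
        print(design.shape, pdist(design).min())
    ```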

  20. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than the real physical testing. However, simulation using computationally intensive computer models can be time consuming and therefore, directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used for overcoming this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, a special emphasis is given for a recently developed space-filling design called maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  1. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general-purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in the financial-market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
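    The sketch below shows the checkerboard Metropolis update commonly used when porting the 2-D Ising model to data-parallel hardware, vectorized here with NumPy on the CPU only; it illustrates the update pattern, not the article's GPU code.

    ```python
    # Sketch: one checkerboard Metropolis sweep of the 2-D Ising model. Each
    # sub-lattice (color) can be updated simultaneously because its neighbours
    # all belong to the other color, which is what makes the scheme GPU-friendly.
    import numpy as np

    def checkerboard_sweep(spins, beta, rng):
        L = spins.shape[0]
        ii, jj = np.indices((L, L))
        for color in (0, 1):                             # two interleaved sub-lattices
            mask = (ii + jj) % 2 == color
            nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                   np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * spins * nbr                       # energy change if flipped (J = 1)
            flip = (rng.random((L, L)) < np.exp(-beta * dE)) & mask
            spins = np.where(flip, -spins, spins)
        return spins

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        spins = rng.choice([-1, 1], size=(64, 64))
        for _ in range(200):
            spins = checkerboard_sweep(spins, beta=0.6, rng=rng)
        print("magnetization per spin:", spins.mean())
    ```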

  2. A computer-based physics laboratory apparatus: Signal generator software

    NASA Astrophysics Data System (ADS)

    Thanakittiviroon, Tharest; Liangrocapart, Sompong

    2005-09-01

    This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. The apparatus uses the sound card of a common personal computer to generate sinusoidal signals of accurate frequency and can be programmed to produce signals of different frequencies repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.
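    A minimal sketch of such a software signal generator, assuming the third-party sounddevice package (not necessarily the library used by the authors), is given below: a sine tone of chosen frequency is synthesized with NumPy and played through the default sound card.

    ```python
    # Sketch: play sine tones through the default sound card. The frequencies and
    # amplitudes below are arbitrary illustrative values.
    import numpy as np
    import sounddevice as sd

    def play_tone(freq_hz=440.0, duration_s=3.0, samplerate=44100, amplitude=0.5):
        t = np.arange(int(duration_s * samplerate)) / samplerate
        signal = amplitude * np.sin(2.0 * np.pi * freq_hz * t)
        sd.play(signal.astype(np.float32), samplerate)
        sd.wait()                       # block until playback has finished

    if __name__ == "__main__":
        for f in (220.0, 440.0, 880.0):  # e.g. stepping through drive frequencies
            play_tone(freq_hz=f, duration_s=2.0)
    ```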

  3. Medicare program; delay in implementing the adjustments to the practice expense relative value units under the physician fee schedule for calendar year 1998--HCFA. Notice of intent to regulate.

    PubMed

    1997-10-31

    This notice identifies provisions in the Medicare physician fee schedule regulations that are affected by enactment of the Balanced Budget Act of 1997 (BBA 1997). Section 4505 of the BBA 1997 postpones implementation of a resource-based practice expense relative value unit system until January 1, 1999 and provides for a 4-year transition. In addition, it provides for an adjustment for practice expense relative value units for 1998. It also requires publication of a new proposed rule for practice expense by May 1, 1998, thus requiring significant revision of our proposal contained in the proposed rule published June 18, 1997 (62 FR 33158).

  4. Fast Geometric Consensus Approach for Protein Model Quality Assessment

    PubMed Central

    Adamczak, Rafal; Pillardy, Jaroslaw; Vallat, Brinda K.

    2011-01-01

    Model quality assessment (MQA) is an integral part of protein structure prediction methods that typically generate multiple candidate models. The challenge lies in ranking and selecting the best models using a variety of physical, knowledge-based, and geometric consensus (GC)-based scoring functions. In particular, 3D-Jury and related GC methods assume that well-predicted (sub-)structures are more likely to occur frequently in a population of candidate models, compared to incorrectly folded fragments. While this approach is very successful in the context of diversified sets of models, identifying similar substructures is computationally expensive since all pairs of models need to be superimposed using MaxSub or related heuristics for structure-to-structure alignment. Here, we consider a fast alternative, in which structural similarity is assessed using 1D profiles, e.g., consisting of relative solvent accessibilities and secondary structures of equivalent amino acid residues in the respective models. We show that the new approach, dubbed 1D-Jury, allows to implicitly compare and rank N models in O(N) time, as opposed to quadratic complexity of 3D-Jury and related clustering-based methods. In addition, 1D-Jury avoids computationally expensive 3D superposition of pairs of models. At the same time, structural similarity scores based on 1D profiles are shown to correlate strongly with those obtained using MaxSub. In terms of the ability to select the best models as top candidates, 1D-Jury performs on par with other GC methods. Other potential applications of the new approach, including fast clustering of large numbers of intermediate structures generated by folding simulations, are discussed as well. PMID:21244273
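    The toy sketch below conveys the consensus idea with 1-D profiles, ranking each model by its average similarity to the others; unlike 1D-Jury, it still compares all pairs explicitly, and the similarity measure (secondary-structure agreement plus RSA difference) is our own simplification.

    ```python
    # Sketch of a 1-D geometric-consensus score: each model is described by a
    # per-residue secondary-structure string and relative solvent accessibility
    # (RSA) profile, and is ranked by its average similarity to all other models.
    import numpy as np

    def profile_similarity(ss_a, rsa_a, ss_b, rsa_b):
        ss_match = np.mean([a == b for a, b in zip(ss_a, ss_b)])          # SS agreement
        rsa_match = 1.0 - np.mean(np.abs(np.asarray(rsa_a) - np.asarray(rsa_b)))
        return 0.5 * (ss_match + rsa_match)

    def consensus_rank(models):
        """models: list of (name, ss_string, rsa_list). Returns names, best first."""
        scores = {}
        for i, (name, ss_i, rsa_i) in enumerate(models):
            sims = [profile_similarity(ss_i, rsa_i, ss_j, rsa_j)
                    for j, (_, ss_j, rsa_j) in enumerate(models) if j != i]
            scores[name] = float(np.mean(sims))
        return sorted(scores, key=scores.get, reverse=True)

    if __name__ == "__main__":
        models = [
            ("m1", "HHHHEEEC", [0.1, 0.2, 0.1, 0.3, 0.6, 0.7, 0.5, 0.9]),
            ("m2", "HHHHEEEC", [0.1, 0.2, 0.2, 0.3, 0.6, 0.6, 0.5, 0.8]),
            ("m3", "CCCHHEEE", [0.9, 0.8, 0.7, 0.2, 0.1, 0.5, 0.6, 0.4]),
        ]
        print(consensus_rank(models))   # models closest to the consensus come first
    ```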

  5. Molecules-in-molecules fragment-based method for the calculation of chiroptical spectra of large molecules: Vibrational circular dichroism and Raman optical activity spectra of alanine polypeptides.

    PubMed

    Jose, K V Jovan; Raghavachari, Krishnan

    2016-12-01

    The molecules-in-molecules (MIM) fragment-based method has recently been adapted to evaluate the chiroptical (vibrational circular dichroism [VCD] and Raman optical activity [ROA]) spectra of large molecules such as peptides. In the MIM-VCD and MIM-ROA methods, the relevant higher energy derivatives of the parent molecule are assembled from the corresponding derivatives of smaller fragment subsystems. In addition, the missing long-range interfragment interactions are accounted for at a computationally less expensive level of theory (MIM2). In this work we employed the MIM-VCD and MIM-ROA fragment-based methods to explore the evolution of the chiroptical spectroscopic characteristics of 3_10-helix, α-helix, β-hairpin, γ-turn, and β-extended conformers of gas phase polyalanine (chain length n = 6-14). The different conformers of polyalanine show distinctive features in the MIM chiroptical spectra and the associated spectral intensities increase with evolution of system size. For a better understanding of the site-specific effects on the vibrational spectra, isotopic substitutions were also performed employing the MIM method. An increasing redshift with the number of isotopically labeled 13C=O functional groups in the peptide molecule was seen. For larger polypeptides, we implemented the two-step-MIM model to circumvent the high computational expense associated with the evaluation of chiroptical spectra at a high level of theory using large basis sets. The chiroptical spectra of α-(alanine)20 polypeptide obtained using the two-step-MIM model, including continuum solvation effects, show good agreement with the full calculations and experiment. This benchmark study suggests that the MIM-fragment approach can assist in predicting and interpreting chiroptical spectra of large polypeptides. © 2016 Wiley Periodicals, Inc.

  6. Numerical investigation of a modified family of centered schemes applied to multiphase equations with nonconservative sources

    NASA Astrophysics Data System (ADS)

    Crochet, M. W.; Gonthier, K. A.

    2013-12-01

    Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.

  7. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    DOE PAGES

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; ...

    2015-01-21

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. Furthermore, a tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial.

  8. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    PubMed Central

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; Blackwood, Christopher B.; Rosen, Gail L.

    2015-01-01

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. A tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial. Availability: http://www.ece.drexel.edu/gailr/EESI/tutorial.php. PMID:25607539

  9. An approach to the design of wide-angle optical systems with special illumination and IFOV requirements

    NASA Astrophysics Data System (ADS)

    Pravdivtsev, Andrey V.

    2012-06-01

    The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively affects its ability to perform its task. The resulting illumination on the detector depends, among other factors, on changes in the IFOV. The IFOV must also be considered in the synthesis of data-processing algorithms, as it directly affects the achievable signal/background ratio for statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view is greater than 180 degrees. Illumination calculation in optical CAD is based on the computationally expensive tracing of a large number of rays. The author proposes using analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically with less computationally expensive operands, and this calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the load on the optimizer is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop an optical system, and allows more efficient EOS to be created.

  10. Attitude to the Use of the Computer for Learning Biological Concepts and Achievement of Students in an Environment Dominated by Indigenous Technology.

    ERIC Educational Resources Information Center

    Jegede, Olugbemiro J.; And Others

    The use of computers to facilitate learning is yet to make an appreciable inroad into the teaching-learning process in most developing Third World countries. The purchase cost and maintenance expenses of the equipment are the major inhibiting factors related to adoption of this high technology in these countries. This study investigated: (1) the…

  11. Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities

    DTIC Science & Technology

    1993-09-01

    ...Computer Disaster Recovery; PC and LAN Lessons Learned; Distributed Architectures; Backups... "amount of expense, but no client problems." (Leeke, 1993, p. 8) ...Distributed Architectures: The majority of operations that were disrupted by the

  12. Network Support for Group Coordination

    DTIC Science & Technology

    2000-01-01

    telecommuting and ubiquitous computing [40], the advent of networked multimedia, and less expensive technology have shifted telecollaboration into... participants A and B, the payoff structure for choosing two actions i and j is P = A_ij + B_ij. If P = 0, then the interaction is called a zero-sum game, and

  13. Development of Multidisciplinary, Multifidelity Analysis, Integration, and Optimization of Aerospace Vehicles

    DTIC Science & Technology

    2010-02-27

    investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and...design stage is used to further refine the analysis, narrowing the design to a handful of options. Figure 1. Integrated Hierarchical Framework. In...computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL

  14. COST FUNCTION STUDIES FOR POWER REACTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heestand, J.; Wos, L.T.

    1961-11-01

    A function to evaluate the cost of electricity produced by a nuclear power reactor was developed. The basic equation, revenue = capital charges + profit + operating expenses, was expanded in terms of various cost parameters to enable analysis of multiregion nuclear reactors with uranium and/or plutonium for fuel. A corresponding IBM 704 computer program, which will compute either the price of electricity or the value of plutonium, is presented in detail. (auth)

  15. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
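    As a small illustration of the starting point of such analyses, the sketch below evaluates the standard first-order sensitivity dλ/dp = yᴴ(∂A/∂p)x / (yᴴx) of a simple eigenvalue of a general real matrix, using SciPy's left and right eigenvectors and a finite-difference check; the eigenvector derivatives and the Rayleigh-quotient and trace-theorem approximations discussed above go beyond this.

    ```python
    # Sketch: first-order sensitivity of an eigenvalue of a general (non-Hermitian)
    # matrix A(p) along a perturbation direction dA = dA/dp, using the formula
    # dlambda/dp = y^H (dA/dp) x / (y^H x) with right (x) and left (y) eigenvectors.
    import numpy as np
    from scipy.linalg import eig, eigvals

    def eigenvalue_sensitivity(A, dA, k):
        """Eigenvalue lambda_k of A and its first-order sensitivity along dA."""
        w, vl, vr = eig(A, left=True, right=True)
        x, y = vr[:, k], vl[:, k]
        return w[k], (y.conj() @ dA @ x) / (y.conj() @ x)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A0 = rng.standard_normal((6, 6))
        dA = rng.standard_normal((6, 6))        # direction of the parameter perturbation
        lam0, sens = eigenvalue_sensitivity(A0, dA, k=0)
        h = 1e-6                                 # finite-difference check
        w1 = eigvals(A0 + h * dA)
        fd = (w1[np.argmin(np.abs(w1 - lam0))] - lam0) / h
        print(sens, fd)                          # the two values should agree closely
    ```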

  16. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied to image authentication, retrieval, and indexing. Perceptual image hashing is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometric modifications. The main disadvantage of perceptual hashing is its high computational expense. The proposed cascade algorithm initializes retrieval with short hashes and then applies a full hash to the remaining candidates. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and computation time.
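    A toy sketch of the cascade idea is shown below, assuming grayscale images given as 2-D NumPy arrays: a very short average hash rejects most candidates cheaply and a longer hash re-ranks the survivors. The hash functions here are generic average hashes, not the ones proposed in the paper, and all thresholds are arbitrary.

    ```python
    # Sketch of a two-stage cascade for perceptual-hash retrieval: a short hash
    # prefilters the database cheaply; a longer, more discriminative hash is
    # compared only for the survivors.
    import numpy as np

    def average_hash(img, size):
        """Binary hash: block-average down to size x size, threshold at the mean."""
        h, w = img.shape
        bh, bw = h // size, w // size
        blocks = img[:bh * size, :bw * size].reshape(size, bh, size, bw)
        small = blocks.mean(axis=(1, 3))
        return (small > small.mean()).flatten()

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    def cascade_search(query, database, short=4, full=16, short_budget=3):
        """Indices of database images passing the short-hash stage, ranked by full hash."""
        q_short = average_hash(query, short)
        q_full = average_hash(query, full)
        survivors = [i for i, img in enumerate(database)
                     if hamming(q_short, average_hash(img, short)) <= short_budget]
        return sorted(survivors, key=lambda i: hamming(q_full, average_hash(database[i], full)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        base = rng.random((128, 128))
        database = [base + 0.05 * rng.random((128, 128)) for _ in range(5)]
        database += [rng.random((128, 128)) for _ in range(50)]
        print(cascade_search(base, database)[:3])   # near-duplicates of the query rank first
    ```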

  17. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often comes short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high-level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables combining efficiently parallel storage access routines and image processing sequential operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  19. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  20. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging, even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.

  1. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created so that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  2. Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, Raymond J.

    2010-01-01

    The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as to reduce reliance on expensive, downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to decide whether or not to destroy the rocket.

  3. Application of the System Identification Technique to Goal-Directed Saccades.

    DTIC Science & Technology

    1984-07-30

    1983 to May 31, 1984 by the AFOSR under Grant No. AFOSR-83-0187. 1. Salaries & Wages $7,257; 2. Employee Benefits $486; 3. Indirect Costs $1,177; 4. Equipment $2,127 (DEC VT100 terminal, computer terminal table & chair, computer interface); 5. Travel $672; 6. Miscellaneous Expenses $281 (computer costs, telephone, xeroxing, report costs); Total $12,000.

  4. Advances in Neutron Radiography: Application to Additive Manufacturing Inconel 718

    DOE PAGES

    Bilheux, Hassina Z; Song, Gian; An, Ke; ...

    2016-01-01

    Reactor-based neutron radiography is a non-destructive, non-invasive characterization technique that has been extensively used on engineering materials for tasks such as inspection of components, evaluation of porosity, and in-operando observations of engineering parts. Neutron radiography has flourished at reactor facilities for more than four decades and is relatively new to accelerator-based neutron sources. Recent advances in neutron source and detector technologies, such as the Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, TN, and the microchannel plate (MCP) detector, respectively, enable new contrast mechanisms using the neutron scattering Bragg features for crystalline information such as average lattice strain, crystalline plane orientation, and identification of phases in a neutron radiograph. Additive manufacturing (AM) processes or 3D printing have recently become very popular and have a significant potential to revolutionize the manufacturing of materials by enabling new designs with complex geometries that are not feasible using conventional manufacturing processes. However, the technique lacks standards for process optimization and control compared to conventional processes. Residual stresses are a common occurrence in materials that are machined, rolled, heat treated, welded, etc., and have a significant impact on a component's mechanical behavior and durability. They may also arise during the 3D printing process, and defects such as internal cracks can propagate over time as the component relaxes after being removed from its build plate (the base plate utilized to print materials on). Moreover, since access to the AM material is possible only after the component has been fully manufactured, it is difficult to characterize the material for defects a priori to minimize expensive re-runs. Currently, validation of the AM process and materials is mainly through expensive trial-and-error experiments at the component level, whereas in conventional processes the level of confidence in predictive computational modeling is high enough to allow process and materials optimization through computational approaches. Thus, there is a clear need for non-destructive characterization techniques and for the establishment of processing-microstructure databases that can be used for developing and validating predictive modeling tools for AM.

  5. GPSS/360 computer models to simulate aircraft passenger emergency evacuations.

    DOT National Transportation Integrated Search

    1972-09-01

    Live tests of emergency evacuation of transport aircraft are becoming increasingly expensive as the planes grow to a size seating hundreds of passengers. Repeated tests, to cope with random variations, increase these costs, as well as risks of injuri...

  6. Another View of "PC vs. Mac."

    ERIC Educational Resources Information Center

    DeMillion, John A.

    1998-01-01

    An article by Nan Wodarz in the November 1997 issue listed reasons why the Microsoft computer operating system was superior to the Apple Macintosh platform. This rebuttal contends the Macintosh is less expensive, lasts longer, and requires less technical staff for support. (MLF)

  7. Experimental CAD Course Uses Low-Cost Systems.

    ERIC Educational Resources Information Center

    Wohlers, Terry

    1984-01-01

    Describes the outstanding results obtained when a department of industrial sciences used special software on microcomputers to teach computer-aided design (CAD) as an alternative to much more expensive equipment. The systems used and prospects for the future are also considered. (JN)

  8. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and the applications originating from it may reveal the data owner's private information, such as personal identity, locations or even financial profiles. This observation has recently aroused new research interest in privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise, and change in 3D viewpoint and illumination.

  9. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and the applications originating from it may reveal the data owner's private information, such as personal identity, locations or even financial profiles. This observation has recently aroused new research interest in privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise, and change in 3D viewpoint and illumination.
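    The random-splitting step mentioned above can be illustrated with a toy additive secret-sharing sketch. This is not the paper's protocol (which additionally includes secure multiplication and comparison sub-protocols); the modulus and function names below are assumptions for illustration only.

```python
# Toy illustration: split an integer image into two random additive shares,
# one per cloud server. Neither share alone reveals the image.
import numpy as np

P = 2**31 - 1  # working modulus (illustrative assumption)

def split(image):
    """Split an integer image into two additive shares mod P."""
    share1 = np.random.randint(0, P, size=image.shape, dtype=np.int64)
    share2 = (image.astype(np.int64) - share1) % P
    return share1, share2          # send one share to each server

def recombine(share1, share2):
    return (share1 + share2) % P   # only the data owner, holding both, can do this

img = np.random.randint(0, 256, size=(4, 4))
s1, s2 = split(img)
assert np.array_equal(recombine(s1, s2), img)
# Linear operations (e.g., filtering with a public kernel) can be applied to
# each share independently and the results recombined afterwards.
```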

  10. A simple parameterization of aerosol emissions in RAMS

    NASA Astrophysics Data System (ADS)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense. In fact, the computational expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable assumption was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, neglecting SA within the model was judged to be the best course of action. The actual parameterization uses a prescribed source map to add aerosol to the model at the two vertical levels that surround an arbitrary height chosen by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output along with the hygroscopicity parameter (κ) were saved into a data file that could be interpolated to an arbitrary grid used in RAMS. The comparison of this parameterization to observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model while using a fraction of the computational expense. This emissions scheme produced reasonable aerosol concentrations and can therefore be used to provide an estimate of the seasonal impact of increased CCN on water resources in western Colorado with relatively low computational expense.
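    A minimal sketch of the two-level injection step described above is given below. The array names, grid convention, and linear weighting are illustrative assumptions; this is not RAMS source code.

```python
# Hypothetical sketch: add a prescribed surface-map aerosol source at the two
# model levels bracketing a user-chosen emission height.
import numpy as np

def inject_emissions(conc, z_levels, source_map, z_emit, dt):
    """conc: aerosol concentration, shape (nz, ny, nx); z_levels: level heights (nz,);
    source_map: prescribed emission rate per grid column (ny, nx); z_emit: target height."""
    nz = len(z_levels)
    k_hi = min(int(np.searchsorted(z_levels, z_emit)), nz - 1)  # first level at/above z_emit
    k_lo = max(k_hi - 1, 0)
    if k_hi == k_lo:
        w_lo, w_hi = 1.0, 0.0
    else:
        w_hi = (z_emit - z_levels[k_lo]) / (z_levels[k_hi] - z_levels[k_lo])
        w_hi = min(max(w_hi, 0.0), 1.0)          # clamp if z_emit lies outside the column
        w_lo = 1.0 - w_hi
    conc[k_lo] += w_lo * source_map * dt          # split the source between the two levels
    conc[k_hi] += w_hi * source_map * dt
    return conc
```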

  11. 26 CFR 301.6323(e)-1 - Priority of interest and expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... interest is not subrogated to the rights of the holder of the State sales tax lien. However, if the holder...)-1, to the rights of the holder of the sales tax lien, he will also be entitled to any additional....6323(e)-1 Priority of interest and expenses. (a) In general. If the lien imposed by section 6321 is not...

  12. Internal Controls and Compliance With Laws and Regulations for Expense Account Line Items on the FY 1996 Defense Business Operations Fund Consolidated Financial Statements.

    DTIC Science & Technology

    1998-03-04

    issues discussed in this report. The primary audit objective was to determine whether the expenses on the FY 1996 DBOF consolidated financial statements were...34 November 16, 1993. In addition, we determined whether controls were adequate to ensure that the consolidated financial statements were free of material

  13. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
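    As a concrete illustration of one ingredient named above, the sketch below builds a proper-orthogonal-decomposition basis from snapshot data via the singular value decomposition. Variable names and the energy threshold are assumptions; this is a generic construction, not the authors' aeroelastic code.

```python
# Generic POD basis construction from a snapshot matrix.
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """snapshots: array of shape (n_dof, n_snapshots). Returns the POD modes
    capturing the requested fraction of snapshot energy, plus the mean field."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                       # fluctuations about the mean
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1  # number of retained modes
    return U[:, :r], mean

# A reduced-order state is a = modes.T @ (u - mean); the full field is then
# approximated by u ≈ mean + modes @ a, with far fewer degrees of freedom.
```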

  14. GPU Accelerated Prognostics

    NASA Technical Reports Server (NTRS)

    Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley

    2017-01-01

    Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.

  15. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.

  16. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') while: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy, and convergence rate).
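    For reference, the sketch below generates the one-stage Latin hypercube design that PLHS generalizes; the progressive slicing logic itself is not reproduced here, and the sample sizes are illustrative assumptions.

```python
# One-stage Latin hypercube sample of n points in d dimensions: each of the
# n equal-probability strata in every dimension is sampled exactly once.
import numpy as np

def latin_hypercube(n, d, seed=None):
    rng = np.random.default_rng(seed)
    # one random point inside each of the n strata, independently per dimension
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    # shuffle the stratum order independently in every dimension
    for j in range(d):
        samples[:, j] = rng.permutation(samples[:, j])
    return samples

X = latin_hypercube(100, 5)   # e.g., 100 model runs over a 5-parameter space
```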

  17. A Finite Element Projection Method for the Solution of Particle Transport Problems with Anisotropic Scattering.

    DTIC Science & Technology

    1984-07-01

    piecewise constant energy dependence. This is a seven-dimensional problem with time dependence, three spatial and two angular or directional variables and...in extending the computer implementation of the method to time and energy dependent problems, and to solving and validating this technique on a...problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep

  18. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  19. Temperature resolution enhancing of commercially available IR camera using computer processing

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2015-09-01

    As is well known, application of the passive THz camera to security problems is a very promising approach. It allows concealed objects to be seen without contact and poses no danger to the person being screened. Using such a THz camera, one can see a temperature difference on the human skin if this difference is caused by different temperatures inside the body. Because the passive THz camera is very expensive, we attempt to use an IR camera to observe the same phenomenon. We use a computer code that is available for processing the images captured by a commercially available IR camera manufactured by Flir Corp. Using this code we clearly demonstrate the change in human skin temperature induced by drinking water. Nevertheless, in some cases additional computer processing is necessary to show the change in body temperature clearly. We have developed one such approach and believe that it increases the temperature resolution of the camera by a factor of ten or more. The experiments carried out can be used for counter-terrorism and medical applications. The phenomenon shown is very important for the detection of forbidden objects and substances concealed inside the human body using non-destructive inspection without X-rays. Earlier we demonstrated this possibility using THz radiation.

  20. Multivariate moment closure techniques for stochastic kinetic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay between the nonlinearities and the stochastic dynamics emerges that is much harder for such approximations to capture correctly. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
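    As a concrete (and much simpler) illustration of the moment-closure idea, the sketch below applies a univariate normal (Gaussian) closure to a toy birth/nonlinear-decay process. The reaction system, rate constants, and closure choice are illustrative assumptions and do not reproduce the paper's multivariate gamma and lognormal closures.

```python
# Toy moment closure: birth at constant rate k1, nonlinear decay with propensity
# k2*x^2. The ODEs for the first two raw moments involve <x^3>, which is closed
# with the Gaussian assumption <x^3> = 3<x><x^2> - 2<x>^3.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 10.0, 0.1   # illustrative rate constants

def moment_odes(t, m):
    m1, m2 = m                                  # first two raw moments
    m3 = 3.0 * m1 * m2 - 2.0 * m1**3            # Gaussian closure for <x^3>
    dm1 = k1 - k2 * m2
    dm2 = k1 * (2.0 * m1 + 1.0) + k2 * (m2 - 2.0 * m3)
    return [dm1, dm2]

sol = solve_ivp(moment_odes, (0.0, 20.0), [0.0, 0.0])
mean = sol.y[0, -1]
var = sol.y[1, -1] - mean**2
print(f"approximate stationary mean = {mean:.2f}, variance = {var:.2f}")
```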

  1. History of computer-assisted orthopedic surgery (CAOS) in sports medicine.

    PubMed

    Jackson, Douglas W; Simon, Timothy M

    2008-06-01

    Computer-assisted orthopedic surgery and navigation applications have a history rooted in the desire to link imaging technology with real-time anatomic landmarks. Although applications are still evolving in the clinical and research setting, computer-assisted orthopedic surgery has already demonstrated in certain procedures its potential for improving the surgeon's accuracy and reproducibility (once past the learning curve) and for reducing outlier outcomes. It is also being used as an educational tool to assist less experienced surgeons in interpreting measurements and precision placements related to well defined anatomic landmarks. It can also assist experienced surgeons in planning, in real time, their bony cuts, tunnel placement, and ligament balancing. Presently, the additional time, the expense of acquiring the needed software and hardware, and restricted reimbursement have slowed the widespread use of navigation. Its current applications have been primarily in joint replacement surgery, spine surgery, and trauma. It has not been widely used in the clinical setting for sports medicine procedures. Sports medicine applications such as individualizing tunnel placement in ligament surgery, opening wedge osteotomy with and without accompanying ligament reconstruction, and balancing and tensioning of the ligaments during the procedure (allowing real-time corrections if necessary) are currently being evaluated and are being used on a limited clinical basis.

  2. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  3. SIMULATING ATMOSPHERIC EXPOSURE IN A NATIONAL RISK ASSESSMENT USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...

  4. COMPETITIVE METAGENOMIC DNA HYBRIDIZATION IDENTIFIES HOST-SPECIFIC GENETIC MARKERS IN HUMAN FECAL MICROBIAL COMMUNITIES

    EPA Science Inventory

    Although recent technological advances in DNA sequencing and computational biology now allow scientists to compare entire microbial genomes, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for mo...

  5. Desktop Publishing for Counselors.

    ERIC Educational Resources Information Center

    Lucking, Robert; Mitchum, Nancy

    1990-01-01

    Discusses the fundamentals of desktop publishing for counselors, including hardware and software systems and peripherals. Notes by using desktop publishing, counselors can produce their own high-quality documents without the expense of commercial printers. Concludes computers present a way of streamlining the communications of a counseling…

  6. Use of off-the-shelf PC-based flight simulators for aviation human factors research.

    DOT National Transportation Integrated Search

    1996-04-01

    Flight simulation has historically been an expensive proposition, particularly if out-the-window views were desired. Advances in computer technology have allowed a modular, off-the-shelf flight simulation (based on 80486 processors or Pentiums) to be...

  7. Software Prototyping: Designing Systems for Users.

    ERIC Educational Resources Information Center

    Spies, Phyllis Bova

    1983-01-01

    Reports on major change in computer software development process--the prototype model, i.e., implementation of skeletal system that is enhanced during interaction with users. Expensive and unreliable software, software design errors, traditional development approach, resources required for prototyping, success stories, and systems designer's role…

  8. Identification of Bacterial DNA Markers for the Detection of Human and Cattle Fecal Pollution - SLIDES

    EPA Science Inventory

    Technological advances in DNA sequencing and computational biology allow scientists to compare entire microbial genomes. However, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for most laborato...

  9. IDENTIFICATION OF BACTERIAL DNA MARKERS FOR THE DETECTION OF HUMAN AND CATTLE FECAL POLLUTION

    EPA Science Inventory

    Technological advances in DNA sequencing and computational biology allow scientists to compare entire microbial genomes. However, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for most laborato...

  10. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...

  11. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.

  12. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to get realistic atmospheric effects of turbulence, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation. The precursor simulations are important because they capture turbulence in the atmosphere and, as stated above, turbulence impacts the power production estimation. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and being able to quickly apply the turbulent inflow to multi-turbine wind farms. This will be done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow of LES wind turbine simulations. Overall, the power production estimation and velocity field of the wind turbine wake are well captured with small errors.

  13. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period of time covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by the more computationally expensive codes tested over shorter time periods.
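    The Nash-Sutcliffe efficiencies quoted above follow the standard definition of that metric; a minimal reference implementation (not PRMS code) is sketched below.

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no better
# than predicting the long-term mean of the observations.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Example with made-up daily streamflow values:
nse = nash_sutcliffe([10.0, 12.5, 9.0, 14.0], [9.5, 12.0, 9.8, 13.2])
```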

  14. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm, as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
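    A toy numerical illustration of an under-determined approximation is sketched below: more polynomial coefficients than training pairs, with NumPy's least-squares routine returning the minimum-norm coefficient vector among the infinitely many exact interpolants. The test function and polynomial degree are assumptions, not the study's problems.

```python
# Under-determined polynomial fit: 4 training pairs, 8 unknown coefficients.
import numpy as np

x_train = np.array([0.1, 0.4, 0.6, 0.9])
y_train = np.sin(2 * np.pi * x_train)            # assumed "expensive" test function

degree = 7                                        # 8 coefficients > 4 data points
A = np.vander(x_train, degree + 1, increasing=True)
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)   # minimum-norm solution

# Evaluate the cheap surrogate at new points in place of the expensive function.
x_new = np.linspace(0.0, 1.0, 5)
y_approx = np.vander(x_new, degree + 1, increasing=True) @ coeffs
```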

  15. Advances in biobased lubricant additive development

    USDA-ARS?s Scientific Manuscript database

    Lubricant formulations comprise two categories of ingredients: base oils and additives. Depending on its application, a formulation may contain one or more from each category. Additives are the most expensive ingredients of lubricant formulations and, for some applications, can comprise 25 to 40% w/...

  16. Role of Kekulé and Non-Kekulé Structures in the Radical Character of Alternant Polycyclic Aromatic Hydrocarbons: A TAO-DFT Study

    PubMed Central

    Yeh, Chia-Nan; Chai, Jeng-Da

    2016-01-01

    We investigate the role of Kekulé and non-Kekulé structures in the radical character of alternant polycyclic aromatic hydrocarbons (PAHs) using thermally-assisted-occupation density functional theory (TAO-DFT), an efficient electronic structure method for the study of large ground-state systems with strong static correlation effects. Our results reveal that the studies of Kekulé and non-Kekulé structures qualitatively describe the radical character of alternant PAHs, which could be useful when electronic structure calculations are infeasible due to the expensive computational cost. In addition, our results support previous findings on the increase in radical character with increasing system size. For alternant PAHs with the same number of aromatic rings, the geometrical arrangements of aromatic rings are responsible for their radical character. PMID:27457289

  17. Telehealth innovations in health education and training.

    PubMed

    Conde, José G; De, Suvranu; Hall, Richard W; Johansen, Edward; Meglan, Dwight; Peng, Grace C Y

    2010-01-01

    Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive/remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not options; and the development of more simulator applications. This article presents the results of discussion on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences.

  18. Tunable inter-qubit coupling as a resource for gate based quantum computing with superconducting circuits

    NASA Astrophysics Data System (ADS)

    Chiaro, B.; Neill, C.; Chen, Z.; Dunsworth, A.; Foxen, B.; Quintana, C.; Wenner, J.; Martinis, J. M.; Google Quantum Hardware Team

    Fast, high fidelity two qubit gates are an essential requirement of a quantum processor. In this talk, we discuss how the tunable coupling of the gmon architecture provides a pathway for an improved two qubit controlled-Z gate. The maximum inter-qubit coupling strength gmax = 60 MHz is sufficient for fast adiabatic two qubit gates to be performed as quickly as single qubit gates, reducing dephasing errors. Additionally, the ability to turn the coupling off allows all qubits to idle at low magnetic flux sensitivity, further reducing susceptibility to noise. However, the flexibility that this platform offers comes at the expense of increased control complexity. We describe our strategy for addressing the control challenges of the gmon architecture and show experimental progress toward fast, high fidelity controlled-Z gates with gmon qubits.

  19. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  20. Atomistic Modeling of Pd Site Preference in NiTi

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Mosca, Hugo O.

    2004-01-01

    An analysis of the site substitution behavior of Pd in NiTi was performed using the BFS method for alloys. Through a combination of Monte Carlo simulations and detailed atom-by-atom energetic analyses of various computational cells, representing compositions of NiTi with up to 10 at% Pd, a detailed understanding of the site occupancy of Pd in NiTi was obtained. Pd substituted at the expense of Ni in a NiTi alloy will prefer the Ni-sites. Pd substituted at the expense of Ti shows a very weak preference for Ti-sites that diminishes as the amount of Pd in the alloy increases and as the temperature increases.

  1. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as the classical processing technique. The computation, however, has to be performed off-line because of its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended here to array processing in the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating recursively over sampling blocks. Expensive experimental time can therefore be reduced substantially, since any defect in a test can be corrected immediately.
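    For context, the sketch below implements the conventional frequency-domain beamformer that the abstract takes as its baseline; it is not the proposed observer-based real-time algorithm, and the array geometry, source model, and variable names are assumptions.

```python
# Conventional (delay-and-sum) frequency-domain beamforming over a grid of
# candidate source locations, using the array cross-spectral matrix (CSM).
import numpy as np

def conventional_beamform(csm, mic_pos, grid_pts, freq, c=343.0):
    """csm: complex cross-spectral matrix (n_mics, n_mics) at one frequency;
    mic_pos: microphone coordinates (n_mics, 3); grid_pts: candidate source
    locations (n_grid, 3); freq: analysis frequency in Hz; c: sound speed."""
    k = 2.0 * np.pi * freq / c
    out = np.empty(len(grid_pts))
    for i, g in enumerate(grid_pts):
        r = np.linalg.norm(mic_pos - g, axis=1)      # mic-to-grid-point distances
        steer = np.exp(-1j * k * r) / r              # monopole steering vector
        steer /= np.linalg.norm(steer)
        out[i] = np.real(np.conj(steer) @ csm @ steer)
    return out                                        # beamformer map over the grid
```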

  2. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  3. A LAN Primer.

    ERIC Educational Resources Information Center

    Hazari, Sunil I.

    1991-01-01

    Local area networks (LANs) are systems of computers and peripherals connected together for the purposes of electronic mail and the convenience of sharing information and expensive resources. In planning the design of such a system, the components to consider are hardware, software, transmission media, topology, operating systems, and protocols.…

  4. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
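    A one-parameter toy version of the well-constrained optimization described above is sketched below: a single velocity scale factor is chosen by least squares to reconcile imaged depths with formation depths from the well. The actual method inverts for a full velocity model; the depths and the single-scalar simplification here are illustrative assumptions only.

```python
# Toy well-tie: for a constant velocity scale factor, imaged depth scales
# linearly with velocity, so minimize sum((alpha*d_img - d_well)^2) in closed form.
import numpy as np

depth_image = np.array([1510.0, 2280.0, 3050.0])   # horizons from migration (assumed)
depth_well  = np.array([1480.0, 2230.0, 2990.0])   # same horizons from well logs (assumed)

alpha = np.dot(depth_image, depth_well) / np.dot(depth_image, depth_image)
print(f"least-squares velocity scale factor = {alpha:.3f}")  # multiply the model by alpha
```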

  5. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.

    PubMed

    Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D

    2013-05-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior-and under what circumstances-are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.
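    The model-free strategy referred to above can be illustrated with a minimal temporal-difference (Q-learning) update; the toy environment and parameter values below are assumptions and this is not the study's task or model-fitting code.

```python
# Minimal model-free (cached-value) update, the computationally cheap strategy
# contrasted with model-based planning in the abstract above.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95          # learning rate, discount factor (assumed)

def model_free_update(s, a, reward, s_next):
    """Update a cached action value; no model of the environment is consulted."""
    td_error = reward + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error

# A model-based system would instead simulate transitions with a learned model
# of the environment at decision time: more flexible, but far more expensive,
# which is why it suffers when cognitive (or computational) resources are taxed.
```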

  6. The Curse of Planning: Dissecting multiple reinforcement learning systems by taxing the central executive

    PubMed Central

    Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.

    2013-01-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545

  7. Automated combinatorial method for fast and robust prediction of lattice thermal conductivity

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.

  8. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1993-01-01

    In the last two decades, there have been extensive developments in computational aerodynamics, which constitutes a major part of the general area of computational fluid dynamics. Such developments are essential to advance the understanding of the physics of complex flows, to complement expensive wind-tunnel tests, and to reduce the overall design cost of an aircraft, particularly in the area of aeroelasticity. Aeroelasticity plays an important role in the design and development of aircraft, particularly modern aircraft, which tend to be more flexible. Several phenomena that can be dangerous and limit the performance of an aircraft occur because of the interaction of the flow with flexible components. For example, an aircraft with highly swept wings may experience vortex-induced aeroelastic oscillations. Also, undesirable aeroelastic phenomena due to the presence and movement of shock waves occur in the transonic range. Aeroelastically critical phenomena, such as a low transonic flutter speed, have been identified through limited wind-tunnel tests and flight tests. Aeroelastic tests require extensive cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomena, it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At Ames, a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft; it solves the Euler/Navier-Stokes equations. The purpose of this contract is to continue the algorithm enhancements of ENSAERO and to apply the code to complicated geometries. During the last year, the geometric capability of the code was extended to simulate transonic flows over a wing with an oscillating control surface. Single-grid and zonal approaches were tested. For the zonal approach, a new interpolation technique was introduced. The key development of the algorithm was an interface treatment between moving zones for a control surface using the virtual-zone concept. The work performed during the period 1 Apr. 1992 through 31 Mar. 1993 is summarized. Additional details on the various aspects of the study are given in the Appendices.

  9. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James W.

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e., moving data either between levels of memory or between processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for dense and sparse, direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, such as Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g., A(i), B(i, j+k, k+3*m-7, ...), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating-point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating-point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
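
    A toy way to see the order-independence goal: every IEEE double is an exact dyadic rational, so accumulating in exact arithmetic and rounding once at the end gives bitwise identical results for any summation order. The sketch below demonstrates that property only; it is not the one-pass, 6-word reproducible accumulator described above, which achieves the same guarantee at far lower cost.

```python
# Toy illustration of reproducible (order-independent) summation. Every IEEE
# double is an exact dyadic rational, so accumulating in exact arithmetic and
# rounding once at the end yields bitwise identical results for any ordering.
# This demonstrates the property targeted by the project; it is NOT the 6-word
# accumulator algorithm described above, which reaches the same goal cheaply.
import random
from fractions import Fraction

def reproducible_sum(values):
    total = sum(map(Fraction, values), Fraction(0))  # exact accumulation
    return float(total)                              # one final rounding

if __name__ == "__main__":
    data = [random.uniform(-1e16, 1e16) for _ in range(10_000)]
    shuffled = data[:]
    random.shuffle(shuffled)
    # Naive left-to-right sums typically change after reordering...
    print(sum(data) == sum(shuffled))
    # ...while the exact-accumulation sum is bitwise identical.
    print(reproducible_sum(data) == reproducible_sum(shuffled))
```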

  10. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
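
    A minimal sketch of the general idea, assuming sklearn's Gaussian process regressor as a stand-in for a kriging model: fit the surrogate to the known function values, take the surrogate minimizer over a grid as the next expensive evaluation, and refit. The objective, kernel, grid, and budget below are illustrative assumptions, not the paper's test problem or exact algorithm.

```python
# Minimal sketch of surrogate-guided minimization in the spirit of the paper:
# a kriging-style surrogate (sklearn's Gaussian process regressor) is fit to
# known function values and used to pick the next expensive evaluation from a
# grid. The objective is a cheap stand-in for an expensive simulation, and all
# settings are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(15.0 * x)   # placeholder objective

grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
X = np.array([[0.0], [0.5], [1.0]])                   # initial design
y = np.array([expensive_objective(x[0]) for x in X])

for _ in range(10):                                    # small evaluation budget
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    x_next = grid[np.argmin(gp.predict(grid))]         # surrogate minimizer on grid
    X = np.vstack([X, x_next])                         # evaluate expensive function there
    y = np.append(y, expensive_objective(x_next[0]))

best = int(np.argmin(y))
print(f"best x ~ {X[best, 0]:.3f}, f(x) ~ {y[best]:.4f}")
```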

  11. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular-diagonal (U-D) factorized covariance arrays and vector-stored upper-triangular square-root information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
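
    The core building block can be illustrated densely with numpy: reordering the state vector permutes the columns of an upper-triangular square-root information factor R, and left-applied Givens rotations restore triangularity while preserving RᵀR. This is only an illustration of that step, not the vector-stored, paging-aware algorithm of the report (which also handles U-D covariance factors and fast rotations).

```python
# Dense numpy sketch of the core operation for square-root information arrays:
# permuting the columns of the upper-triangular factor R destroys triangularity,
# and left-applied Givens rotations restore it while preserving R^T R. This
# illustrates the building block only, not the vector-stored, paging-aware
# algorithm of the report.
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def retriangularize(A):
    """Zero the subdiagonal of A in place with Givens rotations."""
    m, n = A.shape
    for j in range(n):
        for i in range(m - 1, j, -1):          # work up each column
            c, s = givens(A[i - 1, j], A[i, j])
            rot = np.array([[c, s], [-s, c]])
            A[[i - 1, i], j:] = rot @ A[[i - 1, i], j:]
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = np.triu(rng.standard_normal((5, 5)))   # toy SRIF factor
    perm = np.roll(np.arange(5), -1)           # cyclic reordering of the states
    R_new = retriangularize(R[:, perm].copy())
    # The information matrix is preserved by the permutation plus rotations.
    print(np.allclose(R_new.T @ R_new, R[:, perm].T @ R[:, perm]))
```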

  12. Habitual control of goal selection in humans

    PubMed Central

    Cushman, Fiery; Morris, Adam

    2015-01-01

    Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050
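
    The division of labor described above can be caricatured in a few lines: a model-free value table (the habit) picks which goal to pursue, and a planner searches for the actions that reach it. The sketch below is a toy illustration with an invented one-dimensional world and rewards; it is not the authors' task or reinforcement learning model.

```python
# Toy sketch of habitual goal selection plus planned goal pursuit: a model-free
# value table (the "habit") picks which goal to pursue, and a breadth-first
# search planner works out how to reach it. The 1-D world, rewards, and
# learning rate are invented for illustration only.
import random
from collections import deque

STATES = list(range(7))           # a small 1-D corridor
GOALS = [0, 6]                    # candidate goal locations
REWARD = {0: 1.0, 6: 5.0}         # goal 6 is better, but the habit must learn that

def plan_to(goal, start):
    """Breadth-first search for the action sequence (+1/-1 moves) to the goal."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        s, path = frontier.popleft()
        if s == goal:
            return path
        for a in (-1, 1):
            nxt = s + a
            if nxt in STATES and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return []

goal_value = {g: 0.0 for g in GOALS}     # habitual (model-free) values over goals
alpha, epsilon = 0.2, 0.1

for episode in range(200):
    start = 3
    if random.random() < epsilon:
        goal = random.choice(GOALS)
    else:
        goal = max(GOALS, key=goal_value.get)   # the habit selects the goal
    plan = plan_to(goal, start)                 # planning achieves it
    r = REWARD[goal] - 0.01 * len(plan)         # reward minus a small step cost
    goal_value[goal] += alpha * (r - goal_value[goal])

print(goal_value)   # the habit comes to prefer the higher-valued goal
```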

  13. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of the grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies for modeling control surface deflections and material mapping are also addressed.

  14. OPEX: Optimized Eccentricity Computation in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson, Keith

    2011-11-14

    Real-world graphs have many properties of interest, but often these properties are expensive to compute. We focus on eccentricity, radius and diameter in this work. These properties are useful measures of the global connectivity patterns in a graph. Unfortunately, computing eccentricity for all nodes is O(n²) for a graph with n nodes. We present OPEX, a novel combination of optimizations that improves the computation time of these properties by orders of magnitude in real-world experiments on graphs of many different sizes. We run OPEX on graphs with up to millions of links. OPEX gives either exact results or bounded approximations, unlike its competitors, which give probabilistic approximations or sacrifice node-level information (eccentricity) to compute graph-level information (diameter).
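
    For context, the quadratic baseline that OPEX accelerates is one breadth-first search per node: each BFS yields that node's eccentricity, and the radius and diameter follow as the minimum and maximum eccentricity. The sketch below shows this exact baseline on a small invented graph; OPEX's optimizations and bounded approximations are not reproduced here.

```python
# The quadratic baseline that OPEX accelerates: one breadth-first search per
# node gives the exact eccentricity of every node in an unweighted, connected
# graph, and radius/diameter are the min/max eccentricity. The small example
# graph is invented for illustration.
from collections import deque

def eccentricities(adj):
    ecc = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ecc[src] = max(dist.values())   # distance to the farthest reachable node
    return ecc

if __name__ == "__main__":
    adj = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2], 5: [3]}
    ecc = eccentricities(adj)
    print("eccentricities:", ecc)
    print("radius:", min(ecc.values()), "diameter:", max(ecc.values()))
```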

  15. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware, any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  16. Medical Spending Differences in the United States and Canada: The Role of Prices, Procedures, and Administrative Expenses

    PubMed Central

    Pozen, Alexis; Cutler, David M.

    2011-01-01

    The United States far outspends Canada on health care, but the sources of additional spending are unclear. We evaluated the importance of incomes, administration, and medical interventions in this difference. Pooling various sources, we calculated medical personnel incomes, administrative expenses, and procedure volume and intensity for the United States and Canada. We found that Canada spent $1,589 per capita less on physicians and hospitals in 2002. Administration accounted for the largest share of this difference (39%), followed by incomes (31%), and more intensive provision of medical services (14%). Whether this additional spending is wasteful or warranted is unknown. PMID:20812461

  17. Telemedicine. Final report/project accomplishments summary CRADA number 95-KCP-1014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanDeusen, A.L.

    1997-04-01

    This project was initiated to fill existing voids in the telemedicine equipment market. Currently, when a medical facility adds telemedicine capability to its video conference system, it must purchase expensive and bulky encoders and decoders in order to send information over the available data channel. Even with this expensive equipment, only one data type (stethoscope or ECG) can be sent at a time. In addition, since existing encoders and decoders are not designed specifically for telemedicine, special cables must be built to connect with this equipment. This project resulted in the design and construction of an encoder/decoder system that resolved these issues. The unit (referred to as the Telecoder) is designed specifically for the telemedicine market. The Telecoder is compact, handles two types of data (stethoscope and ECG) simultaneously, integrates with existing medical equipment, and is less expensive. In addition to the Telecoder module, a prototype was built that adds all the logic and interfaces necessary to integrate the basic encoder design into additional Cardionics products. Although a complete integration into other Cardionics products was not in the scope of this CRADA, all the basic design work has been done to allow Cardionics to complete the work.

  18. CRADA Final Report: Weld Predictor App

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billings, Jay Jay

    Welding is an important manufacturing process used in a broad range of industries and market sectors, including automotive, aerospace, heavy manufacturing, medical, and defense. During welded fabrication, high localized heat input and subsequent rapid cooling result in the creation of residual stresses and distortion. These residual stresses can significantly affect the fatigue resistance, cracking behavior, and load-carrying capacity of welded structures during service. Further, additional fitting and tacking time is often required to fit distorted subassemblies together, resulting in non-value added cost. Using trial-and-error methods to determine which welding parameters, welding sequences, and fixture designs will most effectively reduce distortion is a time-consuming and expensive process. For complex structures with many welds, this approach can take several months. For this reason, efficient and accurate methods of mitigating distortion are in demand across all industries where welding is used. Analytical and computational methods and commercial software tools have been developed to predict welding-induced residual stresses and distortion. Welding process parameters, fixtures, and tooling can be optimized to reduce the HAZ softening and minimize weld residual stress and distortion, improving performance and reducing design, fabrication and testing costs. However, weld modeling technology tools are currently accessible only to engineers and designers with a background in finite element analysis (FEA) who work with large manufacturers, research institutes, and universities with access to high-performance computing (HPC) resources. Small and medium enterprises (SMEs) in the US do not typically have the human and computational resources needed to adopt and utilize weld modeling technology. To give engineers with no FEA background, and SMEs, access to this important design tool, EWI and the Ohio Supercomputer Center (OSC) developed the online weld application software tool “WeldPredictor” ( https://eweldpredictor.ewi.org ). About 1400 users have tested this application. This project marked the beginning of development on the next version of WeldPredictor, which addresses many features missing from the original: it adds 3D models, allows more material hardening laws, models material phase transformation, and uses open-source finite element solvers (as opposed to expensive commercial tools) to solve problems quickly.

  19. Study on the feasibility of provision of distance learning programmes in surgery to Malawi.

    PubMed

    Mains, Edward A A; Blackmur, James P; Dewhurst, David; Ward, Ross M; Garden, O James; Wigmore, Stephen J

    2011-12-01

    Medical educational opportunities and resources are considerably limited in the developing world. The expansion of computing and Internet access means that there exists a potential to provide education to students through distance learning programmes. This study investigated the feasibility of providing a distance learning course in surgery in Malawi. The study investigated the user requirements, technical requirements and Internet connections in two teaching hospitals in Malawi. In addition, the appropriateness of current course material from the Edinburgh Surgical Sciences Qualification to Malawi trainees was assessed. The study found a high degree of interest from Malawian trainees in distance learning. The provision of basic science modules such as anatomy and physiology and the ability to access journals were considered highly desirable. The current ESSQ course would require extensive re-modelling to make it suitable for an African trainee's requirements. Internet speeds remain slow and access is currently expensive. There is considerable interest in distance learning programmes in Malawi but access to them is limited partly because of slow and expensive Internet access. Understanding the needs of trainees in countries such as Malawi will allow better direction of educational aid and resources to support surgical training. Copyright © 2010 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  20. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available presents an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments, and evaluate the performance of high fidelity methods as compared to more typical RANS models.

  1. 12 CFR 308.171 - Responses to application.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... determines that the public interest requires such participation in order to permit additional exploration of matters raised in the comments. (d) Additional response. Additional filings in the nature of pleadings may...

  2. 12 CFR 308.171 - Responses to application.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... determines that the public interest requires such participation in order to permit additional exploration of matters raised in the comments. (d) Additional response. Additional filings in the nature of pleadings may...

  3. A City Manager Looks at Trends Affecting Public Libraries.

    ERIC Educational Resources Information Center

    Kemp, Roger L.

    1999-01-01

    Highlights some important conditions, both present and future, which will have an impact on public libraries. Discusses holding down expenses, including user fees, alternative funding sources, and private cosponsorship of programs; increasing productivity; use of computers and new technologies; staff development and internal marketing; improving…

  4. Computer Conferencing and Electronic Mail.

    ERIC Educational Resources Information Center

    Kaye, Tony

    This paper discusses a number of problems associated with distance education methods used in adult education and training fields, including limited opportunities for dialogue and group interaction among students and between students and tutors; the expense of updating and modifying mass-produced print and audiovisual materials; and the relative…

  5. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  6. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  7. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  8. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  9. Maps and Map Learning in Social Studies

    ERIC Educational Resources Information Center

    Bednarz, Sarah Witham; Acheson, Gillian; Bednarz, Robert S.

    2006-01-01

    The importance of maps and other graphic representations has become more important to geography and geographers. This is due to the development and widespread diffusion of geographic (spatial) technologies. As computers and silicon chips have become more capable and less expensive, geographic information systems (GIS), global positioning satellite…

  10. Multi-Protocol LAN Design and Implementation: A Case Study.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1995-01-01

    Reports on the installation of a local area network (LAN) at East Carolina University. Topics include designing the network; computer labs and electronic mail; Internet connectivity; LAN expenses; and recommendations on planning, equipment, administration, and training. A glossary of networking terms is also provided. (AEF)

  11. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  12. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing.

    PubMed

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-03-06

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user's home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered.

  13. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing

    PubMed Central

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-01-01

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305

  14. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M. M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
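
    The ingredients ABC works with (a prior, a forward simulator, a distance, and a tolerance) can be seen in the minimal rejection-ABC sketch below; cosmoabc itself implements a Population Monte Carlo variant with adaptive importance sampling, and the Poisson-like "cluster counts" toy model here is invented for illustration.

```python
# Minimal rejection-ABC sketch showing the ingredients the package builds on:
# draw parameters from the prior, forward-simulate mock data, and accept the
# draw if the distance to the observed summary is below a tolerance. cosmoabc
# itself uses a Population Monte Carlo variant with adaptive importance
# sampling; the toy "cluster counts" model here is invented.
import random
import statistics

observed_mean_count = 20.0                 # pretend summary of the observed catalog

def simulate_mean_count(rate, n_fields=50):
    # Stand-in forward simulator: Poisson-like counts via a normal approximation.
    counts = [random.gauss(rate, rate ** 0.5) for _ in range(n_fields)]
    return statistics.fmean(counts)

def rejection_abc(n_draws=20_000, tolerance=0.5):
    accepted = []
    for _ in range(n_draws):
        rate = random.uniform(5.0, 40.0)                    # flat prior
        distance = abs(simulate_mean_count(rate) - observed_mean_count)
        if distance < tolerance:
            accepted.append(rate)
    return accepted

posterior = rejection_abc()
print(f"accepted {len(posterior)} draws, "
      f"posterior mean rate ~ {statistics.fmean(posterior):.1f}")
```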

  15. The development of a computer-assisted instruction system for clinical nursing skills with virtual instruments concepts: A case study for intra-aortic balloon pumping.

    PubMed

    Chang, Ching-I; Yan, Huey-Yeu; Sung, Wen-Hsu; Shen, Shu-Cheng; Chuang, Pao-Yu

    2006-01-01

    The purpose of this research was to develop a computer-aided instruction system for intra-aortic balloon pumping (IABP) skills in clinical nursing with virtual instrument (VI) concepts. Computer graphic technologies were incorporated to provide not only static clinical nursing education, but also the simulated function of operating an expensive medical instrument with VI techniques. The content of nursing knowledge was adapted from current well-accepted clinical training materials. The VI functions were developed using computer graphic technology with photos of real medical instruments taken with a digital camera. We hope the system can provide beginners in nursing education with important teaching assistance.

  16. A Zonal Approach for Prediction of Jet Noise

    NASA Technical Reports Server (NTRS)

    Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.

    1995-01-01

    A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and that the computational expense is reduced substantially compared with a full large-scale simulation.

  17. Access to innovation: is there a difference in the use of expensive anticancer drugs between French hospitals?

    PubMed

    Bonastre, Julia; Chevalier, Julie; Van der Laan, Chantal; Delibes, Michel; De Pouvourville, Gerard

    2014-06-01

    In DRG-based hospital payment systems, expensive drugs are often funded separately. In France, specific expensive drugs (including a large proportion of anticancer drugs) are fully reimbursed up to national reimbursement tariffs to ensure equity of access. Our objective was to analyse the use of expensive anticancer drugs in public and private hospitals, and between regions. We had access to sales per anticancer drug and per hospital in the year 2008. We used a multilevel model to study the variation in the mean expenditure of expensive anticancer drugs per course of chemotherapy and per hospital. The mean expenditure per course of chemotherapy was €922 [95% CI: 890-954]. At the hospital level, specialisation in chemotherapies for breast cancers was associated with a higher expenditure of anticancer drugs per course for those hospitals with the highest proportion of cancers at this site. There were no differences in the use of expensive drugs between the private and the public hospital sector after controlling for case mix. There were no differences between the mean expenditures per region. The absence of disparities in the use of expensive anticancer drugs between hospitals and regions may indicate that exempting chemotherapies from DRG-based payments and providing additional reimbursement for these drugs has been successful at ensuring equal access to care. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Offsetting the Effects of Medical Expenses on Older Adults' Household Food Budgets: An Analysis of the Standard Medical Expense Deduction.

    PubMed

    Adams, Grace Bagwell; Lee, Jung Sun; Bhargava, Vibha; Super, David A

    2017-04-01

    The Supplemental Nutrition Assistance Program (SNAP) provides critical nutrition assistance to over 40 million Americans each month. Low-income older adults (60 and older) and disabled participants experience additional budgetary constraints because of high out-of-pocket medical expenses. In recent years, some states have adopted a "Standard Medical Expense Deduction" (SMED) for senior and disabled beneficiaries, making it easier to report medical expenses in the SNAP application process. We conduct a descriptive national analysis that shows increases in benefit levels and reporting of medical expenses for states that have implemented SMED. We then present descriptive findings from Medicare claims data among a sample of low-income older adults in need of food assistance in Georgia. Average medical expenses among this sample approach $200 per month, whereas those for persons diagnosed with multiple chronic conditions exceed $300 per month. Policy implications of this analysis include the need for more states to consider adoption of SMED or alternative estimating approaches, leading to increases in benefit levels for the neediest beneficiaries and decreases in administrative burden among state agencies. We present two possible policy approaches states might take to receive approval for these changes from U.S. Department of Agriculture. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. School-related expenses, living expenses, and income sources for graduate students in nurse anesthesia programs.

    PubMed

    Heikkila, Dianna

    2002-02-01

    Nurse anesthesia programs (NAPs) are the highest priced programs for graduate students compared with 7 other nursing master's degree programs. Not only are nurse anesthesia programs expensive, but also most students are encouraged by the policies within their individual programs to terminate full-time employment before matriculation. The purpose of this study was to determine school-related and living expenses, as well as the income and sources of income for graduate students in the second year of their NAP. To obtain the information, a student cost survey was designed and administered to participants attending NAPs across the United States during the 2001 school year. In addition, total degree costs were analyzed using a cost model assessing 4 components: educational costs, living expenses, net income foregone, and loan costs. The results showed that total degree costs incurred by graduate students in NAPs to complete their nurse anesthesia education totals $173,007. The analysis of the sources of income showed the following sources were used by respondents: guaranteed student loans; a spouse's income; agreements with future employers; stipends from universities, hospitals, and/or the military; grants; family support; and self-income. Completing a nurse anesthesia education program is expensive, although the expected return on the investment is high. Nevertheless, the expense may keep qualified graduate students from entering NAPs.

  20. A Method for Aircraft Concept Selection Using Multicriteria Interactive Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Buonanno, Michael; Mavris, Dimitri

    2005-01-01

    The problem of aircraft concept selection has become increasingly difficult in recent years as a result of a change from performance as the primary evaluation criteria of aircraft concepts to the current situation in which environmental effects, economics, and aesthetics must also be evaluated and considered in the earliest stages of the decision-making process. This has prompted a shift from design using historical data regression techniques for metric prediction to the use of physics-based analysis tools that are capable of analyzing designs outside of the historical database. The use of optimization methods with these physics-based tools, however, has proven difficult because of the tendency of optimizers to exploit assumptions present in the models and drive the design towards a solution which, while promising to the computer, may be infeasible due to factors not considered by the computer codes. In addition to this difficulty, the number of discrete options available at this stage may be unmanageable due to the combinatorial nature of the concept selection problem, leading the analyst to arbitrarily choose a sub-optimum baseline vehicle. These concept decisions such as the type of control surface scheme to use, though extremely important, are frequently made without sufficient understanding of their impact on the important system metrics because of a lack of computational resources or analysis tools. This paper describes a hybrid subjective/quantitative optimization method and its application to the concept selection of a Small Supersonic Transport. The method uses Genetic Algorithms to operate on a population of designs and promote improvement by varying more than sixty parameters governing the vehicle geometry, mission, and requirements. In addition to using computer codes for evaluation of quantitative criteria such as gross weight, expert input is also considered to account for criteria such as aeroelasticity or manufacturability which may be impossible or too computationally expensive to consider explicitly in the analysis. Results indicate that concepts resulting from the use of this method represent designs which are promising to both the computer and the analyst, and that a mapping between concepts and requirements that would not otherwise be apparent is revealed.

  1. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
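
    The reliability idea can be sketched on a toy one-dimensional problem: each surrogate sample carries an error estimate, samples whose error bound cannot flip the event indicator are kept, and the rest are either re-evaluated with the high-fidelity model or counted both ways to produce upper and lower bounds. The model, surrogate, and error bound below are invented stand-ins, not the paper's adjoint-based estimates.

```python
# Schematic of the reliability idea on a toy 1-D problem. Each surrogate
# sample carries an error estimate; if the estimate cannot flip the event
# indicator q(x) > threshold, the sample is "reliable" and the surrogate value
# is used, otherwise the high-fidelity model is evaluated. Counting unreliable
# samples both ways instead gives upper/lower bounds on the probability.
# The model, surrogate, and error bound are invented stand-ins.
import random

THRESHOLD = 0.8

def high_fidelity(x):
    return x ** 3                     # "expensive" model (toy)

def surrogate(x):
    return x ** 3 + 0.05 * (1 - x)    # cheap approximation with known bias

def error_bound(x):
    return 0.06 * (1 - x)             # assumed bound on |surrogate - high_fidelity|

samples = [random.random() for _ in range(100_000)]
hits, lower, upper, hf_calls = 0, 0, 0, 0
for x in samples:
    q, e = surrogate(x), error_bound(x)
    if q - e > THRESHOLD or q + e <= THRESHOLD:     # reliable: indicator cannot flip
        hit = q > THRESHOLD
    else:                                           # unreliable: consult high fidelity
        hf_calls += 1
        hit = high_fidelity(x) > THRESHOLD
    hits += hit
    lower += q - e > THRESHOLD                      # bound bookkeeping (no HF needed)
    upper += q + e > THRESHOLD

n = len(samples)
print(f"P(event) ~ {hits / n:.4f} with {hf_calls} high-fidelity calls "
      f"(bounds without them: [{lower / n:.4f}, {upper / n:.4f}])")
```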

  2. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  3. Implementation of extended Lagrangian dynamics in GROMACS for polarizable simulations using the classical Drude oscillator model.

    PubMed

    Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D

    2015-07-15

    Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.

  4. Design and implementation of an inexpensive target scanner for the growth of thin films by the laser-ablation process

    NASA Astrophysics Data System (ADS)

    Rao, A. M.; Moodera, J. S.

    1991-04-01

    The design of a target scanner that is inexpensive and easy to construct is described. Our target scanner system does not require an expensive personal computer to raster the laser beam uniformly over the target material, unlike the computer-driven target scanners that are currently being used in the thin-film industry. The main components of our target scanner comprise a bidirectional motor, a two-position switch, and a standard optical mirror mount.

  5. CLOCS (Computer with Low Context-Switching Time) Operating System Reference Documents

    DTIC Science & Technology

    1988-05-06

    system are met. In sum, real-time constraints make programming harder in general [20], because they add a whole new dimension - the time dimension - to ... be preempted until it allows itself to be. More is Stored; Less is Computed. Alan Jay Smith, of Berkeley, has said that any program can be made five times as swift to run, at the expense of five times the storage space. While his numbers may be questioned, his premise may not: programs can be made

  6. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-07-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  7. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, Xiuying; Deng, Donglin; Yuan, Xinxing; Hou, Panyu; Huang, Yuanyuan; Duan, Luming; Department of Physics, University of Michigan Collaboration; Center for Quantum Information, Tsinghua University Team

    2017-04-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  8. Can Early Rehabilitation after Total Hip Arthroplasty Reduce Its Major Complications and Medical Expenses? Report from a Nationally Representative Cohort.

    PubMed

    Chiung-Jui Su, Daniel; Yuan, Kuo-Shu; Weng, Shih-Feng; Hong, Rong-Bin; Wu, Ming-Ping; Wu, Hing-Man; Chou, Willy

    2015-01-01

    To investigate whether early rehabilitation reduces the occurrence of post-total hip arthroplasty (THA) complications, adverse events, and medical expenses within one postoperative year. We retrospectively retrieved data from Taiwan's National Health Insurance Research Database. Patients who had undergone THA during the period from 1998 to 2010 were recruited, matched for propensity scores, and divided into 2 groups: early rehabilitation (Early Rehab) and delayed rehabilitation (Delayed Rehab). Eight hundred twenty of 999 THA patients given early rehabilitation treatments were matched to 205 of 233 THA patients given delayed rehabilitation treatments. The Delayed Rehab group had significantly (all p < 0.001) higher medical and rehabilitation expenses and more outpatient department (OPD) visits than the Early Rehab group. In addition, the Delayed Rehab group was associated with more prosthetic infection (odds ratio (OR): 3.152; 95% confidence interval (CI): 1.211-8.203; p < 0.05) than the Early Rehab group. Early rehabilitation can significantly reduce the incidence of prosthetic infection, total rehabilitation expense, total medical expenses, and number of OPD visits within the first year after THA.

  9. Self-consistent clustering analysis: an efficient multiscale scheme for inelastic heterogeneous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z.; Bessa, M. A.; Liu, W.K.

    A predictive computational theory is shown for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
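
    The offline, data-driven clustering stage can be illustrated in isolation: material points of a toy representative volume element are grouped with k-means on (invented) strain-concentration features, so the online stage can work with a handful of clusters rather than every degree of freedom. The interaction tensors and the cluster-wise Lippmann-Schwinger solve of the online stage are not shown.

```python
# Sketch of the offline, data-driven clustering stage only: voxels of a toy
# 2-D RVE are grouped by k-means on invented stand-in features for their
# elastic strain concentration, so the online stage can operate on a few
# clusters instead of every degree of freedom. The interaction tensors and
# the cluster-wise Lippmann-Schwinger solve are not shown.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nx = ny = 32                                        # toy RVE resolution
phase = (rng.random((nx, ny)) < 0.3).astype(int)    # 0 = matrix, 1 = inclusion

# Invented stand-in for strain-concentration data from elastic pre-analyses:
# each voxel gets a small feature vector influenced by its phase.
features = rng.normal(size=(nx * ny, 3)) * 0.1 + phase.reshape(-1, 1) * 2.0

n_clusters = 8
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Cluster-averaged quantities are what the online stage operates on.
for k in range(n_clusters):
    members = labels == k
    print(f"cluster {k}: {members.sum():4d} voxels, "
          f"inclusion fraction {phase.reshape(-1)[members].mean():.2f}")
```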

  10. 11 CFR 9035.1 - Campaign expenditure limitation; compliance and fundraising exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...: (i) Coordinated expenditures under 11 CFR 109.20; (ii) Coordinated communications under 11 CFR 109.21... coordinated communications pursuant to 11 CFR 109.37 that are in-kind contributions received or accepted by... this section, 100% of salary, overhead and computer expenses incurred after a candidate's date of...

  11. 11 CFR 9035.1 - Campaign expenditure limitation; compliance and fundraising exemptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...: (i) Coordinated expenditures under 11 CFR 109.20; (ii) Coordinated communications under 11 CFR 109.21... coordinated communications pursuant to 11 CFR 109.37 that are in-kind contributions received or accepted by... this section, 100% of salary, overhead and computer expenses incurred after a candidate's date of...

  12. Do Early Outs Work Out? Teacher Early Retirement Incentive Plans.

    ERIC Educational Resources Information Center

    Brown, Herb R.; Repa, J. Theodore

    1993-01-01

    School districts offer teacher early retirement incentive plans (TERIPs) as an opportunity to hire less expensive teachers, reduce fringe benefits costs, and eliminate teaching positions. Discusses reasons for teachers to accept TERIP, and describes a computer model that allows school officials to calculate and compare costs incurred if an…

  13. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  14. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  15. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  16. 26 CFR 54.4980B-5 - COBRA continuation coverage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (for example, because of a divorce), the family deductible may be computed separately for each... the year. The plan provides that upon the divorce of a covered employee, coverage will end immediately... family had accumulated $420 of covered expenses before the divorce, as follows: $70 by each parent, $200...

  17. Reinforce Networking Theory with OPNET Simulation

    ERIC Educational Resources Information Center

    Guo, Jinhua; Xiang, Weidong; Wang, Shengquan

    2007-01-01

    As networking systems have become more complex and expensive, hands-on experiments based on networking simulation have become essential for teaching the key computer networking topics to students. The simulation approach is the most cost effective and highly useful because it provides a virtual environment for an assortment of desirable features…

  18. A DIY Ultrasonic Signal Generator for Sound Experiments

    ERIC Educational Resources Information Center

    Riad, Ihab F.

    2018-01-01

    Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathing, computer numerical control (CNC) machines,…

  19. Teaching Children Thinking

    ERIC Educational Resources Information Center

    Papert, Seymour

    2005-01-01

    The phrase "technology and education" usually means inventing new gadgets to teach the same old stuff in a thinly disguised version of the same old way. Moreover, if the gadgets are computers, the same old teaching becomes incredibly more expensive and biased towards its dullest parts, namely the kind of rote learning in which measurable…

  20. Don't Outsource It. Do It!

    ERIC Educational Resources Information Center

    Nuzzo, David

    1999-01-01

    Discusses outsourcing in library technical-services departments and how to make the department more cost-effective to limit the need for outsourcing as a less expensive alternative. Topics include experiences at State University of New York at Buffalo; efficient use of computers for in-house programs; and staff participation. (LRW)

  1. Recording Computer-Based Demonstrations and Board Work

    ERIC Educational Resources Information Center

    Spencer, Neil H.

    2010-01-01

    This article describes how a demonstration of statistical (or other) software can be recorded without expensive video equipment and saved as a presentation to be displayed with software such as Microsoft PowerPoint. Work carried out on a tablet PC, for example, can also be recorded in this fashion.

  2. Common Sense Wordworking III: Desktop Publishing and Desktop Typesetting.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1987-01-01

    Describes current desktop publishing packages available for microcomputers and discusses the disadvantages, especially in cost, for most personal computer users. Also described is a less expensive alternative technology--desktop typesetting--which meets the requirements of users who do not need elaborate techniques for combining text and graphics.…

  3. CRITTERS! A Realistic Simulation for Teaching Evolutionary Biology

    ERIC Educational Resources Information Center

    Latham, Luke G., II; Scully, Erik P.

    2008-01-01

    Evolutionary processes can be studied in nature and in the laboratory, but time and financial constraints result in few opportunities for undergraduate and high school students to explore the agents of genetic change in populations. One alternative to time consuming and expensive teaching laboratories is the use of computer simulations. We…

  4. Learning Hierarchical Skills for Game Agents from Video of Human Behavior

    DTIC Science & Technology

    2009-01-01

    intelligent agents for computer games is an important aspect of game development. However, traditional methods are expensive, and the resulting agents ... Constructing autonomous agents is an essential task in game development. In this paper, we outlined a system that analyzes preprocessed video footage of

  5. Processing Polarity: How the Ungrammatical Intrudes on the Grammatical

    ERIC Educational Resources Information Center

    Vasishth, Shravan; Brussow, Sven; Lewis, Richard L.; Drenhaus, Heiner

    2008-01-01

    A central question in online human sentence comprehension is, "How are linguistic relations established between different parts of a sentence?" Previous work has shown that this dependency resolution process can be computationally expensive, but the underlying reasons for this are still unclear. This article argues that dependency…

  6. Low Cost Alternatives to Commercial Lab Kits for Physics Experiments

    ERIC Educational Resources Information Center

    Kodejška, C.; De Nunzio, G.; Kubinek, R.; Ríha, J.

    2015-01-01

    Conducting experiments in physics using modern measuring techniques, and particularly those utilizing computers, is often much more attractive to students than conducting experiments conventionally. However, the cost of professional kits in the Czech Republic is still very expensive for many schools. The basic equipment for one student workplace…

  7. Long-Range Budget Planning in Private Colleges and Universities

    ERIC Educational Resources Information Center

    Hopkins, David S. P.; Massy, William F.

    1977-01-01

    Computer models have greatly assisted budget planners in privately financed institutions to identify and analyze major financial problems. The implementation of such a model at Stanford University is described that considers student aid expenses, indirect cost recovery, endowments, price elasticity of enrollment, and student/faculty ratios.…

  8. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.
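    As a point of reference, the LEE linearization described above can be written compactly. The sketch below uses generic notation introduced here (not taken from the report), and mean-flow-gradient terms are omitted for brevity:

```latex
% Write the state as a steady mean part plus a small perturbation,
%   q(x, t) = \bar{q}(x) + q'(x, t),
% substitute into the inviscid (Euler) equations, and keep only the terms
% that are linear in q'. Neglecting mean-flow-gradient terms for brevity,
\frac{\partial q'}{\partial t}
  + \sum_{i=1}^{3} A_i(\bar{q}) \, \frac{\partial q'}{\partial x_i} \approx 0,
\qquad
A_i(\bar{q}) = \left. \frac{\partial F_i}{\partial q} \right|_{q = \bar{q}},
% where F_i are the inviscid flux vectors and A_i their Jacobians
% evaluated at the mean flow.
```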

  9. Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)

    NASA Technical Reports Server (NTRS)

    Claus, Russel W.

    1994-01-01

    Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributions to the high cost is the need to perform many large scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between individuals or teams doing the design and must be uncovered during expensive engine testing. This overview will discuss a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high performance computing into a Numerical Propulsion System Simulator (NPSS).

  10. Symmetrically private information retrieval based on blind quantum computing

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Yu, Jianping; Wang, Ping; Xu, Lingling

    2015-05-01

    Universal blind quantum computation (UBQC) is a new secure quantum computing protocol which allows a user Alice who does not have any sophisticated quantum technology to delegate her computing to a server Bob without leaking any privacy. Using the features of UBQC, we propose a protocol to achieve symmetrically private information retrieval, which allows a quantum limited Alice to query an item from Bob with a fully fledged quantum computer; meanwhile, the privacy of both parties is preserved. The security of our protocol is based on the assumption that malicious Alice has no quantum computer, which avoids the impossibility proof of Lo. For the honest Alice, she is almost classical and only requires minimal quantum resources to carry out the proposed protocol. Therefore, she does not need any expensive laboratory which can maintain the coherence of complicated quantum experimental setups.

  11. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
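    The finite-difference route mentioned above is conceptually simple but costly: each design variable requires two additional coupled analyses. A minimal sketch, where the hypothetical `coupled_stress` stands in for the aeroelastic analysis (names are illustrative, not from the paper):

```python
import numpy as np

def central_difference_sensitivities(response, x0, rel_step=1e-3):
    """Estimate d(response)/dx_i by central finite differences.

    `response` wraps one full coupled analysis; each design variable
    therefore costs two additional coupled solutions, which is why
    finite differencing a coupled aeroelastic model is expensive.
    """
    x0 = np.asarray(x0, dtype=float)
    grads = np.zeros_like(x0)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (response(xp) - response(xm)) / (2.0 * h)
    return grads

# Hypothetical use: `coupled_stress` stands in for the aero + structures loop.
coupled_stress = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
print(central_difference_sensitivities(coupled_stress, [1.0, 2.0]))
```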

  12. Current CFD Practices in Launch Vehicle Applications

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2012-01-01

    The quest for sustained space exploration will require the development of advanced launch vehicles, and efficient and reliable operating systems. Development of launch vehicles via a test-fail-fix approach is very expensive and time-consuming. For decision making, modeling and simulation (M&S) has played increasingly important roles in many aspects of launch vehicle development. It is therefore essential to develop and maintain the most advanced M&S capability. More specifically, computational fluid dynamics (CFD) has been providing critical data for developing launch vehicles, complementing expensive testing. During the past three decades CFD capability has increased remarkably along with advances in computer hardware and computing technology. However, most of the fundamental CFD capability in launch vehicle applications is derived from the past advances. Specific gaps in the solution procedures are being filled primarily through "piggy backed" efforts on various projects while solving today's problems. Therefore, some of the advanced capabilities are not readily available for various new tasks, and mission-support problems are often analyzed using ad hoc approaches. The current report is intended to present our view on the state of the art (SOA) in CFD and its shortcomings in support of space transport vehicle development. Best practices in solving current issues will be discussed using examples from ascending launch vehicles. Some of the pacing items will be discussed in conjunction with these examples.

  13. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web based virtual environment for facilitating repetitive movement training, with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  14. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
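    For readers unfamiliar with the estimator, the core of MBAR is a small self-consistent iteration over the reduced potentials of all pooled samples. The sketch below is a bare-bones NumPy version of that iteration, not the pymbar implementation used in practice:

```python
import numpy as np
from scipy.special import logsumexp

def mbar_free_energies(u_kn, N_k, n_iter=2000, tol=1e-10):
    """Bare-bones self-consistent MBAR iteration (not the pymbar API).

    u_kn[k, n]: reduced potential of pooled sample n evaluated in state k.
    N_k[k]:     number of samples drawn from state k.
    Returns dimensionless free energies f_k with f_0 fixed to zero;
    differences f_j - f_i are the free-energy differences in units of kT.
    """
    K, N = u_kn.shape
    f_k = np.zeros(K)
    log_N_k = np.log(N_k)
    for _ in range(n_iter):
        # log of the mixture denominator for each pooled sample n
        log_denom = logsumexp(f_k[:, None] + log_N_k[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]
        if np.max(np.abs(f_new - f_k)) < tol:
            return f_new
        f_k = f_new
    return f_k
```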

  15. Combining configurational energies and forces for molecular force field optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.
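    The paper's estimator minimizes a statistical-distance metric; as a simpler illustration of the general idea of folding both configurational energies and forces into a single parameter-fitting objective, here is a hedged least-squares sketch (all callables and weights are hypothetical placeholders, not the authors' method):

```python
import numpy as np
from scipy.optimize import minimize

def fit_force_field(params0, configs, ref_energies, ref_forces,
                    model_energy, model_forces, w_force=0.1):
    """Fit force-field parameters to reference energies and forces with a
    weighted least-squares objective (a generic stand-in; the paper itself
    minimizes a statistical-distance metric, not this residual)."""
    def objective(p):
        e_res = np.array([model_energy(p, c) for c in configs]) - ref_energies
        f_res = np.concatenate([(model_forces(p, c) - rf).ravel()
                                for c, rf in zip(configs, ref_forces)])
        return np.sum(e_res ** 2) + w_force * np.sum(f_res ** 2)
    return minimize(objective, params0, method="Nelder-Mead")

# Toy demonstration: recover the spring constant of a harmonic model.
rng = np.random.default_rng(0)
configs = [rng.standard_normal(5) for _ in range(20)]
ref_e = np.array([0.5 * 2.0 * np.sum(c ** 2) for c in configs])
ref_f = [-2.0 * c for c in configs]
energy = lambda p, c: 0.5 * p[0] * np.sum(c ** 2)
forces = lambda p, c: -p[0] * c
print(fit_force_field([1.0], configs, ref_e, ref_f, energy, forces).x)
```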

  16. Evaluation of Jaundice in Adults.

    PubMed

    Fargo, Matthew V; Grogan, Scott P; Saguil, Aaron

    2017-02-01

    Jaundice in adults can be an indicator of significant underlying disease. It is caused by elevated serum bilirubin levels in the unconjugated or conjugated form. The evaluation of jaundice relies on the history and physical examination. The initial laboratory evaluation should include fractionated bilirubin, a complete blood count, alanine transaminase, aspartate transaminase, alkaline phosphatase, γ-glutamyltransferase, prothrombin time and/or international normalized ratio, albumin, and protein. Imaging with ultrasonography or computed tomography can differentiate between extrahepatic obstructive and intrahepatic parenchymal disorders. Ultrasonography is the least invasive and least expensive imaging method. A more extensive evaluation may include additional cancer screening, biliary imaging, autoimmune antibody assays, and liver biopsy. Unconjugated hyperbilirubinemia occurs with increased bilirubin production caused by red blood cell destruction, such as hemolytic disorders, and disorders of impaired bilirubin conjugation, such as Gilbert syndrome. Conjugated hyperbilirubinemia occurs in disorders of hepatocellular damage, such as viral and alcoholic hepatitis, and cholestatic disorders, such as choledocholithiasis and neoplastic obstruction of the biliary tree.

  17. Combining configurational energies and forces for molecular force field optimization

    DOE PAGES

    Vlcek, Lukas; Sun, Weiwei; Kent, Paul R. C.

    2017-07-21

    While quantum chemical simulations have been increasingly used as an invaluable source of information for atomistic model development, the high computational expenses typically associated with these techniques often limit thorough sampling of the systems of interest. It is therefore of great practical importance to use all available information as efficiently as possible, and in a way that allows for consistent addition of constraints that may be provided by macroscopic experiments. We propose a simple approach that combines information from configurational energies and forces generated in a molecular dynamics simulation to increase the effective number of samples. Subsequently, this information is used to optimize a molecular force field by minimizing the statistical distance similarity metric. We also illustrate the methodology on an example of a trajectory of configurations generated in equilibrium molecular dynamics simulations of argon and water and compare the results with those based on the force matching method.

  18. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Single-probe measurements from DR12 galaxy clustering – towards an accurate model

    DOE PAGES

    Chia-Hsun Chuang; Pellejero-Ibanez, Marco; Rodriguez-Torres, Sergio; ...

    2016-06-26

    We analyze the broad-range shape of the monopole and quadrupole correlation functions of the BOSS Data Release 12 (DR12) CMASS and LOWZ galaxy sample to obtain constraints on the Hubble expansion rate H(z), the angular-diameter distance DA(z), the normalised growth rate f(z)σ8(z), and the physical matter density Ωmh2. In addition, we adopt wide and flat priors on all model parameters in order to ensure the results are those of a `single-probe' galaxy clustering analysis. We also marginalize over three nuisance terms that account for potential observational systematics affecting the measured monopole. However, such Monte Carlo Markov Chain analysis is computationally expensive for advanced theoretical models, thus we develop a new methodology to speed up our analysis.

  19. Telehealth Innovations in Health Education and Training

    PubMed Central

    De, Suvranu; Hall, Richard W.; Johansen, Edward; Meglan, Dwight; Peng, Grace C.Y.

    2010-01-01

    Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive/remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not options; and the development of more simulator applications. This article presents the results of discussion on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences. PMID:20155874

  20. Model predictive and reallocation problem for CubeSat fault recovery and attitude control

    NASA Astrophysics Data System (ADS)

    Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina

    2018-01-01

    In recent years, thanks to the increase of the know-how on machine-learning techniques and the advance of the computational capabilities of on-board processing, expensive computing algorithms, such as Model Predictive Control, have begun to spread in space applications even on small on-board processors. The paper presents an algorithm for an optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. This algorithm employs optimization techniques aimed at obtaining the optimal recovery solution, and uses a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit: the attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects the errors in the attitude determination of the satellite, and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows the control action to be redistributed when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of the implementation of the Model Predictive approach to control the attitude of the satellite.
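    As background, a generic receding-horizon linear MPC step can be posed as a small convex program; the sketch below (using cvxpy) is illustrative only and is not the paper's CubeSat attitude/fault-recovery formulation:

```python
import cvxpy as cp
import numpy as np

def linear_mpc_step(A, B, x0, horizon=10, u_max=1.0):
    """One receding-horizon step for a generic discrete-time linear system
    x_{k+1} = A x_k + B u_k (illustrative only; not the paper's CubeSat
    attitude/fault-recovery formulation)."""
    n, m = B.shape
    x = cp.Variable((n, horizon + 1))
    u = cp.Variable((m, horizon))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(horizon):
        cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.norm(u[:, k], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]  # apply only the first control move, then re-solve

# Hypothetical double-integrator example.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
print(linear_mpc_step(A, B, np.array([1.0, 0.0])))
```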

  1. Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis

    PubMed Central

    Steele, Joe; Bastola, Dhundy

    2014-01-01

    Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
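    As a concrete example of the word-frequency measures reviewed, the D2 statistic is simply the inner product of the k-mer count vectors of two sequences. A minimal sketch:

```python
from collections import Counter

def kmer_counts(seq, k=4):
    """Count the k-mers (words) occurring in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2_statistic(seq_a, seq_b, k=4):
    """D2 statistic: number of matching k-mer pairs between two sequences,
    i.e. the inner product of their word-count vectors."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

# Identical sequences score higher than an unrelated pair of the same length.
print(d2_statistic("ACGTACGTACGT", "ACGTACGTACGT", k=3),
      d2_statistic("ACGTACGTACGT", "TTTTTTTTTTTT", k=3))
```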

  2. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. In this study, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with an error model gives significantly more accurate predictions along with reasonable credible intervals.
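    A minimal sketch of the surrogate-in-the-likelihood idea, assuming a scikit-learn Gaussian-process emulator and a simple random-walk Metropolis sampler (the study's own surrogates, error model and sampler are more elaborate):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def build_surrogate(train_params, train_outputs):
    """Fit a Gaussian-process emulator of the expensive model's outputs."""
    return GaussianProcessRegressor(normalize_y=True).fit(train_params, train_outputs)

def metropolis(surrogate, obs, n_steps=5000, step=0.1, sigma=1.0, dim=3):
    """Random-walk Metropolis sampler that calls the cheap surrogate, rather
    than the expensive groundwater model, inside the likelihood."""
    rng = np.random.default_rng(0)
    theta = np.zeros(dim)

    def log_post(t):
        pred = surrogate.predict(t.reshape(1, -1))[0]
        return -0.5 * np.sum((pred - obs) ** 2) / sigma ** 2 - 0.5 * np.sum(t ** 2)

    samples, lp = [], log_post(theta)
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(dim)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Hypothetical use: emulate a model from 50 prior runs, then sample.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 3))
Y = np.sin(X).sum(axis=1, keepdims=True) + 0.01 * rng.standard_normal((50, 1))
chain = metropolis(build_surrogate(X, Y), obs=np.array([0.5]))
```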

  3. BHR equations re-derived with immiscible particle effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarzkopf, John Dennis; Horwitz, Jeremy A.

    2015-05-01

    Compressible and variable density turbulent flows with dispersed phase effects are found in many applications ranging from combustion to cloud formation. These types of flows are among the most challenging to simulate. While the exact equations governing a system of particles and fluid are known, computational resources limit the scale and detail that can be simulated in this type of problem. Therefore, a common method is to simulate averaged versions of the flow equations, which still capture the salient physics and are relatively less computationally expensive. Besnard developed such a model for variable density miscible turbulence, where ensemble-averaging was applied to the flow equations to yield a set of filtered equations. Besnard further derived transport equations for the Reynolds stresses, the turbulent mass flux, and the density-specific volume covariance, to help close the filtered momentum and continuity equations. We re-derive the exact BHR closure equations which include integral terms owing to immiscible effects. Physical interpretations of the additional terms are proposed along with simple models. The goal of this work is to extend the BHR model to allow for the simulation of turbulent flows where an immiscible dispersed phase is non-trivially coupled with the carrier phase.

  4. Enzyme stabilization via computationally guided protein stapling.

    PubMed

    Moore, Eric J; Zorine, Dmitri; Hansen, William A; Khare, Sagar D; Fasan, Rudi

    2017-11-21

    Thermostabilization represents a critical and often obligatory step toward enhancing the robustness of enzymes for organic synthesis and other applications. While directed evolution methods have provided valuable tools for this purpose, these protocols are laborious and time-consuming and typically require the accumulation of several mutations, potentially at the expense of catalytic function. Here, we report a minimally invasive strategy for enzyme stabilization that relies on the installation of genetically encoded, nonreducible covalent staples in a target protein scaffold using computational design. This methodology enables the rapid development of myoglobin-based cyclopropanation biocatalysts featuring dramatically enhanced thermostability (ΔTm = +18.0 °C and ΔT50 = +16.0 °C) as well as increased stability against chemical denaturation [ΔCm(GndHCl) = 0.53 M], without altering their catalytic efficiency and stereoselectivity properties. In addition, the stabilized variants offer superior performance and selectivity compared with the parent enzyme in the presence of a high concentration of organic cosolvents, enabling the more efficient cyclopropanation of a water-insoluble substrate. This work introduces and validates an approach for protein stabilization which should be applicable to a variety of other proteins and enzymes.

  5. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach for an energy based continuum damage model which accounts for stress-/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on ply level to avoid expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with variation of the impact speed. The simulation results show that the presented approach enables a good representation of the force-/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.

  6. Out-of-pocket fertility patient expense: data from a multicenter prospective infertility cohort.

    PubMed

    Wu, Alex K; Odisho, Anobel Y; Washington, Samuel L; Katz, Patricia P; Smith, James F

    2014-02-01

    The high costs of fertility care may deter couples from seeking care. Urologists often are asked about the costs of these treatments. To our knowledge previous studies have not addressed the direct out-of-pocket costs to couples. We characterized these expenses in patients seeking fertility care. Couples were prospectively recruited from 8 community and academic reproductive endocrinology clinics. Each participating couple completed face-to-face or telephone interviews and cost diaries at study enrollment, and 4, 10 and 18 months of care. We determined overall out-of-pocket costs, in addition to relationships between out-of-pocket costs and treatment type, clinical outcomes and socioeconomic characteristics on multivariate linear regression analysis. A total of 332 couples completed cost diaries and had data available on treatment and outcomes. Average age was 36.8 and 35.6 years in men and women, respectively. Of this cohort 19% received noncycle based therapy, 4% used ovulation induction medication only, 22% underwent intrauterine insemination and 55% underwent in vitro fertilization. The median overall out-of-pocket expense was $5,338 (IQR 1,197-19,840). Couples using medication only had the lowest median out-of-pocket expenses at $912 while those using in vitro fertilization had the highest at $19,234. After multivariate adjustment the out-of-pocket expense was not significantly associated with successful pregnancy. On multivariate analysis couples treated with in vitro fertilization spent an average of $15,435 more than those treated with intrauterine insemination. Couples spent about $6,955 for each additional in vitro fertilization cycle. These data provide real-world estimates of out-of-pocket costs, which can be used to help couples plan for expenses that they may incur with treatment. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  7. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach to Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, exploiting its inherent fine-grained parallelism on multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
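    To illustrate why the moment evaluations are the expensive kernel, the sketch below evaluates the moments of a toy 1-D maximum-entropy density by fixed-grid quadrature; the actual 35-moment system is three-dimensional and far larger, and the grid and coefficients here are hypothetical:

```python
import numpy as np

def maxent_moments(alpha, v_grid):
    """Evaluate the moments of a 1-D maximum-entropy density
    f(v) = exp(sum_i alpha_i v**i) by fixed-grid quadrature (a toy analogue
    of the expensive moment evaluations; the 35-moment system is 3-D)."""
    powers = np.vander(v_grid, N=len(alpha), increasing=True)  # [1, v, v^2, ...]
    f = np.exp(powers @ alpha)
    dv = v_grid[1] - v_grid[0]
    # moment_k ~ sum_j v_j**k f(v_j) dv
    return np.array([np.sum(v_grid ** k * f) * dv for k in range(len(alpha))])

# Example: alpha = [0, 0, -0.5] is an unnormalized standard Gaussian,
# so moment_0 ~ sqrt(2*pi) and moment_1 ~ 0.
print(maxent_moments(np.array([0.0, 0.0, -0.5]), np.linspace(-8.0, 8.0, 2001)))
```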

  8. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper employ computational algorithms or procedural implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high-performance computing to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  9. Simple, inexpensive computerized rodent activity meters.

    PubMed

    Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M

    1995-10-01

    We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.

  10. Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2010-01-01

    This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.

  11. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  12. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  13. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  14. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  15. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second layer of electronic structure method to recover all the missing long-range interactions in the parent large molecule. Overall, the work featured here dramatically decreases the computational expense and empowers the execution of very accurate ab initio calculations (gold-standard CCSD(T)) on large molecules and thereby facilitates sophisticated electronic structure applications to a wide range of important chemical problems.

  16. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
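    The operator-duration effect can be mimicked with a toy split scheme: emission plus advection alternated with spatially varying first-order loss, applied sequentially over a chosen operator duration. The sketch below is purely illustrative (all parameters are invented) and is not the GEOS-Chem operator set:

```python
import numpy as np

def split_simulation(op_minutes, n_cells=240, hours=24.0,
                     cells_per_min=0.2, emis=0.01, k0=2e-3):
    """Toy 1-D tracer column: emission and advection (transport operator)
    alternated with spatially varying first-order loss (chemistry operator),
    each applied over the chosen operator duration."""
    c = np.zeros(n_cells)
    k = k0 * (1.0 + np.arange(n_cells) / n_cells)   # position-dependent loss rate
    n_ops = int(hours * 60.0 / op_minutes)
    shift = int(round(cells_per_min * op_minutes))  # cells advected per operator
    for _ in range(n_ops):
        c[:5] += emis * op_minutes          # emissions over this interval
        c = np.roll(c, shift)               # transport operator (periodic domain)
        c *= np.exp(-k * op_minutes)        # chemistry operator
    return c

reference = split_simulation(op_minutes=5.0)     # "fine" operator duration
coarse = split_simulation(op_minutes=60.0)
print("RMS difference vs 5-min reference:",
      np.sqrt(np.mean((coarse - reference) ** 2)))
```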

  17. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.

  18. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. Additionally, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
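    One common building block of such designs is a greedy D-optimality selection over candidate observations, sketched below for a hypothetical sensitivity (Jacobian) matrix; the paper's actual workflow couples this kind of idea with Null-Space Monte Carlo and the minimax criterion:

```python
import numpy as np

def greedy_d_optimal(J, n_pick, prior_var=1.0):
    """Greedily select candidate observations that maximize the log-determinant
    of the regularized information matrix J_s^T J_s + I/prior_var (a simple
    D-optimality surrogate; J[i, :] is the sensitivity of candidate i to the
    model parameters)."""
    n_cand, n_par = J.shape
    info = np.eye(n_par) / prior_var            # prior information
    chosen = []
    for _ in range(n_pick):
        best_gain, best_i = -np.inf, None
        for i in range(n_cand):
            if i in chosen:
                continue
            _, logdet = np.linalg.slogdet(info + np.outer(J[i], J[i]))
            if logdet > best_gain:
                best_gain, best_i = logdet, i
        chosen.append(best_i)
        info += np.outer(J[best_i], J[best_i])  # commit to the best candidate
    return chosen

# Hypothetical use with a random sensitivity matrix for 50 candidate wells.
rng = np.random.default_rng(1)
print(greedy_d_optimal(rng.standard_normal((50, 8)), n_pick=5))
```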

  19. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environment monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they demand increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicable scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
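    The LMI building block itself is easy to prototype outside MATLAB; the sketch below checks a single-node Lyapunov LMI (find P > 0 with AᵀP + PA < 0) using cvxpy, whereas the paper's distributed conditions couple many such blocks across the network:

```python
import cvxpy as cp
import numpy as np

def lyapunov_lmi_feasible(A, eps=1e-6):
    """Single-node Lyapunov LMI: find P > 0 with A^T P + P A < 0.
    Solved here with cvxpy/SCS as a stand-in for the MATLAB LMI toolbox;
    the paper's distributed conditions couple many such blocks."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in ("optimal", "optimal_inaccurate"), P.value

# A Hurwitz example node: eigenvalues -1 and -2, so the LMI is feasible.
feasible, P = lyapunov_lmi_feasible(np.array([[0.0, 1.0], [-2.0, -3.0]]))
print(feasible)
```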

  20. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    PubMed

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained attention from scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding the interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, the protocol for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.

  1. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.

  2. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue for successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are demanded. The available codes for computing photonic bands are also applied to PhC waveguides. They are reliable but not very efficient, which is even more pronounced for dispersive material. We present a method based on higher order finite elements with curved cells, which allows solving for the band structure while directly taking into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vectors k. For this method, we demonstrate the high efficiency for the computation of guided PhC waveguide modes by a convergence analysis.

  3. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  4. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking technologies. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  5. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data.

    PubMed

    Nettling, Martin; Thieme, Nils; Both, Andreas; Grosse, Ivo

    2014-02-04

    New technologies for analyzing biological samples, like next generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads, calculating transcription factor binding probabilities, estimating epigenetic modification enriched regions or determining single nucleotide polymorphisms) increase this amount of position-specific DNA-related data even further. Hence, requesting data becomes challenging and expensive and is often implemented using specialised hardware. In addition, picking specific data as fast as possible becomes increasingly important in many fields of science. The general problem of handling big data sets was addressed by developing specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource saving requests, and (iii) running on a single standard computer hardware. Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it focuses on optimizing the retrieval of related single lookups as range requests, which are constantly needed for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database. The test setting considers two biological data sets. We use standard desktop hardware as the test environment. DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10000. Furthermore, it can work with significantly larger data sets. Our work focuses on mid-sized data sets up to several billion records without requiring cluster technology. Storing position-specific data is a general problem and the concept we present here is a generalized approach. Hence, it can be easily applied to other fields of bioinformatics.

  6. Automated symbolic calculations in nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Kröger, Martin; Hütter, Markus

    2010-12-01

    We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica TM notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics.
    Program summary
    Program title: Poissonbracket.nb
    Catalogue identifier: AEGW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 227 952
    No. of bytes in distributed program, including test data, etc.: 268 918
    Distribution format: tar.gz
    Programming language: Mathematica TM 7.0
    Computer: Any computer running Mathematica TM 6.0 and later versions
    Operating system: Linux, MacOS, Windows
    RAM: 100 Mb
    Classification: 4.2, 5, 23
    Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica TM notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form.
    Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica TM.
    Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
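
    For reference, the identity being tested takes the standard form below (generic notation for functionals of continuous fields; the notebook's own conventions may differ). For any three functionals A, B, C,

        \{A,\{B,C\}\} + \{B,\{C,A\}\} + \{C,\{A,B\}\} = 0,
        \qquad
        \{A,B\} = \int \frac{\delta A}{\delta x_i(\mathbf{r})}\, L_{ij}\, \frac{\delta B}{\delta x_j(\mathbf{r})}\, \mathrm{d}\mathbf{r},

    where L_{ij} is the operator defining the candidate Poisson bracket. The local form described above allows this identity to be checked directly from variational derivatives, without repeated integration by parts.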

  7. Simulation and Analysis of the AFLC Bulk Data Network Using Abstract Data Types.

    DTIC Science & Technology

    1981-12-01

    performs. Simulation is more expensive than queueing, but it is often the only way to study complex functional relationships in a large system. Unlike... relationship between throughput, response and cost is shown in Figure 2. At a given cost level, additional throughput can be obtained at the expense...improved by adding resources, but this increases the total cost of the system. Network models are used to study the relationship between cost

  8. MAGMA: Generalized Gene-Set Analysis of GWAS Data

    PubMed Central

    de Leeuw, Christiaan A.; Mooij, Joris M.; Heskes, Tom; Posthuma, Danielle

    2015-01-01

    By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, to provide better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn’s Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn’s Disease while maintaining a correct type 1 error rate. Moreover, the MAGMA analysis of the Crohn’s Disease data was found to be considerably faster as well. PMID:25885710
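
    As a rough illustration of the regression-based gene analysis idea, the sketch below (Python; a simplified stand-in, not MAGMA's actual model or implementation) regresses a phenotype on the leading principal components of a gene's SNP genotypes and tests them jointly with an F-test. The function gene_test and its arguments are invented for this example.

        import numpy as np
        from scipy import stats

        def gene_test(genotypes, phenotype, n_components=5):
            """Joint F-test of the top principal components of one gene's SNP genotypes.

            genotypes: (n_samples, n_snps) matrix; phenotype: (n_samples,) vector.
            Returns a gene-level association p-value (simplified sketch)."""
            X = genotypes - genotypes.mean(axis=0)
            # Project SNPs onto their top principal components to absorb linkage disequilibrium.
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            k = int(min(n_components, np.sum(s > 1e-10)))
            pcs = U[:, :k] * s[:k]
            # Multiple regression of the phenotype on the PCs plus an intercept.
            design = np.column_stack([np.ones(len(phenotype)), pcs])
            beta, _, _, _ = np.linalg.lstsq(design, phenotype, rcond=None)
            resid = phenotype - design @ beta
            rss1 = resid @ resid
            rss0 = np.sum((phenotype - phenotype.mean()) ** 2)
            df1, df2 = k, len(phenotype) - k - 1
            f_stat = ((rss0 - rss1) / df1) / (rss1 / df2)
            return stats.f.sf(f_stat, df1, df2)

        rng = np.random.default_rng(0)
        G = rng.integers(0, 3, size=(500, 20)).astype(float)   # 500 samples, 20 SNPs in one gene
        pheno = 0.3 * G[:, 0] + rng.normal(size=500)           # phenotype driven by one SNP
        print(gene_test(G, pheno))                              # small p-value expected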

  9. MAGMA: generalized gene-set analysis of GWAS data.

    PubMed

    de Leeuw, Christiaan A; Mooij, Joris M; Heskes, Tom; Posthuma, Danielle

    2015-04-01

    By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, to provide better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn's Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn's Disease while maintaining a correct type 1 error rate. Moreover, the MAGMA analysis of the Crohn's Disease data was found to be considerably faster as well.

  10. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem

    PubMed Central

    Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potentially problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated annealing based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, and the results obtained are compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method offers outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662
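
    The acceptance step the authors target is, in its standard form, the Metropolis criterion sketched below in Python (generic simulated annealing, not the authors' modified acceptance rule): worsening moves are accepted with probability exp(-delta/temperature), which costs one exponential per candidate move.

        import math
        import random

        def accept_move(delta, temperature):
            """Standard Metropolis acceptance test used in simulated annealing.

            Improving moves (delta <= 0) are always accepted; worsening moves are
            accepted with probability exp(-delta / temperature)."""
            if delta <= 0:
                return True
            return random.random() < math.exp(-delta / temperature)

        # Example: a move that worsens the tour cost by 5.0 at temperature 10.0
        print(accept_move(5.0, 10.0))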

  11. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    PubMed

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potentially problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated annealing based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, and the results obtained are compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method offers outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems.

  12. 12 CFR 747.613 - Further proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Procedures Applicable to Recovery of Attorneys Fees and Other Expenses Under the Equal Access to... argument, additional written submissions or an evidentiary hearing. Such further proceedings shall be held... the disputed issues and shall explain why the additional proceedings are necessary to resolve the...

  13. 12 CFR 747.613 - Further proceedings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Procedures Applicable to Recovery of Attorneys Fees and Other Expenses Under the Equal Access to... argument, additional written submissions or an evidentiary hearing. Such further proceedings shall be held... the disputed issues and shall explain why the additional proceedings are necessary to resolve the...

  14. 12 CFR 747.613 - Further proceedings.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and Procedures Applicable to Recovery of Attorneys Fees and Other Expenses Under the Equal Access to... argument, additional written submissions or an evidentiary hearing. Such further proceedings shall be held... the disputed issues and shall explain why the additional proceedings are necessary to resolve the...

  15. Crunching Knowledge: The Coming Environment for the Information Specialist.

    ERIC Educational Resources Information Center

    Nelson, Milo

    The adjustment of librarians to technological change has been difficult because they have been too close observers of the present at the expense of daydreaming about society's likely future. The brisk pace of business, industry, and Wall Street has been accelerated even more by developments in information technology and computer communications. A…

  16. Efficacy and Utility of Computer-Assisted Cognitive Behavioural Therapy for Anxiety Disorders

    ERIC Educational Resources Information Center

    Przeworski, Amy; Newman, Michelle G.

    2006-01-01

    Despite the efficacy of cognitive behavioural treatment for anxiety disorders, more than 70% of individuals with anxiety disorders go untreated every year. This is partially due to obstacles to treatment including limited access to mental health services for rural residents, the expense of treatment and the inconvenience of attending weekly…

  17. Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…

  18. 19 CFR 10.710 - Value-content requirement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., character, or use, which is then used in Jordan in the production or manufacture of a new or different... production or manufacture of a new or different article of commerce that is imported into the United States... determined by computing the sum of: (A) All expenses incurred in the growth, production, or manufacture of...

  19. A Simple, Low-Cost, Data-Logging Pendulum Built from a Computer Mouse

    ERIC Educational Resources Information Center

    Gintautas, Vadas; Hubler, Alfred

    2009-01-01

    Lessons and homework problems involving a pendulum are often a big part of introductory physics classes and laboratory courses from high school to undergraduate levels. Although laboratory equipment for pendulum experiments is commercially available, it is often expensive and may not be affordable for teachers on fixed budgets, particularly in…

  20. 26 CFR 1.863-3 - Allocation and apportionment of income from certain sales of inventory.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... income from sources within and without the United States determined under the 50/50 method. Research and... Possession Purchase Sales—(A) Business activity method. Gross income from Possession Purchase Sales is... from Possession Purchase Sales computed under the business activity method, the amounts of expenses...

  1. 30 CFR 206.353 - How do I determine transmission deductions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Depreciation under paragraphs (g) and (h) of this section and a return on undepreciated capital investment under paragraphs (g) and (i) of this section or (iv) A return on the capital investment in the..., are not allowable expenses. (g) To compute costs associated with capital investment, a lessee may use...

  2. 30 CFR 206.354 - How do I determine generating deductions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Depreciation under paragraphs (g) and (h) of this section and a return on undepreciated capital investment under paragraphs (g) and (i) of this section; or (iv) A return on capital investment in the power plant... allowable expenses. (g) To compute costs associated with capital investment, a lessee may use either...

  3. An Authoring System for Creating Computer-Based Role-Performance Trainers.

    ERIC Educational Resources Information Center

    Guralnick, David; Kass, Alex

    This paper describes a multimedia authoring system called MOPed-II. Like other authoring systems, MOPed-II reduces the time and expense of producing end-user applications by eliminating much of the programming effort they require. However, MOPed-II reflects an approach to authoring tools for educational multimedia which is different from most…

  4. Cost Effective Computer-Assisted Legal Research, or When Two Are Better Than One.

    ERIC Educational Resources Information Center

    Griffith, Cary

    1986-01-01

    An analysis of pricing policies and costs of LEXIS and WESTLAW indicates that it is less expensive to subscribe to both using a PC microcomputer rather than a dedicated terminal. Rules for when to use each database are essential to lowering the costs of online legal research. (EM)

  5. Hardware for Hard-Up Schools?

    ERIC Educational Resources Information Center

    St. John, Stuart A.

    2012-01-01

    The purpose of this work was to investigate ways in which everyday computers can be used in schools to fulfil several of the roles of more expensive, specialized laboratory equipment for teaching and learning purposes. The brief adopted was to keep things as straightforward as possible so that any school science department with a few basic tools…

  6. 12 CFR 563.170 - Examinations and audits; appraisals; establishment and maintenance of records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... any time, by the Office, with appraisals when deemed advisable, in accordance with general policies from time to time established by the Office. The costs, as computed by the Office, of any examinations made by it, including office analysis, overhead, per diem, travel expense, other supervision by the...

  7. Budgeting for Quality and Survival in the 21st Century--Guidelines for Directors.

    ERIC Educational Resources Information Center

    Whitehead, R. Ann

    2003-01-01

    Offers practical guidelines for directors of child care centers on creating a budget and managing the center's finances. Suggests ways to establish priorities, establish a tuition rate, compute projected monthly enrollment and income, budget variable and fixed expenses, create the final budget, and monitor financial statements. (JPB)

  8. 26 CFR 1.50B-4 - Partnerships.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Partnerships. 1.50B-4 Section 1.50B-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-4 Partnerships. (a) General rule—(1) In general...

  9. A glacier runoff extension to the Precipitation Runoff Modeling System

    Treesearch

    A. E. Van Beusekom; R. J. Viger

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...

  10. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Ownership type (profit, non-profit, or limited dividend); and (10) Geographic. (c) Cost adjustments. HUD... ceiling; (3) Application of a four percent reduction for any PEL calculated over $325 PUM, with the reduction limited so that a PEL will not be reduced to less than $325; and (4) The reduction of audit costs...

  11. DYNER: A DYNamic ClustER for Education and Research

    ERIC Educational Resources Information Center

    Kehagias, Dimitris; Grivas, Michael; Mamalis, Basilis; Pantziou, Grammati

    2006-01-01

    Purpose: The purpose of this paper is to evaluate the use of a non-expensive dynamic computing resource, consisting of a Beowulf class cluster and a NoW, as an educational and research infrastructure. Design/methodology/approach: Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, provide…

  12. 37 CFR 385.23 - Royalty rates and subscriber-based royalty floors for specific types of services.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND... DIGITAL PHONORECORDS Limited Offerings, Mixed Service Bundles, Music Bundles, Paid Locker Services and... expensed for the rights to make the relevant permanent digital downloads and ringtones. (b) Computation of...

  13. Simulating the fate of fall- and spring-applied poultry litter nitrogen in corn production

    USDA-ARS?s Scientific Manuscript database

    Monitoring the fate of N derived from manures applied to fertilize crops is difficult, time consuming, and relatively expensive. But computer simulation models can help understand the interactions among various N processes in the soil-plant system and determine the fate of applied N. The RZWQM2 was ...

  14. State-of-the-art methods for testing materials outdoors

    Treesearch

    R. Sam Williams

    2004-01-01

    In recent years, computers, sensors, microelectronics, and communication technologies have made it possible to automate the way materials are tested in the field. It is now possible to purchase monitoring equipment to measure weather and materials properties. The measurement of materials response often requires innovative approaches and added expense, but the...

  15. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    Instruction Stream, Multiple Data Stream Machines ... Networks of Machines...independent memory units and connecting them to the processors by an interconnection network. Many different interconnection schemes have been considered, and...connected to the same processor at the same time. Crossbar switching networks are still too expensive to be practical for connecting large numbers of

  16. Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Stanford, Bret K.

    2017-01-01

    Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the common research model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.

  17. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
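
    A minimal sketch of the treed idea is given below in Python (illustrative only: it uses scikit-learn's GaussianProcessRegressor and a fixed, axis-aligned split, whereas the treed Gaussian process of the paper learns the partition): the input space is divided at a known bifurcation boundary and an independent GP is fitted on each side.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy bifurcating response: the output jumps when the input crosses x = 0.5.
        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(200, 1))
        y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.sin(6 * X[:, 0]))

        def fit_treed_gp(X, y, split=0.5):
            """Fit one GP per region of a fixed axis-aligned partition (simplified treed GP)."""
            models = {}
            for name, mask in {"left": X[:, 0] < split, "right": X[:, 0] >= split}.items():
                gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
                models[name] = gp.fit(X[mask], y[mask])
            return models

        def predict_treed_gp(models, X, split=0.5):
            """Route each query point to the GP of its region."""
            out = np.empty(len(X))
            left = X[:, 0] < split
            out[left] = models["left"].predict(X[left])
            out[~left] = models["right"].predict(X[~left])
            return out

        models = fit_treed_gp(X, y)
        X_test = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
        print(predict_treed_gp(models, X_test))   # discontinuity reproduced at x = 0.5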

  18. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures are managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data, and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
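
    One common way to write a chirplet atom is as a Gaussian-windowed linear chirp. The Python sketch below (an illustrative parameterization and a single greedy matching step, not the authors' decomposition of ultrasonic B-scans) shows how a trace can be matched to an atom whose few parameters then act as the reduced coordinates a surrogate model could interpolate.

        import numpy as np

        def chirplet(t, t0, f0, c, sigma):
            """Unit-energy Gaussian-windowed linear chirp (illustrative parameterization)."""
            g = np.exp(-0.5 * ((t - t0) / sigma) ** 2) \
                * np.cos(2 * np.pi * (f0 + 0.5 * c * (t - t0)) * (t - t0))
            return g / np.linalg.norm(g)

        def best_atom(signal, t, grid):
            """Pick the dictionary atom with the largest correlation to the signal
            (one greedy step of a matching-pursuit-style chirplet decomposition)."""
            best, best_score = None, -np.inf
            for params in grid:
                score = abs(signal @ chirplet(t, *params))
                if score > best_score:
                    best, best_score = params, score
            return best, best_score

        t = np.linspace(0.0, 1.0, 2000)
        signal = chirplet(t, 0.4, 12.0, 6.0, 0.05)            # synthetic echo
        grid = [(t0, f0, c, 0.05)
                for t0 in np.linspace(0.2, 0.6, 9)
                for f0 in (8.0, 10.0, 12.0)
                for c in (0.0, 6.0)]
        print(best_atom(signal, t, grid))                     # recovered atom parameters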

  19. Reimbursement and costs of pediatric ambulatory diabetes care by using the resource-based relative value scale: is multidisciplinary care financially viable?

    PubMed

    Melzer, Sanford M; Richards, Gail E; Covington, Maxine L

    2004-09-01

    The ambulatory care for children with diabetes mellitus (DM) within an endocrinology specialty practice typically includes services provided by a multidisciplinary team. The resource-based relative value scale (RBRVS) is increasingly used to determine payments for ambulatory services in pediatrics. It is not known to what extent resource-based practice expenses and physician work values as allocated through the RBRVS for physician and non-physician practice expenses cover the actual costs of multidisciplinary ambulatory care for children with DM. A pediatric endocrinology and diabetes clinic staffed by faculty physicians and hospital support staff in a children's hospital. Data from a faculty practice plan billing records and income and expense reports during the period from 1 July 2000 to 30 June 2001 were used to determine endocrinologist physician ambulatory productivity, revenue collection, and direct expenses (salary, benefits, billing, and professional liability (PLI)). Using the RBRVS, ambulatory care revenue was allocated between physician, PLI, and practice expenses. Applying the activity-based costing (ABC) method, activity logs were used to determine non-physician and facility practice expenses associated with endocrine (ENDO) or diabetes visits. Of the 4735 ambulatory endocrinology visits, 1420 (30%) were for DM care. Physicians generated $866,582 in gross charges. Cash collections of 52% of gross charges provided revenue of $96 per visit. Using the actual Current Procedural Terminology (CPT)-4 codes reported for these services and the RBRVS system, the revenue associated with the 13,007 total relative value units (TRVUs) produced was allocated, with 58% going to cover physician work expenses and 42% to cover non-physician practice salary, facility, and PLI costs. Allocated revenue of $40.60 per visit covered 16 and 31% of non-physician and facility practice expenses per DM and general ENDO visit, respectively. RBRVS payments ($35/RVU) covered 46% of all expenses ($76.74/RVU), including 132% of physician expenses for the time worked in the clinic ($27/RVU), and only 23% of actual incurred practice expenses ($152/TRVU). Clinical revenues in a pediatric endocrinology practice, allocated by using the RBRVS system, do cover physician expenses for the time spent working in a hospital ENDO and DM clinic, but do not closely approximate non-physician and facility practice expenses while delivering multidisciplinary care to children with DM. Using payment based on the RBRVS system, and without additional payments to compensate for increased practice expenses incurred in the delivery of multidisciplinary care, this care model may not be financially viable.
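
    As a rough consistency check of the figures quoted above (illustrative arithmetic only, using nothing beyond the numbers in the abstract):

        gross_charges = 866_582            # $ billed by physicians over the year
        collection_rate = 0.52             # cash collections as a fraction of gross charges
        visits = 4_735                     # total ambulatory endocrinology visits
        total_rvus = 13_007                # total relative value units produced

        revenue = gross_charges * collection_rate
        print(round(revenue / visits))     # ~95, consistent with the quoted ~$96 per visit
        print(round(revenue / total_rvus)) # ~35, consistent with the quoted $35 per RVU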

  20. Computational chemistry for NH 3 synthesis, hydrotreating, and NO x reduction: Three topics of special interest to Haldor Topsøe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elnabawy, Ahmed O.; Rangarajan, Srinivas; Mavrikakis, Manos

    Computational chemistry, especially density functional theory, has experienced a remarkable growth in terms of application over the last few decades. This is attributed to the improvements in theory and computing infrastructure that enable the analysis of systems of unprecedented size and detail at an affordable computational expense. In this perspective, we discuss recent progress and current challenges facing electronic structure theory in the context of heterogeneous catalysis. We specifically focus on the impact of computational chemistry in elucidating and designing catalytic systems in three topics of interest to Haldor Topsøe – ammonia synthesis, hydrotreating, and NO x reduction. Furthermore, we then discuss the common tools and concepts in computational catalysis that underlie these topics and provide a perspective on the challenges and future directions of research in this area of catalysis research.

  1. Fast Legendre moment computation for template matching

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Normalized cross correlation (NCC) based template matching is insensitive to intensity changes and has many applications in image processing, object detection, video tracking and pattern recognition. However, normalized cross correlation is computationally expensive since it involves both correlation computation and normalization. In this paper, we propose a Legendre moment approach for fast normalized cross correlation and show that the computational cost of the proposed approach is independent of the template mask size, which makes it significantly faster than traditional mask-size-dependent approaches, especially for large mask templates. Legendre polynomials have been widely used in solving the Laplace equation in electrodynamics in spherical coordinate systems and in solving the Schrödinger equation in quantum mechanics. In this paper, we extend Legendre polynomials from physics to the computer vision and pattern recognition fields, and demonstrate that Legendre polynomials can help to reduce the computational cost of NCC-based template matching significantly.
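
    For reference, the quantity being accelerated is the standard normalized cross correlation score between a template t and an image patch f under the mask (standard definition, independent of the Legendre-moment formulation):

        \mathrm{NCC}(u,v) =
        \frac{\sum_{x,y}\bigl[f(x+u,\,y+v)-\bar f_{u,v}\bigr]\bigl[t(x,y)-\bar t\bigr]}
             {\sqrt{\sum_{x,y}\bigl[f(x+u,\,y+v)-\bar f_{u,v}\bigr]^{2}}\;
              \sqrt{\sum_{x,y}\bigl[t(x,y)-\bar t\bigr]^{2}}},

    where \bar f_{u,v} and \bar t are the local image mean and the template mean; the normalization by local statistics at every offset (u, v) is what makes the direct computation expensive for large masks.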

  2. Computational chemistry for NH 3 synthesis, hydrotreating, and NO x reduction: Three topics of special interest to Haldor Topsøe

    DOE PAGES

    Elnabawy, Ahmed O.; Rangarajan, Srinivas; Mavrikakis, Manos

    2015-06-05

    Computational chemistry, especially density functional theory, has experienced a remarkable growth in terms of application over the last few decades. This is attributed to the improvements in theory and computing infrastructure that enable the analysis of systems of unprecedented size and detail at an affordable computational expense. In this perspective, we discuss recent progress and current challenges facing electronic structure theory in the context of heterogeneous catalysis. We specifically focus on the impact of computational chemistry in elucidating and designing catalytic systems in three topics of interest to Haldor Topsøe – ammonia synthesis, hydrotreating, and NO x reduction. Furthermore, we then discuss the common tools and concepts in computational catalysis that underlie these topics and provide a perspective on the challenges and future directions of research in this area of catalysis research.

  3. Non-Boolean computing with nanomagnets for computer vision applications

    NASA Astrophysics Data System (ADS)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  4. Second derivative time integration methods for discontinuous Galerkin solutions of unsteady compressible flows

    NASA Astrophysics Data System (ADS)

    Nigro, A.; De Bartolo, C.; Crivellini, A.; Bassi, F.

    2017-12-01

    In this paper we investigate the possibility of using the high-order accurate A (α) -stable Second Derivative (SD) schemes proposed by Enright for the implicit time integration of the Discontinuous Galerkin (DG) space-discretized Navier-Stokes equations. These multistep schemes are A-stable up to fourth-order, but their use results in a system matrix difficult to compute. Furthermore, the evaluation of the nonlinear function is computationally very demanding. We propose here a Matrix-Free (MF) implementation of Enright schemes that allows to obtain a method without the costs of forming, storing and factorizing the system matrix, which is much less computationally expensive than its matrix-explicit counterpart, and which performs competitively with other implicit schemes, such as the Modified Extended Backward Differentiation Formulae (MEBDF). The algorithm makes use of the preconditioned GMRES algorithm for solving the linear system of equations. The preconditioner is based on the ILU(0) factorization of an approximated but computationally cheaper form of the system matrix, and it has been reused for several time steps to improve the efficiency of the MF Newton-Krylov solver. We additionally employ a polynomial extrapolation technique to compute an accurate initial guess to the implicit nonlinear system. The stability properties of SD schemes have been analyzed by solving a linear model problem. For the analysis on the Navier-Stokes equations, two-dimensional inviscid and viscous test cases, both with a known analytical solution, are solved to assess the accuracy properties of the proposed time integration method for nonlinear autonomous and non-autonomous systems, respectively. The performance of the SD algorithm is compared with the ones obtained by using an MF-MEBDF solver, in order to evaluate its effectiveness, identifying its limitations and suggesting possible further improvements.
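
    The matrix-free ingredient at the core of such a solver is the approximation of Jacobian-vector products by a finite difference of the residual, so the system matrix is never assembled. The Python sketch below (a generic Newton-Krylov step on a toy algebraic system, not the authors' DG/Navier-Stokes implementation) uses SciPy's GMRES with a LinearOperator; the ILU(0) preconditioning and its reuse described above are omitted.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def residual(u):
            """Nonlinear residual R(u) of the discretized system (toy example)."""
            return u**3 + 2.0 * u - 1.0

        def jacobian_vector_product(u, v, eps=1e-7):
            """Matrix-free approximation of J(u) @ v via a one-sided finite difference."""
            return (residual(u + eps * v) - residual(u)) / eps

        def newton_krylov_step(u):
            """One Newton step, solving J(u) du = -R(u) with GMRES and no assembled Jacobian."""
            n = u.size
            J = LinearOperator((n, n), matvec=lambda v: jacobian_vector_product(u, v))
            du, info = gmres(J, -residual(u), maxiter=200)
            assert info == 0, "GMRES did not converge"
            return u + du

        u = np.full(5, 0.5)
        for _ in range(8):
            u = newton_krylov_step(u)
        print(u)   # each entry approaches the root of x**3 + 2*x - 1 = 0 (about 0.4534)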

  5. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

    Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option and endovascular coiling has introduced itself as the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexities of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.

  6. 12 CFR 308.180 - Further proceedings.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... further proceedings such as an informal conference, oral argument, additional written submissions, or an... identify the information sought or the issues in dispute and shall explain why additional proceedings are...

  7. 12 CFR 308.180 - Further proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... further proceedings such as an informal conference, oral argument, additional written submissions, or an... identify the information sought or the issues in dispute and shall explain why additional proceedings are...

  8. 12 CFR 308.180 - Further proceedings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... further proceedings such as an informal conference, oral argument, additional written submissions, or an... identify the information sought or the issues in dispute and shall explain why additional proceedings are...

  9. 12 CFR 308.180 - Further proceedings.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PRACTICE AND PROCEDURE Rules and Procedures Relating to the Recovery of Attorney Fees and Other Expenses... further proceedings such as an informal conference, oral argument, additional written submissions, or an... identify the information sought or the issues in dispute and shall explain why additional proceedings are...

  10. Exploration computer applications to primary dispersion halos: Kougarok tin prospect, Seward Peninsula, Alaska, USA

    USGS Publications Warehouse

    Reid, Jeffrey C.

    1989-01-01

    Computer processing and high resolution graphics display of geochemical data were used to quickly, accurately, and efficiently obtain important decision-making information for tin (cassiterite) exploration, Seward Peninsula, Alaska (USA). Primary geochemical dispersion patterns were determined for tin-bearing intrusive granite phases of Late Cretaceous age with exploration bedrock lithogeochemistry at the Kougarok tin prospect. Expensive diamond drilling footage was required to reach exploration objectives. Recognition of element distribution and dispersion patterns was useful in subsurface interpretation and correlation, and to aid location of other holes.

  11. Experimental realization of an entanglement access network and secure multi-party computation

    PubMed Central

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-01-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography. PMID:27404561
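
    The privacy-preserving secure sum task demonstrated here can be stated classically with additive secret sharing; the Python sketch below is that classical formulation only (standard additive masking over a modulus, assuming non-colluding parties), not the quantum-cryptographic protocol of the paper.

        import random

        MODULUS = 2**32

        def make_shares(value, n_parties):
            """Split a private value into n additive shares modulo MODULUS."""
            shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % MODULUS)
            return shares

        def secure_sum(private_values):
            """Each party distributes shares of its value; summing all shares reveals
            only the total, never an individual input (with non-colluding parties)."""
            n = len(private_values)
            all_shares = [make_shares(v, n) for v in private_values]
            # Party j adds the shares it received from every party i and publishes the subtotal.
            subtotals = [sum(all_shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
            return sum(subtotals) % MODULUS

        print(secure_sum([11, 23, 8]))   # 42, with no party learning another party's input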

  12. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
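
    For orientation, the adjoint bookkeeping that this formulation avoids can be written in generic PDE-constrained notation (standard textbook form, not necessarily the report's):

        \min_{p}\; J\bigl(u(p),p\bigr) \quad \text{s.t.} \quad R(u,p)=0,
        \qquad
        \mathcal{L}(u,p,\lambda) = J(u,p) + \lambda^{T} R(u,p),

        \frac{\mathrm{d}J}{\mathrm{d}p}
        = \frac{\partial J}{\partial p} + \lambda^{T}\frac{\partial R}{\partial p},
        \qquad
        \left(\frac{\partial R}{\partial u}\right)^{T}\lambda
        = -\left(\frac{\partial J}{\partial u}\right)^{T}.

    In the usual approach the last (adjoint) system is an additional expensive solve per gradient evaluation; the compliance error functional described above is constructed so that the multipliers follow from the single forward evaluation, which is what removes the adjoint solve.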

  13. Decision rules for unbiased inventory estimates

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Koch, D.

    1979-01-01

    An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
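
    One standard way to obtain unbiased inventory estimates from an inexpensive (and possibly biased) classifier is to correct the raw class counts with the classifier's known confusion probabilities. The Python sketch below shows that correction in its simplest two-class form (illustrative only; the report's decision rule and estimator may differ).

        import numpy as np

        # confusion[i, j] = P(classified as class j | true class i), estimated beforehand.
        confusion = np.array([[0.9, 0.1],
                              [0.2, 0.8]])

        def unbiased_inventory(classified_counts, confusion):
            """Correct raw classifier counts so the expected estimate equals the true inventory.

            If p_true are the true class proportions, the expected classified proportions
            are confusion.T @ p_true; solving that relation removes the classification bias."""
            total = classified_counts.sum()
            observed = classified_counts / total
            p_true = np.linalg.solve(confusion.T, observed)
            return p_true * total

        counts = np.array([5200.0, 4800.0])          # pixels labelled class 0 and class 1
        print(unbiased_inventory(counts, confusion)) # bias-corrected class totals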

  14. Accelerating activity coefficient calculations using multicore platforms, and profiling the energy use resulting from such calculations.

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Bane, Michael

    2017-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and of the resulting mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this is often used as justification for neglecting computationally expensive process descriptions. Indeed, it remains unclear whether we can quantify the true sensitivity to uncertainties in molecular properties: even at the single aerosol particle level it has been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, and models typically rely on heavily parameterised descriptions. Using emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method. Activity coefficients are often neglected under the largely untested hypothesis that they are simply too computationally expensive to include in dynamic frameworks. We present results demonstrating increased computational efficiency for a range of typical scenarios, including a profiling of the energy use resulting from reliance on such computations. As the landscape of HPC changes, the latter aspect is important to consider in future applications.

  15. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  16. A virtual surgical training system that simulates cutting of soft tissue using a modified pre-computed elastic model.

    PubMed

    Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen

    2015-08-01

    This work presents a surgical training system that incorporates a cutting operation on soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model used for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, depending on the number of vertices in the mesh, it only needs to be computed once and allows real-time computation of the subsequent soft tissue deformation. However, as the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
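
    The precomputation exploits the fact that, for a linear elastic model, nodal displacements respond linearly to applied forces through a compliance matrix obtained once from the assembled stiffness matrix. The Python sketch below shows this on a toy spring chain (illustrative only, not SOFA's data structures or the authors' cutting correction).

        import numpy as np

        # Toy 1D chain of springs: stiffness matrix K assembled once from the mesh topology.
        n, k = 5, 100.0
        K = np.zeros((n, n))
        for i in range(n - 1):
            K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[0, 0] += 1e3          # anchor node 0 with a stiff spring so K is invertible (simplification)

        # Precomputation: the compliance matrix C = K^-1 is computed a single time...
        C = np.linalg.inv(K)

        # ...after which every new load case is just a matrix-vector product, cheap enough for real time.
        f = np.zeros(n)
        f[-1] = 1.0             # pull on the last node
        u = C @ f
        print(u)                # displacements of the five nodes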

  17. Global computing for bioinformatics.

    PubMed

    Loewe, Laurence

    2002-12-01

    Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster.

  18. Household catastrophic medical expenses in eastern China: determinants and policy implications

    PubMed Central

    2013-01-01

    Background: Much of the research on household catastrophic medical expenses in China has focused on less developed areas, and little is known about this problem in more developed areas. This study aimed to analyse the incidence and determinants of catastrophic medical expenses in eastern China. Methods: Data were obtained from a health care utilization and expense survey of 11,577 households conducted in eastern China in 2008. The incidence of household catastrophic medical expenses was calculated using the method introduced by the World Health Organization. A multi-level logistic regression model was used to identify the determinants. Results: The incidence of household catastrophic medical expenses in eastern China ranged from 9.24% to 24.79%. The incidence was lower if the head of household had a higher level of education or labor insurance coverage, and higher if the household was in a rural area, had a member with chronic diseases, had a child younger than 5 years old, had a member who was at least 65 years old, or had a member who was hospitalized. Moreover, the impact of economic level on catastrophic medical expenses was non-linear: the poorest group had a lower incidence than the second lowest income group, and the highest income group had a higher incidence than the second highest income group. In addition, region was a significant determinant. Conclusions: Reducing the incidence of household catastrophic medical expenses should be one of the priorities of health policy. It can be achieved by improving residents' health status to reduce avoidable health services such as hospitalization. It is also important to design more targeted health insurance in order to increase financial support for such vulnerable groups as the poor, the chronically ill, children, and seniors. PMID:24308317
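
    The WHO-style incidence calculation referred to above is commonly implemented along the lines of the Python sketch below (a common implementation with a 40% threshold of capacity to pay; the exact definitions of capacity to pay and the threshold vary between studies and are assumptions here).

        def is_catastrophic(oop_health, total_expenditure, subsistence, threshold=0.40):
            """Flag a household whose out-of-pocket health payments reach `threshold`
            of its capacity to pay (total expenditure minus subsistence spending)."""
            capacity_to_pay = max(total_expenditure - subsistence, 0.0)
            if capacity_to_pay == 0.0:
                return oop_health > 0.0
            return oop_health / capacity_to_pay >= threshold

        def incidence(households, threshold=0.40):
            """Share of surveyed households incurring catastrophic medical expenses."""
            flags = [is_catastrophic(h["oop"], h["expenditure"], h["subsistence"], threshold)
                     for h in households]
            return sum(flags) / len(flags)

        sample = [{"oop": 350, "expenditure": 2000, "subsistence": 1200},
                  {"oop": 50,  "expenditure": 1500, "subsistence": 900}]
        print(incidence(sample))   # 0.5 in this toy sample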

  19. Current and anticipated uses of the thermal hydraulics codes at the NRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The focus of Thermal-Hydraulic computer code usage in nuclear regulatory organizations has undergone a considerable shift since the codes were originally conceived. Less work is being done in the area of "Design Basis Accidents," and much more emphasis is being placed on analysis of operational events, probabilistic risk/safety assessment, and maintenance practices. All of these areas need support from Thermal-Hydraulic computer codes to model the behavior of plant fluid systems, and they all need the ability to perform large numbers of analyses quickly. It is therefore important for the T/H codes of the future to be able to support these needs, by providing robust, easy-to-use tools that produce easy-to-understand results for a wider community of nuclear professionals. These tools need to take advantage of the great advances that have occurred recently in computer software, by providing users with graphical user interfaces for both input and output. In addition, reduced costs of computer memory and other hardware have removed the need for excessively complex data structures and numerical schemes, which make the codes more difficult and expensive to modify, maintain, and debug, and which increase problem run-times. Future versions of the T/H codes should also be structured in a modular fashion, to allow for the easy incorporation of new correlations, models, or features, and to simplify maintenance and testing. Finally, it is important that future T/H code developers work closely with the code user community, to ensure that the codes meet the needs of those users.

  20. Estimating Angle-of-Arrival and Time-of-Flight for Multipath Components Using WiFi Channel State Information.

    PubMed

    Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil

    2018-05-29

    Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. High accuracy and low computation complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.

  1. Software design to calculate and simulate the mechanical response of electromechanical lifts

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Romero, E.

    2016-05-01

    Lift engineers and lift companies involved in the design of new products or in the research and development of improved components demand a tool that can predict the response of the slender lift system before expensive prototypes are tested. A method for solving the movement of any specified lift system by means of a computer program is presented. The mechanical response of the lift operating in a user-defined installation and configuration is derived for a given excitation and for the configuration parameters of real electric motors and their control systems. A mechanical model with 6 degrees of freedom is used. The governing equations are integrated step by step with a Runge-Kutta algorithm on the MATLAB platform. Input data consist of the set-point speed for a standard trip and the control parameters of a number of controllers and lift drive machines. The computer program computes and plots very accurately the vertical displacement, velocity, instantaneous acceleration and jerk time histories of the car, counterweight, frame, passengers/loads and lift drive in a standard trip between any two floors of the desired installation. The resulting torque, rope tension and deviation of the velocity plot with respect to the set-point speed are shown. The software design is implemented in a demo release of the computer program called ElevaCAD. Furthermore, the program offers the possibility to select the configuration of the lift system and the performance parameters of each component. In addition to the overall system response, detailed information on transients, vibrations of the lift components, ride quality levels, modal analysis and the frequency spectrum (FFT) is plotted.
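
    A minimal sketch of the step-by-step integration described above is given below in Python (a classical fourth-order Runge-Kutta step applied to a toy one-mass car/rope model; the program itself integrates a six-degree-of-freedom model with motor and controller dynamics, so all values here are illustrative).

        import numpy as np

        def rk4_step(f, t, y, h):
            """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Toy car-on-rope model: car mass m hangs on a rope of stiffness k and damping c,
        # and the drive end moves at a set-point speed of 1 m/s.
        m, k, c = 1000.0, 5.0e5, 2.0e3

        def dynamics(t, y):
            x, v = y
            x_drive = 1.0 * t
            a = (k * (x_drive - x) + c * (1.0 - v)) / m
            return np.array([v, a])

        y, t, h = np.array([0.0, 0.0]), 0.0, 1e-3
        for _ in range(5000):              # simulate 5 s of travel
            y = rk4_step(dynamics, t, y, h)
            t += h
        print(y)                           # car position (~5 m) and velocity (~1 m/s) after 5 s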

  2. Reduced description of reactive flows with tabulation of chemistry

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Goldin, Graham M.; Hiremath, Varun; Pope, Stephen B.

    2011-12-01

    The direct use of large chemical mechanisms in multi-dimensional Computational Fluid Dynamics (CFD) is computationally expensive due to the large number of chemical species and the wide range of chemical time scales involved. To meet this challenge, a reduced description of reactive flows in combination with chemistry tabulation is proposed to effectively reduce the computational cost. In the reduced description, the species are partitioned into represented species and unrepresented species; the reactive system is described in terms of a smaller number of represented species instead of the full set of chemical species in the mechanism; and the evolution equations are solved only for the represented species. When required, the unrepresented species are reconstructed assuming that they are in constrained chemical equilibrium. In situ adaptive tabulation (ISAT) is employed to speed up the chemistry calculations by tabulating information about the reduced system. The proposed dimension-reduction/tabulation methodology determines and tabulates in situ the necessary information about the nr-dimensional reduced system based on the ns-species detailed mechanism. Compared to the full description with ISAT, the reduced descriptions achieve additional computational speed-up by solving fewer transport equations and through faster ISAT retrieval. The approach is validated in both a methane/air premixed flame and a methane/air non-premixed flame. With the GRI 1.2 mechanism consisting of 31 species, the reduced descriptions (with 12 to 16 represented species) achieve a speed-up factor of up to three compared to the full description with ISAT, with a relatively moderate decrease in accuracy.
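
    The tabulation step can be pictured with a heavily simplified retrieve-or-add cache: store the outcome and Jacobian of each expensive reaction-mapping evaluation, and reuse a linear approximation for nearby queries. The sketch below is only a toy illustration of that idea; real ISAT uses ellipsoids of accuracy, record growing and a binary search tree, none of which are reproduced here, and the chemistry mapping is a made-up placeholder.

    ```python
    import numpy as np

    class ToyTabulation:
        # Cache (query, result, Jacobian) records of an expensive mapping f and
        # answer nearby queries with a first-order (linear) extrapolation.
        def __init__(self, f, jac, tol=1e-3):
            self.f, self.jac, self.tol = f, jac, tol
            self.records = []

        def query(self, x):
            for x0, f0, J0 in self.records:
                if np.linalg.norm(x - x0) < self.tol:
                    return f0 + J0 @ (x - x0)          # retrieve
            f0, J0 = self.f(x), self.jac(x)            # add: direct, expensive evaluation
            self.records.append((np.copy(x), f0, J0))
            return f0

    # Placeholder "reaction mapping" and its Jacobian, standing in for an ODE integration.
    f = lambda x: np.array([np.exp(-x[0]), x[0] * x[1]])
    jac = lambda x: np.array([[-np.exp(-x[0]), 0.0], [x[1], x[0]]])
    table = ToyTabulation(f, jac, tol=0.05)
    print(table.query(np.array([1.0, 2.0])), table.query(np.array([1.01, 2.0])))
    ```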

  3. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    PubMed

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high-order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price-to-performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. A GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore, this GPU system provides extremely cost-effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2,000, while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost-effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
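
    The kernel that the GPU implementation parallelizes can be sketched for a single pair of SNPs: pool the 3x3 genotype cells into high- and low-risk groups by comparing each cell's case:control ratio with the overall ratio, then score the resulting classifier. The code below is a minimal two-locus version on made-up data; it omits MDR's cross-validation loop and permutation testing, and the exhaustive pair scan at the end is exactly the part that becomes expensive at genome scale.

    ```python
    import numpy as np
    from itertools import combinations

    def mdr_pair_accuracy(genotypes, status, i, j):
        # Two-locus MDR step: label each of the 3x3 genotype cells of SNPs i, j
        # high-risk if its case:control ratio exceeds the overall ratio, then
        # return the balanced accuracy of the pooled high/low-risk classifier.
        g = genotypes[:, [i, j]]
        cases, controls = status == 1, status == 0
        overall = cases.sum() / max(controls.sum(), 1)
        high_risk = set()
        for a in range(3):
            for b in range(3):
                cell = (g[:, 0] == a) & (g[:, 1] == b)
                n_case, n_ctrl = (cell & cases).sum(), (cell & controls).sum()
                if (n_ctrl == 0 and n_case > 0) or (n_ctrl > 0 and n_case / n_ctrl > overall):
                    high_risk.add((a, b))
        pred = np.array([(a, b) in high_risk for a, b in g])
        return 0.5 * (pred[cases].mean() + (~pred[controls]).mean())

    # Exhaustive scan over all SNP pairs on hypothetical data (the costly part).
    rng = np.random.default_rng(0)
    genotypes = rng.integers(0, 3, size=(200, 20))      # 200 individuals, 20 SNPs
    status = rng.integers(0, 2, size=200)               # case/control labels
    best = max(combinations(range(20), 2),
               key=lambda p: mdr_pair_accuracy(genotypes, status, *p))
    print(best)
    ```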

  4. Calculating orthologs in bacteria and Archaea: a divide and conquer approach.

    PubMed

    Halachev, Mihail R; Loman, Nicholas J; Pallen, Mark J

    2011-01-01

    Among proteins, orthologs are defined as those that are derived by vertical descent from a single progenitor in the last common ancestor of their host organisms. Our goal is to compute a complete set of protein orthologs derived from all currently available complete bacterial and archaeal genomes. Traditional approaches typically rely on all-against-all BLAST searching which is prohibitively expensive in terms of hardware requirements or computational time (requiring an estimated 18 months or more on a typical server). Here, we present xBASE-Orth, a system for ongoing ortholog annotation, which applies a "divide and conquer" approach and adopts a pragmatic scheme that trades accuracy for speed. Starting at species level, xBASE-Orth carefully constructs and uses pan-genomes as proxies for the full collections of coding sequences at each level as it progressively climbs the taxonomic tree using the previously computed data. This leads to a significant decrease in the number of alignments that need to be performed, which translates into faster computation, making ortholog computation possible on a global scale. Using xBASE-Orth, we analyzed an NCBI collection of 1,288 bacterial and 94 archaeal complete genomes with more than 4 million coding sequences in 5 weeks and predicted more than 700 million ortholog pairs, clustered in 175,531 orthologous groups. We have also identified sets of highly conserved bacterial and archaeal orthologs and in so doing have highlighted anomalies in genome annotation and in the proposed composition of the minimal bacterial genome. In summary, our approach allows for scalable and efficient computation of the bacterial and archaeal ortholog annotations. In addition, due to its hierarchical nature, it is suitable for incorporating novel complete genomes and alternative genome annotations. The computed ortholog data and a continuously evolving set of applications based on it are integrated in the xBASE database, available at http://www.xbase.ac.uk/.
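
    The divide-and-conquer idea can be illustrated at toy scale: cluster the coding sequences within each species into a pan-genome proxy, keep one representative per cluster, and cluster those representatives again at the parent taxon, so that no all-against-all comparison over the whole collection is ever needed. The sketch below uses a k-mer Jaccard similarity as a crude stand-in for a BLAST alignment and a greedy single-pass clustering; it is a conceptual toy, not xBASE-Orth's actual pipeline or scoring.

    ```python
    def kmers(seq, k=3):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def similar(a, b, k=3, threshold=0.5):
        # Toy stand-in for an alignment score: Jaccard similarity of k-mer sets.
        sa, sb = kmers(a, k), kmers(b, k)
        return len(sa & sb) / max(len(sa | sb), 1) >= threshold

    def pan_genome(genomes):
        # Greedy clustering of all coding sequences at one taxonomic level;
        # the first member of each cluster acts as its representative.
        clusters = []
        for seq in (s for genome in genomes for s in genome):
            for cluster in clusters:
                if similar(seq, cluster[0]):
                    cluster.append(seq)
                    break
            else:
                clusters.append([seq])
        return clusters

    # Species-level pan-genomes first, then cluster only their representatives.
    species_a = [["ATGGCTAAAGGT", "ATGGCTAAAGGA"], ["ATGGCTAAAGGT"]]   # two genomes
    species_b = [["ATGCCCGGGTTTAAA"]]
    reps = [[c[0] for c in pan_genome(sp)] for sp in (species_a, species_b)]
    print(len(pan_genome(reps)), "genus-level groups (toy data)")
    ```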

  5. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and on the GPU were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates that the GPU performs the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
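
    The translational part of FT registration is commonly done by phase correlation: the normalized cross-power spectrum of two images has a phase that encodes their relative shift, and its inverse FFT peaks at that shift. The sketch below recovers an integer-pixel translation on synthetic data; the sub-pixel refinement via enlarged images and the rotation estimation described in the abstract are not reproduced, and it is plain NumPy rather than an IDL or GPU implementation.

    ```python
    import numpy as np

    def phase_correlation_shift(ref, moving):
        # Phase correlation: keep only the phase of the cross-power spectrum,
        # inverse-transform it, and read the translation off the peak location.
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
        cross = F2 * np.conj(F1)
        cross /= np.abs(cross) + 1e-12                  # unit-magnitude spectrum
        corr = np.abs(np.fft.ifft2(cross))
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        dims = np.array(ref.shape, dtype=float)
        peak[peak > dims / 2] -= dims[peak > dims / 2]  # wrap to signed shifts
        return peak                                     # (row, col) shift of `moving` vs `ref`

    # Toy check: a 256x256 random image circularly shifted by (7, -12) pixels.
    rng = np.random.default_rng(1)
    ref = rng.random((256, 256))
    moving = np.roll(ref, (7, -12), axis=(0, 1))
    print(phase_correlation_shift(ref, moving))         # ~ [7., -12.]
    ```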

  6. Additive direct-write microfabrication for MEMS: A review

    NASA Astrophysics Data System (ADS)

    Teh, Kwok Siong

    2017-12-01

    Direct-write additive manufacturing refers to a rich and growing repertoire of well-established fabrication techniques that build solid objects directly from computer-generated solid models without elaborate intermediate fabrication steps. At the macroscale, direct-write techniques such as stereolithography, selective laser sintering, fused deposition modeling, ink-jet printing, and laminated object manufacturing have significantly reduced concept-to-product lead time, enabled complex geometries, and, importantly, have led to the renaissance in fabrication known as the maker movement. The technological premise of all direct-write additive manufacturing is identical: converting computer-generated three-dimensional models into layers of two-dimensional planes or slices, which are then reconstructed sequentially into three-dimensional solid objects in a layer-by-layer format. The key differences between the various additive manufacturing techniques are the means of creating the finished layers and the ancillary processes that accompany them. While still in its infancy, direct-write additive manufacturing techniques at the microscale have the potential to significantly lower the barrier to entry, in terms of cost, time and training, for the prototyping and fabrication of MEMS parts that have larger dimensions, high aspect ratios, and complex shapes. In recent years, significant advancements in materials chemistry, laser technology, heat and fluid modeling, and control systems have enabled additive manufacturing to achieve higher resolutions at the micrometer and nanometer length scales, making it a viable technology for MEMS fabrication. Compared to traditional MEMS processes that rely heavily on expensive equipment and time-consuming steps, direct-write additive manufacturing techniques allow for rapid design-to-prototype realization by limiting or circumventing the need for cleanrooms, photolithography and extensive training. With current direct-write additive manufacturing technologies, it is possible to fabricate simple micrometer-scale structures at adequate resolutions and precisions using materials ranging from polymers, metals, and ceramics to composites. In both academia and industry, direct-write additive manufacturing offers extraordinary promise to revolutionize research and development in microfabrication and MEMS technologies. Importantly, direct-write additive manufacturing could appreciably augment current MEMS fabrication technologies, enable a faster design-to-product cycle, empower new paradigms in MEMS design, and, critically, encourage wider participation in MEMS research at institutions or by individuals with limited or no access to cleanroom facilities. This article aims to provide a limited review of the current landscape of direct-write additive manufacturing techniques that are potentially applicable to MEMS microfabrication.
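
    The shared slicing premise can be made concrete with a few lines of geometry: intersect every triangle of a surface mesh with a horizontal plane and keep the resulting line segments as one layer's contour. The sketch below does exactly that for a toy tetrahedron; real slicers additionally order and close the contours, generate infill and tool paths, and handle degenerate cases, none of which is shown.

    ```python
    import numpy as np

    def slice_triangles(vertices, faces, z0):
        # Intersect each triangle with the plane z = z0 and return the 2D line
        # segments of the layer contour (contour ordering/closing omitted).
        segments = []
        for tri in faces:
            pts = vertices[tri]
            crossings = []
            for a, b in ((0, 1), (1, 2), (2, 0)):
                za, zb = pts[a, 2], pts[b, 2]
                if (za - z0) * (zb - z0) < 0:              # edge crosses the plane
                    t = (z0 - za) / (zb - za)
                    crossings.append(pts[a, :2] + t * (pts[b, :2] - pts[a, :2]))
            if len(crossings) == 2:
                segments.append(crossings)
        return segments

    # Toy check: a unit tetrahedron sliced halfway up gives three contour segments.
    V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    F = np.array([[0, 1, 2], [0, 1, 3], [1, 2, 3], [0, 2, 3]])
    print(len(slice_triangles(V, F, 0.5)))                 # 3
    ```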

  7. State of the Art of Network Security Perspectives in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. Arguably, the realization of cloud computing reflects both those needs and the primary principle of economics: gaining maximum benefit from minimum investment. We live in a connected society with a flood of information; without computers connected to the Internet, our daily activities and work would be impossible. Cloud computing can provide customers with custom-tailored application software features and user environments, based on their needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, with users accessing their data and the application software hosted on remote systems. Because cloud computing systems are connected to the Internet, their network security issues must be addressed before real-world services can be offered. In this paper, network security issues in cloud computing are surveyed and discussed from the perspective of real-world service environments.

  8. Palatability of tannin-rich sericea lespedeza fed to broilers.

    USDA-ARS?s Scientific Manuscript database

    As parasites become resistant to available anthelmintics, new methods of control are needed. New drugs take a long time to develop in addition to being expensive; therefore, there is increasing interest in finding and using natural alternatives. Additionally, natural remedies are needed for the or...

  9. Computing the Energy Cost of the Information Transmitted by Model Biological Neurons

    NASA Astrophysics Data System (ADS)

    Torrealdea, F. J.; Sarasola, C.; d'Anjou, A.; Moujahid, A.

    2009-08-01

    We assign an energy function to a Hindmarsh-Rose model of a neuron and use it to compute values of average energy consumption during its signalling activity. We also compute values of the information entropy of an isolated neuron and of the mutual information between two electrically coupled neurons. We find that for the isolated neuron the chaotic signalling regime is the one with the highest ratio of information entropy to energy consumption. We also find that, in the case of electrically coupled neurons, there are values of the coupling strength at which the ratio of mutual information to energy consumption is maximal; that is, transmitting under those coupling conditions is energetically least expensive.
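
    The underlying neuron model is the standard three-variable Hindmarsh-Rose system, which can be integrated in a few lines. The paper's energy function and the entropy/mutual-information estimates built on top of it are not reproduced here, and the parameter values below are common textbook choices (I = 3.2 is often quoted as lying in the chaotic bursting regime), not necessarily the authors'.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def hindmarsh_rose(t, state, I=3.2, r=0.006, s=4.0, x_rest=-1.6):
        # Membrane potential x, fast recovery variable y, slow adaptation z.
        x, y, z = state
        dx = y + 3.0 * x**2 - x**3 - z + I
        dy = 1.0 - 5.0 * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        return [dx, dy, dz]

    sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.0, 0.0, 2.0],
                    max_step=0.05, rtol=1e-8)
    # Count upward threshold crossings of x as a crude spike count.
    spikes = np.sum((sol.y[0, 1:] > 1.0) & (sol.y[0, :-1] <= 1.0))
    print(spikes, "spikes in the simulated interval")
    ```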

  10. The gputools package enables GPU computing in R.

    PubMed

    Buckner, Joshua; Wilson, Justin; Seligman, Mark; Athey, Brian; Watson, Stanley; Meng, Fan

    2010-01-01

    By default, the R statistical environment does not make use of parallelism. Researchers may resort to expensive solutions such as cluster hardware for large analysis tasks. Graphics processing units (GPUs) provide an inexpensive and computationally powerful alternative. Using R and the CUDA toolkit from Nvidia, we have implemented several functions commonly used in microarray gene expression analysis for GPU-equipped computers. R users can take advantage of the better performance provided by an Nvidia GPU. The package is available from CRAN, the R project's repository of packages, at http://cran.r-project.org/web/packages/gputools. More information about our gputools R package is available at http://brainarray.mbni.med.umich.edu/brainarray/Rgpgpu

  11. Comparison of two- and three-dimensional flow computations with laser anemometer measurements in a transonic compressor rotor

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Strazisar, A. J.

    1982-01-01

    Two- and three-dimensional inviscid solutions for the flow in a transonic axial compressor rotor at design speed are compared with probe and laser anemometer measurements at near-stall and maximum-flow operating points. Experimental details of the laser anemometer system and computational details of the two-dimensional axisymmetric code and three-dimensional Euler code are described. Comparisons are made between relative Mach number and flow angle contours, shock location, and shock strength. A procedure for using an efficient axisymmetric code to generate downstream pressure input for computationally expensive Euler codes is discussed. A film supplement shows the calculations for the two operating points with the time-marching Euler code.

  12. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  13. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering the nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (which are more reliable), we implement both and compare the corresponding solution times against those generated using under-integration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300×, while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
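
    The isotropic baseline the paper compares against can be written down compactly; the sketch below evaluates one common compressible neo-Hookean form of the Cauchy stress for a given deformation gradient, with illustrative material parameters (not the paper's aortic values). The anisotropic Gasser-Ogden-Holzapfel model adds dispersed collagen-fiber terms on top of a matrix response like this, which is where the extra per-element cost comes from; that model is not reproduced here.

    ```python
    import numpy as np

    def neo_hookean_cauchy(F, mu=60e3, lam=1.0e6):
        # Compressible neo-Hookean Cauchy stress:
        #   sigma = (mu / J) * (B - I) + (lam * ln(J) / J) * I,
        # with B = F F^T the left Cauchy-Green tensor and J = det(F).
        J = np.linalg.det(F)
        B = F @ F.T
        I = np.eye(3)
        return (mu / J) * (B - I) + (lam * np.log(J) / J) * I

    # Toy per-element stress update for a slightly stretched configuration.
    F = np.diag([1.10, 0.97, 0.97])
    print(neo_hookean_cauchy(F))                # 3x3 Cauchy stress tensor [Pa]
    ```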

  14. Go Green, Save Green with ENERGY STAR[R

    ERIC Educational Resources Information Center

    Hatcher, Caterina

    2010-01-01

    Did you know that the nation's 17,450 K-12 school districts spend more on energy than on computers and textbooks combined? Energy costs represent a typical school district's second largest operating expense after salaries. Schools that have earned the ENERGY STAR--EPA's mark of superior energy performance--cost 40 cents per square foot less to…

  15. 26 CFR 1.861-11T - Special rules for allocating and apportioning interest expense of an affiliated group of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... or Z's direct assets is exclusively financial services income. The foreign source income generated by... computation of foreign source taxable income for purposes of section 904 (relating to various limitations on the foreign tax credit). Section 904 imposes separate foreign tax credit limitations on passive income...

  16. Faculty Flow in a Medical School: A Policy Simulator. AIR Forum 1979 Paper.

    ERIC Educational Resources Information Center

    Kutina, Kenneth L.; Bruss, Edward A.

    A computer-based simulation model is described that can be used in an interactive mode to analyze the effects of alternative hiring, promotion, tenure granting, retirement, and salary policies on faculty size, distribution, and aggregate salary expense. The model was designed to be adequately flexible and comprehensive to incorporate the array of…
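
    A faculty-flow simulator of this kind is often built around a simple cohort-flow (Markov-chain) projection: ranks are states, annual transition probabilities encode promotion, tenure and attrition policies, and hires enter each year. The sketch below is a generic toy of that idea with made-up numbers; it is not the article's calibrated medical-school model.

    ```python
    import numpy as np

    ranks = ["assistant", "associate", "full"]
    # Annual transition probabilities between ranks (rows: from-rank); the
    # probability mass missing from each row represents attrition/retirement.
    P = np.array([[0.80, 0.10, 0.00],
                  [0.00, 0.85, 0.08],
                  [0.00, 0.00, 0.93]])
    hires = np.array([12.0, 2.0, 1.0])      # new appointments per year by rank
    salary = np.array([95e3, 120e3, 160e3]) # average salary by rank [$]

    headcount = np.array([60.0, 45.0, 40.0])
    for year in range(10):                  # project ten years of a given policy
        headcount = headcount @ P + hires
    print(headcount.round(1), f"aggregate salary = ${headcount @ salary:,.0f}")
    ```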

  17. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 2 2011-04-01 2011-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  18. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  19. 26 CFR 1.50A-3 - Recomputation of credit allowed by section 40.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ....50A-3 Section 1.50A-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50A-3 Recomputation...) In general. If the employment of any employee, with respect to whom work incentive program (WIN...

  20. Low-Cost Computer-Controlled Current Stimulator for the Student Laboratory

    ERIC Educational Resources Information Center

    Guclu, Burak

    2007-01-01

    Electrical stimulation of nerve and muscle tissues is frequently used for teaching core concepts in physiology. It is usually expensive to provide every student group in the laboratory with an individual stimulator. This article presents the design and application of a low-cost [about $100 (U.S.)] isolated stimulator that can be controlled by two…
