76 FR 21673 - Alternative Efficiency Determination Methods and Alternate Rating Methods
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-18
... EERE-2011-BP-TP-00024] RIN 1904-AC46 Alternative Efficiency Determination Methods and Alternate Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of... and data related to the use of computer simulations, mathematical methods, and other alternative...
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
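As a concrete illustration of the efficient information criteria named above, the following minimal sketch ranks competing least-squares model fits by AICc and BIC. It assumes Gaussian errors and uses invented residual sums of squares, so it illustrates only the ranking step, not the study's actual ground water modelling workflow.

```python
import numpy as np

def information_criteria(sse, n, k):
    """Corrected AIC (AICc) and BIC for a least-squares fit with Gaussian errors.

    sse : sum of squared (weighted) residuals
    n   : number of observations
    k   : number of estimated parameters
    Constants common to all candidate models are dropped, so only differences
    between alternative models are meaningful.
    """
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * np.log(sse / n) + k * np.log(n)
    return aicc, bic

# Hypothetical alternative models (e.g. different hydraulic-conductivity
# zonations); lower criterion values indicate better support.
models = {"K1": (12.4, 50, 3), "K2": (10.9, 50, 5), "K3": (10.7, 50, 8)}
for name, (sse, n, k) in models.items():
    aicc, bic = information_criteria(sse, n, k)
    print(f"{name}: AICc={aicc:.1f}  BIC={bic:.1f}")
```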
Energy efficiency indicators for high electric-load buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aebischer, Bernard; Balmer, Markus A.; Kinney, Satkartar
2003-06-01
Energy per unit of floor area is not an adequate indicator for energy efficiency in high electric-load buildings. For two activities, restaurants and computer centres, alternative indicators for energy efficiency are discussed.
NASA Astrophysics Data System (ADS)
Vodenicarevic, D.; Locatelli, N.; Mizrahi, A.; Friedman, J. S.; Vincent, A. F.; Romera, M.; Fukushima, A.; Yakushiji, K.; Kubota, H.; Yuasa, S.; Tiwari, S.; Grollier, J.; Querlioz, D.
2017-11-01
Low-energy random number generation is critical for many emerging computing schemes proposed to complement or replace von Neumann architectures. However, current random number generators are always associated with an energy cost that is prohibitive for these computing schemes. We introduce random number bit generation based on specific nanodevices: superparamagnetic tunnel junctions. We experimentally demonstrate high-quality random bit generation that represents an orders-of-magnitude improvement in energy efficiency over current solutions. We show that the random generation speed improves with nanodevice scaling, and we investigate the impact of temperature, magnetic field, and cross talk. Finally, we show how alternative computing schemes can be implemented using superparamagnetic tunnel junctions as random number generators. These results open the way for fabricating efficient hardware computing devices leveraging stochasticity, and they highlight an alternative use for emerging nanodevices.
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
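As a minimal sketch of the parallel hash-tree idea discussed above, the code below hashes fixed-size leaves concurrently with Python's standard hashlib (SHA3-256, a standardized Keccak instance) and folds them pairwise into a root. The chunk size and tree layout are illustrative assumptions, not the paper's prototype. Because hashlib releases the GIL while digesting large buffers, the leaf hashes genuinely run in parallel even in a thread pool.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_tree_root(data: bytes, chunk_size: int = 1 << 20) -> bytes:
    """Hash fixed-size leaves in parallel, then fold pairwise into a root.

    This avoids the strictly sequential Merkle-Damgard chaining of a single
    hashlib.sha3_256(data) call over the whole object.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    with ThreadPoolExecutor() as pool:
        level = list(pool.map(lambda c: hashlib.sha3_256(c).digest(), chunks))
    while len(level) > 1:                      # combine pairs up to the root
        if len(level) % 2:
            level.append(level[-1])            # duplicate the last node on odd levels
        level = [hashlib.sha3_256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(hash_tree_root(b"x" * (5 * (1 << 20))).hex())
```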
Beyond Moore’s technologies: operation principles of a superconductor alternative
Klenov, Nikolay V; Bakurskiy, Sergey V; Kupriyanov, Mikhail Yu; Gudkov, Alexander L; Sidorenko, Anatoli S
2017-01-01
The predictions of Moore’s law are considered by experts to be valid until 2020, giving rise to “post-Moore’s” technologies afterwards. Energy efficiency is one of the major challenges in high-performance computing that must be addressed. Superconductor digital technology is a promising post-Moore’s alternative for the development of supercomputers. In this paper, we consider the operation principles of energy-efficient superconductor logic and memory circuits with a short retrospective review of their evolution. We analyze their shortcomings with respect to computer circuit design. Possible ways of further research are outlined. PMID:29354341
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from the general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
ERIC Educational Resources Information Center
Ishola, Bashiru Abayomi
2017-01-01
Cloud computing has recently emerged as a potential alternative to the traditional on-premise computing that businesses can leverage to achieve operational efficiencies. Consequently, technology managers are often tasked with the responsibilities to analyze the barriers and variables critical to organizational cloud adoption decisions. This…
Digital-computer program for design analysis of salient, wound pole alternators
NASA Technical Reports Server (NTRS)
Repas, D. S.
1973-01-01
A digital computer program for analyzing the electromagnetic design of salient, wound pole alternators is presented. The program, which is written in FORTRAN 4, calculates the open-circuit saturation curve, the field-current requirements at rated voltage for various loads and losses, efficiency, reactances, time constants, and weights. The methods used to calculate some of these items are presented or appropriate references are cited. Instructions for using the program and typical program input and output for an alternator design are given, and an alphabetical list of most FORTRAN symbols and the complete program listing with flow charts are included.
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2010 CFR
2010-01-01
... method or methods used; the mathematical model, the engineering or statistical analysis, computer... accordance with § 431.16 of this subpart, or by application of an alternative efficiency determination method... must be: (i) Derived from a mathematical model that represents the mechanical and electrical...
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
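The reduction from O(M^3 N) to O(MN log N) comes from solving the ADMM coefficient update independently at each frequency, where the system matrix is a rank-one update of a scaled identity. The numpy sketch below shows that per-frequency Sherman-Morrison solve for a single input signal; the array shapes and the consistency check are illustrative and not taken from the patent text.

```python
import numpy as np

def csc_coef_update(Dhat, bhat, rho):
    """Solve (D^H D + rho*I) x = b independently at every frequency.

    Dhat, bhat : (K, M) arrays holding the M dictionary filters and the
                 right-hand side in the Fourier domain (K frequencies).
    At each frequency D is a 1 x M row, so the M x M system is a rank-one
    update of rho*I; Sherman-Morrison gives the solution in O(M) per
    frequency, i.e. O(K*M) overall instead of O(K*M^3).
    """
    db = np.sum(Dhat * bhat, axis=1, keepdims=True)          # scalar d.b per frequency
    dd = np.sum(np.abs(Dhat) ** 2, axis=1, keepdims=True)    # ||d||^2 per frequency
    return (bhat - np.conj(Dhat) * db / (rho + dd)) / rho

# Illustrative check against a dense per-frequency solve.
rng = np.random.default_rng(0)
K, M, rho = 64, 8, 0.5
Dhat = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
bhat = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
x_fast = csc_coef_update(Dhat, bhat, rho)
k = 3
A = rho * np.eye(M) + np.outer(np.conj(Dhat[k]), Dhat[k])
assert np.allclose(A @ x_fast[k], bhat[k])
```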
Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations
Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.
2017-01-01
The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741
Designing overall stoichiometric conversions and intervening metabolic reactions
Chowdhury, Anupam; Maranas, Costas D.
2015-11-04
Existing computational tools for de novo metabolic pathway assembly, either based on mixed integer linear programming techniques or graph-search applications, generally only find linear pathways connecting the source to the target metabolite. The overall stoichiometry of conversion along with alternate co-reactant (or co-product) combinations is not part of the pathway design. Therefore, global carbon and energy efficiency is in essence fixed with no opportunities to identify more efficient routes for recycling carbon flux closer to the thermodynamic limit. Here, we introduce a two-stage computational procedure that both identifies the optimum overall stoichiometry (i.e., optStoic) and selects for (non-)native reactions (i.e., minRxn/minFlux) that maximize carbon, energy or price efficiency while satisfying thermodynamic feasibility requirements. Implementation for recent pathway design studies identified non-intuitive designs with improved efficiencies. Specifically, multiple alternatives for non-oxidative glycolysis are generated and non-intuitive ways of co-utilizing carbon dioxide with methanol are revealed for the production of C2+ metabolites with higher carbon efficiency.
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
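A toy illustration of why adjoint sensitivities cost a single extra solve rather than one per parameter: for a discrete state equation A(p)u = f and a scalar output J = g^T u, one adjoint solve A^T λ = g yields dJ/dp_i = -λ^T (∂A/∂p_i) u for every parameter at once. The matrices and parameterization below are invented for the sketch and are unrelated to any NASA code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 12                                  # state size, number of design parameters
A0 = 5.0 * np.eye(n)
B = 0.1 * rng.standard_normal((m, n, n))       # dA/dp_i, held fixed
f = rng.standard_normal(n)
g = rng.standard_normal(n)
p = 0.1 * rng.standard_normal(m)

A = A0 + np.tensordot(p, B, axes=1)            # A(p) = A0 + sum_i p_i * B_i
u = np.linalg.solve(A, f)                      # one "flow" solve
lam = np.linalg.solve(A.T, g)                  # one adjoint solve
grad_adjoint = np.array([-lam @ (B[i] @ u) for i in range(m)])

# Finite differences need one extra solve per parameter (m solves in total).
eps = 1e-6
grad_fd = np.empty(m)
for i in range(m):
    dp = np.zeros(m)
    dp[i] = eps
    u_i = np.linalg.solve(A0 + np.tensordot(p + dp, B, axes=1), f)
    grad_fd[i] = (g @ u_i - g @ u) / eps

print(np.max(np.abs(grad_adjoint - grad_fd)))  # agreement to ~1e-6 or better
```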
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states, which assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation, which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
10 CFR 431.197 - Manufacturer's determination of efficiency for distribution transformers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... methods used; the mathematical model, the engineering or statistical analysis, computer simulation or... (b)(3) of this section, or by application of an alternative efficiency determination method (AEDM... section only if: (i) The AEDM has been derived from a mathematical model that represents the electrical...
McLaren, D G; Buchanan, D S; Williams, J E
1987-10-01
A static, deterministic computer model, programmed in Microsoft Basic for IBM PC and Apple Macintosh computers, was developed to calculate production efficiency (cost per kg of product) for nine alternative types of crossbreeding system involving four breeds of swine. The model simulates efficiencies for four purebred and 60 alternative two-, three- and four-breed rotation, rotaterminal, backcross and static cross systems. Crossbreeding systems were defined as including all purebred, crossbred and commercial matings necessary to maintain a total of 10,000 farrowings. Driving variables for the model are mean conception rate at first service and for an 8-wk breeding season, litter size born, preweaning survival rate, postweaning average daily gain, feed-to-gain ratio and carcass backfat. Predictions are computed using breed direct genetic and maternal effects for the four breeds, plus individual, maternal and paternal specific heterosis values, input by the user. Inputs required to calculate the number of females farrowing in each sub-system include the proportion of males and females replaced each breeding cycle in purebred and crossbred populations, the proportion of male and female offspring in seedstock herds that become breeding animals, and the number of females per boar. Inputs required to calculate the efficiency of terminal production (cost-to-product ratio) for each sub-system include breeding herd feed intake, gilt development costs, feed costs and labor and overhead costs. Crossbreeding system efficiency is calculated as the weighted average of sub-system cost-to-product ratio values, weighting by the number of females farrowing in each sub-system.
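A minimal sketch of the final aggregation step described above, with system efficiency computed as the farrowing-weighted average of sub-system cost-to-product ratios; the sub-system numbers are purely illustrative.

```python
def system_efficiency(subsystems):
    """Weighted-average cost-to-product ratio for a crossbreeding system.

    subsystems : list of (females_farrowing, cost_per_kg_product) tuples,
                 one per purebred, crossbred or commercial sub-system.
    """
    total_farrowings = sum(n for n, _ in subsystems)
    return sum(n * ratio for n, ratio in subsystems) / total_farrowings

# Hypothetical three-sub-system example (numbers are illustrative only).
print(system_efficiency([(800, 1.45), (1700, 1.32), (7500, 1.18)]))
```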
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
NASA Technical Reports Server (NTRS)
Guruswamy, G. P.; Goorjian, P. M.
1984-01-01
An efficient coordinate transformation technique is presented for constructing grids for unsteady, transonic aerodynamic computations for delta-type wings. The original shearing transformation yielded computations that were numerically unstable and this paper discusses the sources of those instabilities. The new shearing transformation yields computations that are stable, fast, and accurate. Comparisons of those two methods are shown for the flow over the F5 wing that demonstrate the new stability. Also, comparisons are made with experimental data that demonstrate the accuracy of the new method. The computations were made by using a time-accurate, finite-difference, alternating-direction-implicit (ADI) algorithm for the transonic small-disturbance potential equation.
Hardwood silviculture and skyline yarding on steep slopes: economic and environmental impacts
John E. Baumgras; Chris B. LeDoux
1995-01-01
Ameliorating the visual and environmental impact associated with harvesting hardwoods on steep slopes will require the efficient use of skyline yarding along with silvicultural alternatives to clearcutting. In evaluating the effects of these alternatives on harvesting revenue, results of field studies and computer simulations were used to estimate costs and revenue for...
Computer-based learning: interleaving whole and sectional representation of neuroanatomy.
Pani, John R; Chariker, Julia H; Naaz, Farah
2013-01-01
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.
Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy
Pani, John R.; Chariker, Julia H.; Naaz, Farah
2015-01-01
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001
Fast methods to numerically integrate the Reynolds equation for gas fluid films
NASA Technical Reports Server (NTRS)
Dimofte, Florin
1992-01-01
The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running time in comparison with methods based on inverting a matrix. The computer codes need a small amount of memory and can be run on either personal computers or on mainframe systems.
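The gas-film Reynolds equation itself is nonlinear, so the sketch below only illustrates the ADI idea on its linear 2-D diffusion analogue: each Peaceman-Rachford half step is implicit along one coordinate direction, reducing the work to cheap tridiagonal solves rather than inverting a large matrix. Grid size, time step and initial field are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(U, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with zero
    Dirichlet boundaries.  U holds interior values; r = dt / (2*h**2)."""
    n = U.shape[0]
    ab = np.zeros((3, n))                  # banded form of (I - r*T),
    ab[0, 1:] = -r                         # with T = tridiag(1, -2, 1)
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r

    def diff2(V):                          # 1-D second difference along axis 0
        W = -2.0 * V
        W[1:] += V[:-1]
        W[:-1] += V[1:]
        return W

    rhs = U + r * diff2(U.T).T             # explicit in y ...
    Uh = solve_banded((1, 1), ab, rhs)     # ... implicit in x (tridiagonal solves)
    rhs = Uh + r * diff2(Uh)               # explicit in x ...
    return solve_banded((1, 1), ab, rhs.T).T   # ... implicit in y

n, h, dt = 40, 1.0 / 41, 1e-4
U = np.ones((n, n))                        # initial interior field
for _ in range(100):
    U = adi_step(U, dt / (2 * h * h))
print(U.max())
```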
NASA Astrophysics Data System (ADS)
Marinos, Alexandros; Briscoe, Gerard
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
Application of microarray analysis on computer cluster and cloud platforms.
Bernau, C; Boulesteix, A-L; Knaus, J
2013-01-01
Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
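A minimal sketch of why such analyses parallelize so readily: each permutation (or resampling) iteration is independent, so iterations can be spread across the cores of a cluster node or cloud instance with the standard library alone. The two-group mean-difference statistic and the synthetic data are illustrative, not the microarray procedures benchmarked in the article.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def perm_stat(args):
    """Mean difference between two groups after one random relabelling."""
    seed, values, n1 = args
    perm = np.random.default_rng(seed).permutation(values)
    return perm[:n1].mean() - perm[n1:].mean()

def permutation_pvalue(x, y, n_perm=10_000, workers=8):
    observed = x.mean() - y.mean()
    values = np.concatenate([x, y])
    jobs = [(seed, values, len(x)) for seed in range(n_perm)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        null = np.fromiter(pool.map(perm_stat, jobs, chunksize=500), float)
    return np.mean(np.abs(null) >= abs(observed))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.3, 1.0, 200)    # e.g. expression of one gene, group 1
    y = rng.normal(0.0, 1.0, 200)    # group 2
    print(permutation_pvalue(x, y))
```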
ERIC Educational Resources Information Center
Shih, Ching-Hsiang
2011-01-01
This study combines multi-mice technology (people with disabilities can use standard mice, instead of specialized alternative computer input devices, to achieve complete mouse operation) with an assistive pointing function (i.e. cursor-capturing, which enables the user to move the cursor to the target center automatically), to assess whether two…
NASA Astrophysics Data System (ADS)
Rousis, Damon A.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto dominated solutions across all concepts. A set of algebraic samples along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
The NASA Energy Conservation Program
NASA Technical Reports Server (NTRS)
Gaffney, G. P.
1977-01-01
Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, D.F.; Schroeder, P.R.; Engler, R.M.
This technical note describes procedures for determining mean hydraulic retention time and efficiency of a confined disposal facility (CDF) from a dye tracer slug test. These parameters are required to properly design a CDF for solids retention and for effluent quality considerations. Detailed information on conduct and analysis of dye tracer studies can be found in Engineer Manual 1110-2-5027, Confined Dredged Material Disposal. This technical note documents the DYECON computer program which facilitates the analysis of dye tracer concentration data and computes the hydraulic efficiency of a CDF as part of the Automated Dredging and Disposal Alternatives Management System (ADDAMS).
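The sketch below is not the DYECON program, only the standard moment calculation such an analysis automates: the mean hydraulic retention time is the centroid of the measured dye-concentration curve, and hydraulic efficiency is its ratio to the theoretical retention time (ponded volume divided by flow rate). The tracer curve and the CDF volume and flow are synthetic.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal-rule integral of y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hydraulic_efficiency(t, c, volume, flow):
    """Mean hydraulic retention time and hydraulic efficiency from a dye
    tracer slug test.

    t, c   : time (h) and dye concentration measured at the outflow weir
    volume : ponded volume of the confined disposal facility (m^3)
    flow   : average flow rate during the test (m^3/h)
    """
    t_mean = _trapz(t * c, t) / _trapz(c, t)     # centroid of the tracer curve
    t_theoretical = volume / flow
    return t_mean, t_mean / t_theoretical

# Synthetic response curve for illustration only.
t = np.linspace(0.0, 120.0, 241)
c = t * np.exp(-t / 18.0)
print(hydraulic_efficiency(t, c, volume=60_000.0, flow=1_500.0))
```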
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
HPC on Competitive Cloud Resources
NASA Astrophysics Data System (ADS)
Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff
Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on-demand in real-time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and similarly affects computation time. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall-time). We also show that the smallest, cheapest (64-bit) instance on the studied environment is the best in terms of price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency for commercial cloud environments should be introduced where strong performance guarantees do not exist. Concepts like average, expected performance and execution time, expected cost to completion, and variance measures--traditionally ignored in the high-performance computing context--should now complement or even substitute the standard definitions of efficiency.
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources in these private clouds are limited, maximizing the utilization of resources and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms. PMID:26473166
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations for forming their own private cloud. Since the resources in these private clouds are limited, maximizing the utilization of resources and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.
A boundary element alternating method for two-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Krishnamurthy, T.
1992-01-01
A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.
Computer-aided design studies of the homopolar linear synchronous motor
NASA Astrophysics Data System (ADS)
Dawson, G. E.; Eastham, A. R.; Ong, R.
1984-09-01
The linear induction motor (LIM), as an urban transit drive, can provide good grade-climbing capabilities and propulsion/braking performance that is independent of steel wheel-rail adhesion. In view of its 10-12 mm airgap, the LIM is characterized by a low power factor-efficiency product of order 0.4. A synchronous machine offers high efficiency and controllable power factor. An assessment of the linear homopolar configuration of this machine is presented as an alternative to the LIM. Computer-aided design studies using the finite element technique have been conducted to identify a suitable machine design for urban transit propulsion.
Tensor Factorization for Low-Rank Tensor Completion.
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, which are naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-art methods, including the TNN and matricization methods.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
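A hedged sketch of one common form of the influence-curve calculation for the empirical AUC, in the spirit of the approach described above rather than the authors' exact implementation; for cross-validated AUC the same calculation would be applied within each validation fold and the fold-level estimates combined.

```python
import numpy as np

def auc_with_ic_variance(labels, scores):
    """Empirical AUC plus an influence-curve-based variance estimate.

    labels : 0/1 array of outcomes, scores : predicted risk scores.
    """
    labels = np.asarray(labels).astype(bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    n, n1, n0 = scores.size, pos.size, neg.size
    wins = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
    auc = wins.mean()                      # Mann-Whitney form of the AUC
    # Hajek projections of the two-sample U-statistic.
    phi_pos = wins.mean(axis=1)            # for each positive: fraction of negatives below it
    phi_neg = wins.mean(axis=0)            # for each negative: fraction of positives above it
    ic = np.empty(n)
    ic[labels] = (phi_pos - auc) / (n1 / n)
    ic[~labels] = (phi_neg - auc) / (n0 / n)
    return auc, ic.var(ddof=1) / n         # variance of the AUC estimate

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
s = y * 0.8 + rng.normal(0.0, 1.0, 500)    # illustrative scores
auc, var = auc_with_ic_variance(y, s)
print(auc, np.sqrt(var))                   # AUC and its standard error
```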
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
Distributed Computer Networks in Support of Complex Group Practices
Wess, Bernard P.
1978-01-01
The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to show that electrooculogram signals can be used efficiently for a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have made several experiments to compare the P300-based BCI speller and the EOG-based new system. A five-letter word can be written in 25 seconds on average with the new system, compared with 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
Xu, Jason; Minin, Vladimir N.
2016-01-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377
NASA Astrophysics Data System (ADS)
Marhadi, Kun Saptohartyadi
Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to specified damage. A design optimized under one damage specification can be sensitive to other damages not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
Unstructured mesh methods for CFD
NASA Technical Reports Server (NTRS)
Peraire, J.; Morgan, K.; Peiro, J.
1990-01-01
Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two-engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady state solution of a given problem is described. Mesh generation for transient flows is described.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to exploit the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and the generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
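The generalized p-shrinkage mapping mentioned above has a simple closed form; the sketch below uses one common version of the operator (reducing to ordinary soft-thresholding at p = 1), with parameter values chosen purely for illustration.

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage applied element-wise.

    For p = 1 this is soft-thresholding; for p < 1 small coefficients are
    shrunk more aggressively, which is how TGpV-type models promote
    sparsity more strongly than an l1 penalty.
    """
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)        # avoid 0**(p-1) warnings
    shrunk = np.maximum(mag - lam ** (2 - p) * safe ** (p - 1), 0.0)
    return np.sign(x) * shrunk

x = np.linspace(-2.0, 2.0, 9)
print(p_shrink(x, lam=0.5, p=1.0))   # soft-thresholding
print(p_shrink(x, lam=0.5, p=0.5))   # sharper shrinkage of small entries
```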
Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2008-05-01
Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
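The computational point of the Sherman-Morrison-Woodbury rearrangement can be demonstrated on a generic weighted, regularized update: the "push-through" form inverts a matrix sized by the number of measurements rather than by the number of parameters. The matrices below are random stand-ins, not a diffuse optical tomography Jacobian, and the simple parameter-weight regularization is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_par = 40, 500                    # under-determined: parameters >> data
J = rng.standard_normal((n_meas, n_par))   # Jacobian of the forward model
b = rng.standard_normal(n_meas)            # data-model misfit
Wd = np.diag(rng.uniform(0.5, 2.0, n_meas))   # data weight (inverse covariance)
Wp = np.diag(rng.uniform(0.5, 2.0, n_par))    # parameter weight

# Conventional form: invert an (n_par x n_par) system.
dx_par = np.linalg.solve(J.T @ Wd @ J + Wp, J.T @ Wd @ b)

# Equivalent form via the Sherman-Morrison-Woodbury identity:
# only an (n_meas x n_meas) system is inverted.
Wp_inv = np.diag(1.0 / np.diag(Wp))
dx_meas = Wp_inv @ J.T @ np.linalg.solve(J @ Wp_inv @ J.T + np.linalg.inv(Wd), b)

print(np.allclose(dx_par, dx_meas))        # True: identical parameter updates
```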
USDA-ARS?s Scientific Manuscript database
Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...
NASA Astrophysics Data System (ADS)
Joost, William J.
2012-09-01
Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
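A quick numerical reading of the quoted sensitivity (6-8% fuel-consumption reduction per 10% mass reduction): the sketch below compounds a nominal 7% sensitivity multiplicatively for larger weight savings. The compounding rule is an assumption of this illustration; a simple linear scaling is also commonly quoted.

```python
def fuel_saving(mass_reduction, sensitivity=0.07):
    """Fractional fuel-consumption reduction for a fractional mass reduction,
    applying the quoted 6-8% per 10% rule multiplicatively."""
    steps = mass_reduction / 0.10
    return 1.0 - (1.0 - sensitivity) ** steps

for dm in (0.10, 0.25, 0.40):
    print(f"{dm:.0%} lighter -> roughly {fuel_saving(dm):.1%} less fuel")
```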
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
Barker, Graeme; Johnson, David G; Young, Paul C; Macgregor, Stuart A; Lee, Ai-Lan
2015-01-01
Gold(I)-catalysed direct allylic etherifications have been successfully carried out with chirality transfer to yield enantioenriched, γ-substituted secondary allylic ethers. Our investigations include a full substrate-scope screen to ascertain substituent effects on the regioselectivity, stereoselectivity and efficiency of chirality transfer, as well as control experiments to elucidate the mechanistic subtleties of the chirality-transfer process. Crucially, addition of molecular sieves was found to be necessary to ensure efficient and general chirality transfer. Computational studies suggest that the efficiency of chirality transfer is linked to the aggregation of the alcohol nucleophile around the reactive π-bound Au–allylic ether complex. With a single alcohol nucleophile, a high degree of chirality transfer is predicted. However, if three alcohols are present, alternative proton transfer chain mechanisms that erode the efficiency of chirality transfer become competitive. PMID:26248980
Evaluation of Generation Alternation Models in Evolutionary Robotics
NASA Astrophysics Data System (ADS)
Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro
For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD) and compare it with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; however, their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to an Evolutionary Multi-Robotics (EMR) package-pushing problem to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.
Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish
2013-09-26
The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.
Conceptual design and cost analysis of hydraulic output unit for 15 kW free-piston Stirling engine
NASA Technical Reports Server (NTRS)
White, M. A.
1982-01-01
A long-life hydraulic converter with unique features was conceptually designed to interface with a specified 15 kW(e) free-piston Stirling engine in a solar thermal dish application. Hydraulic fluid at 34.5 MPa (5000 psi) is produced to drive a conventional hydraulic motor and rotary alternator. Efficiency of the low-maintenance converter design was calculated at 93.5% for a counterbalanced version and 97.0% without the counterbalance feature. If the converter were coupled to a Stirling engine with design parameters more typical of high-technology Stirling engines, counterbalanced converter efficiency could be increased to 99.6%. Dynamic computer simulation studies were conducted to evaluate performance and system sensitivities. Production costs of the complete Stirling hydraulic/electric power system were evaluated at $6506, which compared with $8746 for an alternative Stirling engine/linear alternator system.
The demodulated band transform
Kovach, Christopher K.; Gander, Phillip E.
2016-01-01
Background: Windowed Fourier decompositions (WFD) are widely used in measuring stationary and non-stationary spectral phenomena and in describing pairwise relationships among multiple signals. Although a variety of WFDs see frequent application in electrophysiological research, including the short-time Fourier transform, continuous wavelets, band-pass filtering and multitaper-based approaches, each carries certain drawbacks related to computational efficiency and spectral leakage. This work surveys the advantages of a WFD not previously applied in electrophysiological settings. New Methods: A computationally efficient form of complex demodulation, the demodulated band transform (DBT), is described. Results: DBT is shown to provide an efficient approach to spectral estimation with minimal susceptibility to spectral leakage. In addition, it lends itself well to adaptive filtering of non-stationary narrowband noise. Comparison with existing methods: A detailed comparison with alternative WFDs is offered, with an emphasis on the relationship between DBT and Thomson's multitaper. DBT is shown to perform favorably in combining computational efficiency with minimal introduction of spectral leakage. Conclusion: DBT is ideally suited to efficient estimation of both stationary and non-stationary spectral and cross-spectral statistics with minimal susceptibility to spectral leakage. These qualities are broadly desirable in many settings. PMID:26711370
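As a rough illustration of the complex-demodulation idea underlying the DBT (a generic sketch with assumed parameters, not the authors' DBT code): a band is shifted to baseband by multiplication with a complex exponential at the band centre and then low-pass filtered, yielding the time-varying complex amplitude of that band.

```python
import numpy as np

def demodulate_band(x, fs, f0, bw):
    """Complex-demodulate signal x around centre frequency f0 (Hz) with bandwidth bw (Hz)."""
    n = len(x)
    t = np.arange(n) / fs
    baseband = x * np.exp(-2j * np.pi * f0 * t)   # shift the band of interest to 0 Hz
    X = np.fft.fft(baseband)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    X[np.abs(freqs) > bw / 2] = 0.0               # crude brick-wall low-pass filter
    return np.fft.ifft(X)                         # complex envelope: amplitude and phase

# Example: recover the 10 Hz component of a two-tone signal
fs = 200.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 35 * t)
env = demodulate_band(x, fs, f0=10.0, bw=2.0)
print(np.abs(env)[len(env) // 2])                 # ~0.5, i.e. half the real amplitude of the 10 Hz tone
```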
Advanced propulsion for LEO-Moon transport. 3: Transportation model. M.S. Thesis - California Univ.
NASA Technical Reports Server (NTRS)
Henley, Mark W.
1992-01-01
A simplified computational model of a low Earth orbit-Moon transportation system has been developed to provide insight into the benefits of new transportation technologies. A reference transportation infrastructure, based upon near-term technology developments, is used as a departure point for assessing other, more advanced alternatives. Comparison of the benefits of technology application, measured in terms of a mass payback ratio, suggests that several of the advanced technology alternatives could substantially improve the efficiency of low Earth orbit-Moon transportation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.
NASA Astrophysics Data System (ADS)
Tong, Minh Q.; Hasan, M. Monirul; Gregory, Patrick D.; Shah, Jasmine; Park, B. Hyle; Hirota, Koji; Liu, Junze; Choi, Andy; Low, Karen; Nam, Jin
2017-02-01
We demonstrate a computationally efficient optical coherence elastography (OCE) method based on fringe washout. By introducing ultrasound in alternating depth profiles, we can obtain information on the mechanical properties of a sample within the acquisition of a single image. This is achieved by simply comparing the intensity in adjacent depth profiles to quantify the degree of fringe washout. Phantom agar samples with various densities were measured and quantified by our OCE technique, and a correlation with Young's modulus measured by atomic force microscopy (AFM) was observed. Knee cartilage samples from monoiodoacetate-induced arthritis (MIA) rat models were used to replicate cartilage damage, and our proposed OCE technique, along with intensity and birefringence analyses and AFM measurements, was applied. The results indicate that the correlations between our OCE technique and polarization-sensitive OCT, AFM Young's modulus measurements, and histology are promising. Our OCE technique is applicable to any existing OCT system and is demonstrated to be computationally efficient.
The Problem With the Placement Study.
ERIC Educational Resources Information Center
Miner, Norris
This study compared the effectiveness and efficiency of two alternative methods for determining the status of graduates of Seminole Community College. The first method involved the identification of graduates, design and mailing of a questionnaire, and analysis of response data, as mandated by the state. The second method compared computer data…
ERIC Educational Resources Information Center
Nuzzo, David
1999-01-01
Discusses outsourcing in library technical-services departments and how to make the department more cost-effective to limit the need for outsourcing as a less expensive alternative. Topics include experiences at State University of New York at Buffalo; efficient use of computers for in-house programs; and staff participation. (LRW)
Polymorphous Computing Architectures
2007-12-12
provide a multiprocessor implementation. In this work, we introduce the Atomos transactional programming language, which is the first to include...implicit transactions, strong atomicity, and a scalable multiprocessor implementation [47]. Atomos is derived from Java, but replaces its synchronization...and conditional waiting constructs with transactional alternatives. The Atomos conditional waiting proposal is tailored to allow efficient
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand
NASA Astrophysics Data System (ADS)
Bodenstein, Christian
In a world where more and more businesses trade in an online market, the supply of online services to the ever-growing demand could quickly reach its capacity limits. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots yet facing too little demand during low-traffic timeslots, although the latter is becoming less frequent. At this point, deciding which user is allocated what level of service becomes essential. The concept of Grid computing offers a meaningful alternative to conventional super-computing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing also harbors great energy-saving potential. When scheduling projects in such a Grid environment, however, simply assigning one process to a system becomes so computationally complex that schedules are often produced too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize utility given some constraint, often reverting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternate model of energy-efficient scheduling while keeping a respectable amount of economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
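A minimal sketch of a generalized p-shrinkage mapping of the kind used in such alternating-minimization subproblems, following the commonly used form shrink_p(t, lambda) = sign(t) * max(|t| - lambda^(2-p) |t|^(p-1), 0); the paper's exact operator and parameters may differ.

```python
import numpy as np

def p_shrink(t, lam, p):
    """Generalized p-shrinkage; reduces to the usual soft-thresholding when p = 1."""
    mag = np.abs(t)
    # guard against 0**negative for zero entries, then zero them out at the end
    shrunk = np.maximum(mag - lam**(2.0 - p) * np.where(mag > 0, mag, 1.0)**(p - 1.0), 0.0)
    return np.sign(t) * np.where(mag > 0, shrunk, 0.0)

t = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
print(p_shrink(t, lam=0.5, p=1.0))   # soft thresholding: [-1.5, 0., 0., 0., 1.5]
print(p_shrink(t, lam=0.5, p=0.5))   # p < 1 suppresses small entries harder while shrinking large ones less
```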
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
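A minimal sketch of the classical load-dependent (Wilson-type) Ritz vector recurrence, assuming a stiffness matrix K, mass matrix M and spatial load pattern f; the space-station model and the implementation details of the paper are not reproduced.

```python
import numpy as np

def load_dependent_ritz(K, M, f, n_vec):
    """Load-dependent Ritz vectors (sketch): K x1 = f, then K x_i = M x_{i-1},
    with Gram-Schmidt M-orthonormalization at each step."""
    n = K.shape[0]
    X = np.zeros((n, n_vec))
    x = np.linalg.solve(K, f)                   # static response to the spatial load pattern
    x /= np.sqrt(x @ M @ x)                     # M-normalize
    X[:, 0] = x
    for i in range(1, n_vec):
        x = np.linalg.solve(K, M @ X[:, i - 1]) # next vector from the inertia of the previous one
        for j in range(i):                      # M-orthogonalize against earlier vectors
            x -= (X[:, j] @ M @ x) * X[:, j]
        X[:, i] = x / np.sqrt(x @ M @ x)
    return X                                    # columns form an M-orthonormal Ritz basis

# Tiny example: 5-DOF spring-mass chain with a tip load
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
f = np.zeros(n); f[-1] = 1.0
R = load_dependent_ritz(K, M, f, 3)
print(np.round(R.T @ M @ R, 6))                 # ~ identity (M-orthonormal basis)
```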
Alternate Multilayer Gratings with Enhanced Diffraction Efficiency in the 500-5000 eV Energy Domain
NASA Astrophysics Data System (ADS)
Polack, François; Lagarde, Bruno; Idir, Mourad; Cloup, Audrey Liard; Jourdain, Erick; Roulliay, Marc; Delmotte, Franck; Gautier, Julien; Ravet-Krill, Marie-Françoise
2007-01-01
An alternate multilayer (AML) grating is a two-dimensional diffraction structure formed on an optical surface, having a 0.5 duty cycle in both the in-plane and in-depth directions. It can be made by covering a shallow-depth laminar grating with a multilayer stack. We show here that their 2D structure confers on AML gratings a high angular and energetic selectivity and therefore enhanced diffraction properties when used in grazing incidence. In the tender X-ray range (500 eV - 5000 eV) they behave much like blazed gratings. Over 15% efficiency has been measured on a 1200 lines/mm Mo/Si AML grating in the 1.2 - 1.5 keV energy range. Computer simulations show that selected multilayer materials such as Cr/C should allow diffraction efficiency over 50% at photon energies over 3 keV.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.
An alternative low-loss stack topology for vanadium redox flow battery: Comparative assessment
NASA Astrophysics Data System (ADS)
Moro, Federico; Trovò, Andrea; Bortolin, Stefano; Del Col, Davide; Guarnieri, Massimo
2017-02-01
Two vanadium redox flow battery topologies have been compared. In the conventional series stack, bipolar plates connect cells electrically in series and hydraulically in parallel. The alternative topology consists of cells connected in parallel inside stacks by means of monopolar plates in order to reduce shunt currents along channels and manifolds. Channelled and flat current collectors interposed between cells were considered in both topologies. In order to compute the stack losses, an equivalent circuit model of a VRFB cell was built from a 2D FEM multiphysics numerical model based on Comsol®, accounting for coupled electrical, electrochemical, and charge and mass transport phenomena. Shunt currents were computed inside the cells with 3D FEM models and in the piping and manifolds by means of equivalent circuits solved with Matlab®. Hydraulic losses were computed with analytical models in piping and manifolds and with 3D numerical analyses based on ANSYS Fluent® in the cell porous electrodes. Total losses in the alternative topology were one order of magnitude lower than in an equivalent conventional battery. The alternative topology with channelled current collectors exhibits the lowest shunt currents and hydraulic losses, with round-trip efficiency about 10% higher than that of the conventional topology.
Bistatic passive radar simulator with spatial filtering subsystem
NASA Astrophysics Data System (ADS)
Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian
2009-06-01
The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar developed in the Matlab numerical computing environment and to present several alternative ways in which it can be operated. The idea of the proposed solution is based on an analytic representation of transmitted direct signals and reflected echo signals. As the spatial filtering subsystem, a beamforming network of ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results of computer simulations of the elaborated virtual simulator are provided and discussed.
n-body simulations using message passing parallel computers.
NASA Astrophysics Data System (ADS)
Grama, A. Y.; Kumar, V.; Sameh, A.
The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
NASA Astrophysics Data System (ADS)
Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.
2015-12-01
Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are both time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership in identifying and evaluating new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
NASA Astrophysics Data System (ADS)
Song, Wanjun; Zhang, Hou
2017-11-01
Through introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed and the corresponding formulae for programming are deduced. In order to further improve computational efficiency, an iteration method rather than Gaussian elimination is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1984-01-01
The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
NASA Astrophysics Data System (ADS)
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
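A minimal sketch of the repeated-averaging idea that PTAM builds on, illustrated here on a generic slowly convergent oscillatory integral rather than on actual wavenumber integrals for synthetic seismograms: partial integrals taken up to successive zeros of the oscillating integrand (where the partial-integration curve has its peaks and troughs) are repeatedly averaged to accelerate convergence.

```python
import numpy as np

def repeated_average(seq, levels):
    """Repeatedly average adjacent terms of an oscillating sequence of partial integrals."""
    s = np.asarray(seq, dtype=float)
    for _ in range(levels):
        s = 0.5 * (s[:-1] + s[1:])
    return s[-1]

# Slowly convergent oscillatory integral: integral_0^inf sin(x)/x dx = pi/2.
def partial_integrals(n_lobes, pts_per_lobe=2000):
    vals, total = [], 0.0
    for k in range(n_lobes):
        x = np.linspace(k * np.pi + 1e-12, (k + 1) * np.pi, pts_per_lobe)
        total += np.trapz(np.sin(x) / x, x)   # integrate lobe by lobe (between zeros of sin)
        vals.append(total)
    return vals

parts = partial_integrals(12)
print(abs(parts[-1] - np.pi / 2))                    # plain truncation: error of a few percent
print(abs(repeated_average(parts, 8) - np.pi / 2))   # repeated averaging: error orders of magnitude smaller
```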
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
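A purely classical toy sketch of the variational principle the algorithm exploits (a parameterized trial state, the energy expectation ⟨ψ(θ)|H|ψ(θ)⟩, and a classical optimizer updating θ); in the experiment the expectation value is evaluated on the photonic processor, and the Hamiltonian below is an arbitrary 2x2 example, not the He-H+ Hamiltonian.

```python
import numpy as np

# Toy 2x2 Hamiltonian (a real VQE targets molecular Hamiltonians such as He-H+)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """Simple parameterized trial state |psi(theta)> = cos(theta)|0> + sin(theta)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi                 # <psi|H|psi> for real symmetric H and real psi

# Classical optimizer loop (a crude grid search, for clarity)
thetas = np.linspace(0, np.pi, 2001)
best = min(thetas, key=energy)
print(energy(best))                      # variational estimate of the ground-state energy
print(np.linalg.eigvalsh(H)[0])          # exact ground-state energy, approximately -1.118
```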
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often obtains sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to obtain local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed; however, its efficiency must be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration and a robust gradient projection method is utilized for solving the sub-problem. Numerical results for synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.
Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán
2018-04-05
Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of Neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
PRESAGE: PRivacy-preserving gEnetic testing via SoftwAre Guard Extension.
Chen, Feng; Wang, Chenghong; Dai, Wenrui; Jiang, Xiaoqian; Mohammed, Noman; Al Aziz, Md Momin; Sadat, Md Nazmus; Sahinalp, Cenk; Lauter, Kristin; Wang, Shuang
2017-07-26
Advances in DNA sequencing technologies have prompted a wide range of genomic applications to improve healthcare and facilitate biomedical research. However, privacy and security concerns have emerged as a challenge for utilizing cloud computing to handle sensitive genomic data. We present one of the first implementations of a Software Guard Extensions (SGX) based securely outsourced genetic testing framework, which leverages multiple cryptographic protocols and a minimal perfect hash scheme to enable efficient and secure data storage and computation outsourcing. We compared the performance of the proposed PRESAGE framework with a state-of-the-art homomorphic encryption scheme, as well as a plaintext implementation. The experimental results demonstrated significant performance gains over the homomorphic encryption methods and a small computational overhead in comparison to the plaintext implementation. The proposed PRESAGE provides an alternative solution for secure and efficient genomic data outsourcing in an untrusted cloud by using a hybrid framework that combines secure hardware and multiple crypto protocols.
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
NASA Astrophysics Data System (ADS)
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
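Each ADI half-step of such a splitting reduces to tridiagonal solves; a minimal sketch of the standard Thomas algorithm for this task is shown below, applied to a generic implicit 1D diffusion step rather than to the paper's osmosis discretisation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one implicit 1D diffusion step (I - r*Lxx) u_new = u_old, as in a single ADI direction
n, r = 100, 0.5
a = np.full(n, -r); a[0] = 0.0
c = np.full(n, -r); c[-1] = 0.0
b = np.full(n, 1.0 + 2.0 * r)
u_old = np.random.default_rng(1).random(n)
u_new = thomas(a, b, c, u_old)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ u_new, u_old))   # True
```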
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
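A minimal sketch of the symmetric rank-one (SR1) update used to build a Hessian approximation from successive gradients, shown here on a generic quadratic rather than on the RBDO probabilistic constraints.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update of Hessian approximation B, given step s and gradient change y."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                          # standard safeguard: skip when the denominator is too small
    return B + np.outer(r, r) / denom

# Demo on a quadratic f(x) = 0.5 x^T A x, whose true Hessian is A
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
B = np.eye(2)
x = np.array([1.0, -1.0])
for _ in range(4):
    x_new = x - 0.1 * grad(x)             # any sequence of (independent) steps feeds the update
    B = sr1_update(B, x_new - x, grad(x_new) - grad(x))
    x = x_new
print(B)                                  # recovers the true Hessian A for this quadratic
```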
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
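As a rough sketch of the polynomial-preconditioning alternative mentioned above, the following conjugate gradient solver approximates the preconditioner action by a few Jacobi sweeps (a truncated Neumann series), so preconditioning needs only matrix-vector products instead of triangular solves; this is a generic illustration, not code from the survey.

```python
import numpy as np

def poly_prec_cg(A, b, degree=3, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a simple Jacobi-based polynomial (truncated Neumann series)
    preconditioner: M^{-1} r is built from a few Jacobi sweeps, i.e. only mat-vecs."""
    d_inv = 1.0 / np.diag(A)

    def apply_Minv(r):
        z = d_inv * r
        for _ in range(degree):
            z = d_inv * r + (z - d_inv * (A @ z))   # z <- D^{-1} r + (I - D^{-1} A) z
        return z

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    for k in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k + 1
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

# Example: 1D Laplacian system
n = 120
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = poly_prec_cg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```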
Zhou, Xiang
2017-12-01
Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, makes use of summary statistics, and is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
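A toy method-of-moments illustration of the Haseman-Elston regression idea that MQS generalizes (regressing pairwise phenotype cross-products on genetic relatedness to estimate SNP heritability); this is a simulated sketch, not the MQS or GEMMA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, h2_true = 1000, 500, 0.5

# Simulate standardized genotypes, genetic effects and phenotypes
X = rng.standard_normal((n, p))
X = (X - X.mean(0)) / X.std(0)
beta = rng.standard_normal(p) * np.sqrt(h2_true / p)
y = X @ beta + rng.standard_normal(n) * np.sqrt(1 - h2_true)
y = (y - y.mean()) / y.std()

K = X @ X.T / p                          # genetic relatedness matrix (GRM)

# Haseman-Elston regression: regress off-diagonal products y_i * y_j on K_ij
iu = np.triu_indices(n, k=1)
prod = np.outer(y, y)[iu]
k_od = K[iu]
h2_est = (k_od @ prod) / (k_od @ k_od)   # least-squares slope = SNP heritability estimate
print(h2_est)                            # roughly 0.5, up to sampling noise
```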
NASA Astrophysics Data System (ADS)
Oh, Su-ji; Choi, Eunju; Choi, Nagchoul; Park, Cheonyoung
2013-04-01
Recently, due to growing recognition of the environmental problems caused by cyanide, there is a worldwide quest to find viable alternatives. One of the alternatives is a chloride solvent (chlorine-hypochlorite) with an appropriate oxidizing agent. The rate of dissolution of Au by chloride solvent is much faster than that by cyanide. Also, due to the presence of chloride ions, there is no passivation of gold surfaces during chlorination. The objective of this work was to investigate Au extraction efficiency from scrap of used computers by chloride solvent under various experimental conditions (pulp density, chlorine-hypochlorite ratio and NaCl concentration). In addition, a recovery experiment was conducted to examine the precipitation efficiency of Au from the extracted solution under various metabisulfite concentrations. In an EDS analysis, valuable metals such as Cu, Sn, Sb, Al, Ni, Pb and Au were observed in the scrap of used computers. The extraction experiments showed that the highest extraction rate was obtained at 1% pulp density with a chlorine-hypochlorite ratio of 2:1 and an NaCl concentration of 2 M. The highest Au recovery (precipitation) rate was observed with the addition of sodium metabisulfite at 2 M concentration. Under these conditions, chlorine-hypochlorite could effectively extract Au from scrap computer sections, and the sodium metabisulfite additive could easily precipitate the Au from the chlorine-hypochlorite solution.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both total power and interference introduced to the primary users (PUs) constraints. The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
GPU-based Parallel Application Design for Emerging Mobile Devices
NASA Astrophysics Data System (ADS)
Gupta, Kshitij
A revolution is underway in the computing world that is causing a fundamental paradigm shift in device capabilities and form factor, with a move from well-established legacy desktop/laptop computers to mobile devices of varying sizes and shapes. Amongst all the tasks these devices must support, graphics has emerged as the 'killer app' for providing a fluid user interface and high-fidelity game rendering, effectively making the graphics processor (GPU) one of the key components in (present and future) mobile systems. By utilizing the GPU as a general-purpose parallel processor, this dissertation explores the GPU computing design space from an applications standpoint, in the mobile context, by focusing on key challenges presented by these devices---limited compute, memory bandwidth, and stringent power consumption requirements---while improving the overall application efficiency of the increasingly important speech recognition workload for mobile user interaction. We broadly partition trends in GPU computing into four major categories. We analyze hardware and programming model limitations in current-generation GPUs and detail an alternate programming style called Persistent Threads, identify four use case patterns, and propose minimal modifications that would be required for extending native support. We show how, by manually extracting data locality and altering the speech recognition pipeline, we are able to achieve significant savings in memory bandwidth while simultaneously reducing the compute burden on GPU-like parallel processors. As we foresee GPU computing evolving from its current 'co-processor' model into an independent 'applications processor' that is capable of executing complex work independently, we create an alternate application framework that enables the GPU to handle all control-flow dependencies autonomously at run time while minimizing host involvement to just issuing commands, which facilitates an efficient application implementation. Finally, as the compute and communication capabilities of mobile devices improve, we analyze the energy implications of processing speech recognition locally (on-chip) versus offloading it to servers (in-cloud).
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
News on Seeking Gaia's Astrometric Core Solution with AGIS
NASA Astrophysics Data System (ADS)
Lammers, U.; Lindegren, L.
We report on recent new developments around the Astrometric Global Iterative Solution system. This includes the availability of an efficient Conjugate Gradient solver and the Generic Astrometric Calibration scheme that had been proposed a while ago. The number of primary stars to be included in the core solution is now believed to be significantly higher than the 100 Million that served as baseline until now. Cloud computing services are being studied as a possible cost-effective alternative to running AGIS on dedicated computing hardware at ESAC during the operational phase.
NASA Astrophysics Data System (ADS)
Balac, Stéphane; Fernandez, Arnaud
2016-02-01
The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger equation (GNLSE), involved in optics, e.g., in the modelling of light-wave propagation in an optical fibre, by the Interaction Picture method, an efficient alternative to the Symmetric Split-Step method. In the SPIP program, a dedicated costless adaptive step-size control based on the use of a 4th-order embedded Runge-Kutta method is implemented in order to speed up the resolution.
NASA Technical Reports Server (NTRS)
Kao, M. H.; Bodenheimer, R. E.
1976-01-01
The tse computer's capability of achieving image congruence between temporal and multiple images with misregistration due to rotational differences is reported. The coordinate transformations are obtained and a general algorithm is devised to perform image rotation using tse operations very efficiently. The details of this algorithm, as well as its theoretical implications, are presented. Step-by-step procedures of image registration are described in detail. Numerous examples are also employed to demonstrate the correctness and the effectiveness of the algorithm, and conclusions and recommendations are made.
On automating domain connectivity for overset grids
NASA Technical Reports Server (NTRS)
Chiu, Ing-Tsau
1994-01-01
An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera-type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems.
NASA Technical Reports Server (NTRS)
Holman, R. R.; Lippert, T. E.
1976-01-01
Electric power plant costs and efficiencies are presented for two basic liquid-metal cycles corresponding to 922 and 1089 K (1200 and 1500 F) for commercial applications using direct coal firing. Sixteen plant designs are considered, for which major component equipment was sized and costed. The design basis for each major component is discussed. Also described is the overall systems computer model that was developed to analyze the thermodynamics of the various cycle configurations that were considered.
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2010-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
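A minimal sketch of the proper orthogonal decomposition step shared by the three procedures: extracting an orthonormal modal basis from response snapshots via the SVD and truncating by captured energy. The smooth orthogonal decomposition and the nonlinear reduced-order simulation are not reproduced, and the snapshot data below are synthetic.

```python
import numpy as np

def pod_basis(snapshots, energy_frac=0.999):
    """Proper orthogonal decomposition of a snapshot matrix (rows: DOFs, columns: time samples).
    Returns the fewest left singular vectors capturing the requested fraction of signal energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, energy_frac)) + 1
    return U[:, :n_modes], s

# Toy snapshot matrix: a 50-DOF response dominated by two spatial patterns plus noise
rng = np.random.default_rng(0)
n_dof, n_snap = 50, 400
t = np.linspace(0, 10, n_snap)
phi1 = np.sin(np.pi * np.arange(n_dof) / n_dof)
phi2 = np.sin(2 * np.pi * np.arange(n_dof) / n_dof)
Q = (np.outer(phi1, np.sin(3 * t)) + 0.3 * np.outer(phi2, np.cos(7 * t))
     + 0.01 * rng.standard_normal((n_dof, n_snap)))
basis, sing_vals = pod_basis(Q)
print(basis.shape)        # (50, n_modes); here the two dominant modes are typically retained
```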
Bypassing the malfunction junction in warm dense matter simulations
NASA Astrophysics Data System (ADS)
Cangi, Attila; Pribram-Jones, Aurora
2015-03-01
Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.
Vectorization on the star computer of several numerical methods for a fluid flow problem
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
Redundant interferometric calibration as a complex optimization problem
NASA Astrophysics Data System (ADS)
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
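A generic sketch of a Levenberg-Marquardt damping loop on a small real-valued nonlinear least-squares problem, to illustrate the type of solver used; the redundant-calibration Jacobians, complex gains and the approximate matrix inversion of redundant STEFCAL are not reproduced.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, n_iter=50, lam=1e-3):
    """Generic Levenberg-Marquardt loop: damped Gauss-Newton steps with adaptive damping."""
    x, lam_k = np.asarray(x0, float), lam
    for _ in range(n_iter):
        r, J = residual(x), jac(x)
        g, H = J.T @ r, J.T @ J
        step = np.linalg.solve(H + lam_k * np.diag(np.diag(H)), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam_k = x + step, lam_k / 3.0     # accept step, relax damping
        else:
            lam_k *= 3.0                         # reject step, increase damping
    return x

# Toy example: fit y = a * exp(b * t) to noisy data
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 40)
y = 2.0 * np.exp(0.7 * t) + 0.05 * rng.standard_normal(t.size)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(residual, jac, [1.0, 0.1]))   # approximately [2.0, 0.7]
```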
Resonant transition-based quantum computation
NASA Astrophysics Data System (ADS)
Chiang, Chen-Fu; Hsieh, Chang-Yu
2017-05-01
In this article we assess a novel quantum computation paradigm based on the resonant transition (RT) phenomenon commonly associated with atomic and molecular systems. We thoroughly analyze the intimate connections between the RT-based quantum computation and the well-established adiabatic quantum computation (AQC). Both quantum computing frameworks encode solutions to computational problems in the spectral properties of a Hamiltonian and rely on the quantum dynamics to obtain the desired output state. We discuss how one can adapt any adiabatic quantum algorithm to a corresponding RT version and the two approaches are limited by different aspects of Hamiltonians' spectra. The RT approach provides a compelling alternative to the AQC under various circumstances. To better illustrate the usefulness of the novel framework, we analyze the time complexity of an algorithm for 3-SAT problems and discuss straightforward methods to fine tune its efficiency.
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.
NASA Astrophysics Data System (ADS)
Ercan, İlke; Suyabatmaz, Enes
2018-06-01
The saturation in the efficiency and performance scaling of conventional electronic technologies brings about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, where signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One of the potential realizations of such circuits is via single electron tunneling (SET) devices, since SET technology enables simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations in Brownian NAND and half-adder circuits implemented using SET technology. The method we employ here establishes a solid ground that enables studying computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insights into how far its efficiency can be improved in principle. In order to provide a basis for comparison, we also analyze a NAND gate and half-adder circuit implemented in complementary metal oxide semiconductor technology to show how the fundamental bound of the Brownian circuit compares against a conventional paradigm.
Yeh, Chia-Nan; Chai, Jeng-Da
2016-01-01
We investigate the role of Kekulé and non-Kekulé structures in the radical character of alternant polycyclic aromatic hydrocarbons (PAHs) using thermally-assisted-occupation density functional theory (TAO-DFT), an efficient electronic structure method for the study of large ground-state systems with strong static correlation effects. Our results reveal that the studies of Kekulé and non-Kekulé structures qualitatively describe the radical character of alternant PAHs, which could be useful when electronic structure calculations are infeasible due to their high computational cost. In addition, our results support previous findings on the increase in radical character with increasing system size. For alternant PAHs with the same number of aromatic rings, the geometrical arrangements of aromatic rings are responsible for their radical character. PMID:27457289
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-25
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
EMILiO: a fast algorithm for genome-scale strain design.
Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan
2011-05-01
Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design. Copyright © 2011 Elsevier Inc. All rights reserved.
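The successive linear programming idea can be sketched on a toy problem (illustrative only; EMILiO itself operates on genome-scale flux-balance models): the nonlinear objective is linearized around the current iterate, each linearized subproblem is solved with scipy.optimize.linprog, and a shrinking trust region controls the step size.

    import numpy as np
    from scipy.optimize import linprog

    # toy nonlinear objective f(x) = x0 * x1 to be maximized over a small polytope
    def f(x):
        return x[0] * x[1]

    def grad_f(x):
        return np.array([x[1], x[0]])

    A_ub = np.array([[1.0, 1.0]])       # x0 + x1 <= 4
    b_ub = np.array([4.0])
    bounds = [(0.0, 3.0), (0.0, 3.0)]

    x = np.array([0.5, 0.5])            # feasible starting point
    delta = 1.0                          # trust-region radius on each linearized step
    for _ in range(30):
        g = grad_f(x)
        # local box: stay within the trust region and the original variable bounds
        local_bounds = [(max(lo, xi - delta), min(hi, xi + delta))
                        for (lo, hi), xi in zip(bounds, x)]
        res = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=local_bounds, method='highs')
        x_new = res.x
        if f(x_new) <= f(x) + 1e-9:      # no improvement: shrink the trust region
            delta *= 0.5
            if delta < 1e-6:
                break
        else:
            x = x_new
    print('SLP solution:', x, 'objective:', f(x))

On this toy polytope the iterates settle at x = (2, 2), the maximizer of x0*x1 subject to the constraints; the same linearize-solve-trust-region loop scales to much larger linear programs.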
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% is possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
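The numerical issue can be sketched on a toy single-bucket (linear reservoir) model, which is not the authors' hydrologic model; the recession constant and synthetic daily rain below are illustrative. A first-order, explicit, fixed-step update is compared against an adaptive-step integrator from scipy, and the difference between the two gives a rough picture of the integration error that can distort posterior inference:

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 0.8                       # fast linear-reservoir recession [1/day]
    days = 60
    rng = np.random.default_rng(1)
    rain = rng.exponential(2.0, size=days) * (rng.random(days) < 0.3)   # intermittent daily rain [mm/day]

    def precip(t):
        return rain[min(int(t), days - 1)]

    def dSdt(t, S):
        return precip(t) - k * S[0]

    # first-order explicit fixed-step (daily) integration
    S_euler = np.zeros(days + 1)
    for n in range(days):
        S_euler[n + 1] = S_euler[n] + 1.0 * (precip(n) - k * S_euler[n])

    # adaptive-step reference solution (max_step keeps the solver from skipping rainy days)
    sol = solve_ivp(dSdt, (0, days), [0.0], t_eval=np.arange(days + 1),
                    rtol=1e-8, atol=1e-10, max_step=1.0)

    err = np.max(np.abs(S_euler - sol.y[0]))
    print('max storage difference (mm):', err)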
Barker, Graeme; Johnson, David G; Young, Paul C; Macgregor, Stuart A; Lee, Ai-Lan
2015-09-21
Gold(I)-catalysed direct allylic etherifications have been successfully carried out with chirality transfer to yield enantioenriched, γ-substituted secondary allylic ethers. Our investigations include a full substrate-scope screen to ascertain substituent effects on the regioselectivity, stereoselectivity and efficiency of chirality transfer, as well as control experiments to elucidate the mechanistic subtleties of the chirality-transfer process. Crucially, addition of molecular sieves was found to be necessary to ensure efficient and general chirality transfer. Computational studies suggest that the efficiency of chirality transfer is linked to the aggregation of the alcohol nucleophile around the reactive π-bound Au-allylic ether complex. With a single alcohol nucleophile, a high degree of chirality transfer is predicted. However, if three alcohols are present, alternative proton transfer chain mechanisms that erode the efficiency of chirality transfer become competitive. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
NASA Astrophysics Data System (ADS)
Leamy, Michael J.; Springer, Adam C.
In this research we report a parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to cores present on a dual quad-core shared-memory system (eight total processors). We note that this message passing parallelization strategy is directly applicable to cluster computing, which will be the focus of follow-on research. Results on the shared memory platform indicate nearly-ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational costs. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
Neilson, Matthew P; Mackenzie, John A; Webb, Steven D; Insall, Robert H
2010-11-01
In this paper we present a computational tool that enables the simulation of mathematical models of cell migration and chemotaxis on an evolving cell membrane. Recent models require the numerical solution of systems of reaction-diffusion equations on the evolving cell membrane and then the solution state is used to drive the evolution of the cell edge. Previous work involved moving the cell edge using a level set method (LSM). However, the LSM is computationally very expensive, which severely limits the practical usefulness of the algorithm. To address this issue, we have employed the parameterised finite element method (PFEM) as an alternative method for evolving a cell boundary. We show that the PFEM is far more efficient and robust than the LSM. We therefore suggest that the PFEM potentially has an essential role to play in computational modelling efforts towards the understanding of many of the complex issues related to chemotaxis.
Tools for Atmospheric Radiative Transfer: Streamer and FluxNet. Revised
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.; Schweiger, Axel J.
1998-01-01
Two tools for the solution of radiative transfer problems are presented. Streamer is a highly flexible medium spectral resolution radiative transfer model based on the plane-parallel theory of radiative transfer. Capable of computing either fluxes or radiances, it is suitable for studying radiative processes at the surface or within the atmosphere and for the development of remote-sensing algorithms. FluxNet is a fast neural network-based implementation of Streamer for computing surface fluxes. It allows for a sophisticated treatment of radiative processes in the analysis of large data sets and potential integration into geophysical models where computational efficiency is an issue. Documentation and tools for the development of alternative versions of FluxNet are available. Collectively, Streamer and FluxNet solve a wide variety of problems related to radiative transfer: Streamer provides the detail and sophistication needed to perform basic research on most aspects of complex radiative processes while the efficiency and simplicity of FluxNet make it ideal for operational use.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard
2017-06-06
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
Computing exact bundle compliance control charts via probability generating functions.
Chen, Binchao; Matis, Timothy; Benneyan, James
2016-06-01
Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
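A minimal sketch of the probability-generating-function idea (not the authors' code; the compliance probabilities below are made up): the PGF of the compliant-item count is the product of the per-item factors (1 - p_i) + p_i z, so the exact probability mass function follows from repeated polynomial multiplication:

    import numpy as np

    def exact_bundle_distribution(p):
        """Exact pmf of the number of compliant items among independent items
        with compliance probabilities p, via the probability generating function
        prod_i ((1 - p_i) + p_i * z) expanded by repeated polynomial multiplication."""
        pmf = np.array([1.0])
        for pi in p:
            pmf = np.convolve(pmf, [1.0 - pi, pi])
        return pmf                      # pmf[k] = P(exactly k compliant items)

    p = [0.95, 0.90, 0.85, 0.99]        # per-element compliance probabilities (illustrative)
    pmf = exact_bundle_distribution(p)
    print('P(all compliant)   =', pmf[-1])
    print('P(at least 3 of 4) =', pmf[3:].sum())
    # exact tail probabilities like these are what control-chart limits are built from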
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
NASA Astrophysics Data System (ADS)
Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.
2011-12-01
In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to any site of the Grid. Application software is currently installed in a shared area of the site visible to all Worker Nodes (WNs) through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.
NASA Astrophysics Data System (ADS)
Gómez, Carlos M.; Tirado, Dolores; Rey-Maquieira, Javier
2004-10-01
We present a computable general equilibrium (CGE) model for the Balearic Islands, specifically designed to analyze the welfare gains associated with an improvement in the allocation of water rights through voluntary water exchanges (mainly between the agriculture and urban sectors). For the implementation of the empirical model we built the social accounting matrix (SAM) from the last available input-output table of the islands (for the year 1997). Water exchanges provide an important alternative to make the allocation of water flexible enough to cope with the cyclical droughts that characterize the natural water regime on the islands. The main conclusion is that the increased efficiency provided by ``water markets'' makes this option more advantageous than the popular alternative of building new desalinization plants. Contrary to common opinion, a ``water market'' can also have positive and significant impacts on agricultural income.
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
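The proper orthogonal decomposition ingredient can be sketched as follows (synthetic snapshot data and a 99.9% energy cutoff are assumptions for illustration; the smooth orthogonal decomposition variants are not shown): the POD basis is taken from the left singular vectors of a mean-removed snapshot matrix.

    import numpy as np

    rng = np.random.default_rng(2)
    n_dof, n_snap = 200, 400
    # synthetic response snapshots: a few coherent spatial patterns plus noise
    x = np.linspace(0, 1, n_dof)
    patterns = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)], axis=1)
    amplitudes = rng.standard_normal((3, n_snap)) * np.array([[5.0], [2.0], [0.5]])
    snapshots = patterns @ amplitudes + 0.05 * rng.standard_normal((n_dof, n_snap))

    # POD: left singular vectors of the (mean-removed) snapshot matrix
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.999)) + 1     # smallest basis capturing 99.9% of the variance
    basis = U[:, :n_modes]
    print('modes retained:', n_modes)

    # reduced-order coordinates of any response vector u: q = basis.T @ u
    q = basis.T @ snapshots[:, 0]
    print('reconstruction error:', np.linalg.norm(basis @ q - snapshots[:, 0]))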
Learning to assign binary weights to binary descriptor
NASA Astrophysics Data System (ADS)
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest because their binary nature enables fast processing while requiring significantly less memory than floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is obtained with an efficient alternating greedy strategy, which significantly improves the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (Brown dataset and Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
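A hedged sketch of the matching side of this idea (the weights below are random placeholders rather than learned weights, and the simple integer quantization is not the paper's greedy binary approximation): per-bit weights turn the plain Hamming distance into a weighted Hamming distance that can still be accumulated with integer arithmetic.

    import numpy as np

    rng = np.random.default_rng(3)
    n_bits = 256
    # toy binary descriptors: a reference patch, a matching patch (few bit flips),
    # and an unrelated patch (random)
    ref = rng.integers(0, 2, n_bits, dtype=np.uint8)
    match = ref.copy()
    match[rng.choice(n_bits, 20, replace=False)] ^= 1
    other = rng.integers(0, 2, n_bits, dtype=np.uint8)

    # assumed per-bit float weights (in the paper these are learned); here: random positive weights
    w_float = rng.random(n_bits) + 0.5

    # crude binary-friendly approximation: quantize weights to small integers so the
    # weighted distance can still be accumulated with integer arithmetic
    w_int = np.maximum(1, np.round(2 * w_float)).astype(np.int64)

    def weighted_hamming(a, b, w):
        return int(np.sum(w * (a ^ b)))

    print('plain Hamming    match/other:', int(np.sum(ref ^ match)), int(np.sum(ref ^ other)))
    print('weighted Hamming match/other:', weighted_hamming(ref, match, w_int),
          weighted_hamming(ref, other, w_int))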
On the use of reverse Brownian motion to accelerate hybrid simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
Evaluation of a Multigrid Scheme for the Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.
2004-01-01
A fast multigrid solver for the steady, incompressible Navier-Stokes equations is presented. The multigrid solver is based upon a factorizable discrete scheme for the velocity-pressure form of the Navier-Stokes equations. This scheme correctly distinguishes between the advection-diffusion and elliptic parts of the operator, allowing efficient smoothers to be constructed. To evaluate the multigrid algorithm, solutions are computed for flow over a flat plate, parabola, and a Karman-Trefftz airfoil. Both nonlifting and lifting airfoil flows are considered, with a Reynolds number range of 200 to 800. Convergence and accuracy of the algorithm are discussed. Using Gauss-Seidel line relaxation in alternating directions, multigrid convergence behavior approaching that of O(N) methods is achieved. The computational efficiency of the numerical scheme is compared with that of Runge-Kutta and implicit upwind based multigrid methods.
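The alternating-direction line relaxation ingredient can be sketched on a 2-D Poisson model problem (a stand-in for the factorizable Navier-Stokes scheme, not the paper's solver): each sweep solves a tridiagonal system per grid line, first line by line in one direction and then in the other.

    import numpy as np
    from scipy.linalg import solve_banded

    n = 64                               # interior points per direction
    h = 1.0 / (n + 1)
    f = np.ones((n, n))                  # right-hand side of -Laplace(u) = f, u = 0 on the boundary
    u = np.zeros((n + 2, n + 2))         # solution including boundary ring

    # tridiagonal 1-D operator (4 on the diagonal, -1 off-diagonal) in banded storage
    ab = np.zeros((3, n))
    ab[0, 1:] = -1.0
    ab[1, :] = 4.0
    ab[2, :-1] = -1.0

    def sweep_x_lines():
        for j in range(1, n + 1):        # update one line at a time in the first direction
            rhs = h * h * f[:, j - 1] + u[1:n + 1, j - 1] + u[1:n + 1, j + 1]
            u[1:n + 1, j] = solve_banded((1, 1), ab, rhs)

    def sweep_y_lines():
        for i in range(1, n + 1):        # then one line at a time in the other direction
            rhs = h * h * f[i - 1, :] + u[i - 1, 1:n + 1] + u[i + 1, 1:n + 1]
            u[i, 1:n + 1] = solve_banded((1, 1), ab, rhs)

    for _ in range(50):
        sweep_x_lines()
        sweep_y_lines()

    # residual of the 5-point stencil as a convergence check
    res = (h * h * f - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1] - u[1:-1, :-2] - u[1:-1, 2:]))
    print('residual max-norm after 50 alternating line sweeps:', np.abs(res).max())

In a multigrid setting such sweeps would serve as the smoother on each level; here they are simply iterated to show the residual decreasing.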
Sensor network based vehicle classification and license plate identification system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J
Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternate more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.
Alternative stable qP wave equations in TTI media with their applications for reverse time migration
NASA Astrophysics Data System (ADS)
Zhou, Yang; Wang, Huazhong; Liu, Wenqing
2015-10-01
Numerical instabilities may arise if the spatial variation of symmetry axis is handled improperly when implementing P-wave modeling and reverse time migration in heterogeneous tilted transversely isotropic (TTI) media, especially in the cases where fast changes exist in TTI symmetry axis’ directions. Based on the pseudo-acoustic approximation to anisotropic elastic wave equations in Cartesian coordinates, alternative second order qP (quasi-P) wave equations in TTI media are derived in this paper. Compared with conventional stable qP wave equations, the proposed equations written in stress components contain only spatial derivatives of wavefield variables (stress components) and are free from spatial derivatives involving media parameters. These lead to an easy and efficient implementation for stable P-wave modeling and imaging. Numerical experiments demonstrate the stability and computational efficiency of the presented equations in complex TTI media.
NASA Technical Reports Server (NTRS)
Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.
1978-01-01
Steady state solutions to two time dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward time central space explicit method, and (2) the stream function-vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set-up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
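A minimal sketch of the Method of Lines on Burgers' equation (grid size, viscosity, and the choice of scipy's BDF integrator are illustrative assumptions, not the study's setup): space is discretized with central differences and the resulting ODE system is handed to an adaptive time integrator.

    import numpy as np
    from scipy.integrate import solve_ivp

    nu = 0.05                            # viscosity
    n = 201
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    u0 = np.sin(np.pi * x)               # initial profile, u = 0 at both ends

    def rhs(t, u):
        dudt = np.zeros_like(u)
        # central differences for the interior; boundary values stay fixed at zero
        dudt[1:-1] = (-u[1:-1] * (u[2:] - u[:-2]) / (2 * dx)
                      + nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2)
        return dudt

    sol = solve_ivp(rhs, (0.0, 1.5), u0, method='BDF', rtol=1e-6, atol=1e-8)
    print('steps taken:', sol.t.size, 'max |u| at final time:', np.abs(sol.y[:, -1]).max())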
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to research energy efficiency fairness in order to evaluate the balance of nodes for receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is a concave function for user association probability and receive threshold, respectively. At the same time, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to optimal energy efficiency proportional fairness with significant reduction of computational complexity.
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks
Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-01-01
In wireless powered communication networks (WPCNs), it is essential to research energy efficiency fairness in order to evaluate the balance of nodes for receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCN. The main idea is to use stochastic geometry to derive the mean proportional fairness utility function with respect to user association probability and receive threshold. Subsequently, we prove that the relaxed proportional fairness utility function is a concave function for user association probability and receive threshold, respectively. At the same time, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to optimal energy efficiency proportional fairness with significant reduction of computational complexity. PMID:28914818
A hydrological emulator for global applications - HE v1.0.0
NASA Astrophysics Data System (ADS)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong
2018-03-01
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computation efficiency of the lumped scheme is 2 orders of magnitude higher than the distributed one and 7 orders more efficient than the VIC model. A case study of uncertainty analysis for the world's 16 basins with top annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
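A sketch of the lumped scheme's building block, the monthly abcd water balance in its standard Thomas (1981) form (parameter values and synthetic forcing are illustrative assumptions; this is not the released HE v1.0.0 code):

    import numpy as np

    def abcd_lumped(P, PET, a=0.98, b=250.0, c=0.5, d=0.2, S0=100.0, G0=50.0):
        """Monthly abcd water-balance model (Thomas, 1981 formulation).
        P, PET: monthly precipitation and potential ET [mm]. Returns runoff [mm/month]."""
        S, G = S0, G0
        Q = np.zeros(len(P))
        for t in range(len(P)):
            W = P[t] + S                                  # available water
            wb = (W + b) / (2.0 * a)
            Y = wb - np.sqrt(wb**2 - W * b / a)           # evapotranspiration opportunity
            S = Y * np.exp(-PET[t] / b)                   # end-of-month soil moisture
            recharge = c * (W - Y)
            direct = (1.0 - c) * (W - Y)
            G = (G + recharge) / (1.0 + d)                # groundwater storage
            Q[t] = direct + d * G                         # direct runoff plus baseflow
        return Q

    rng = np.random.default_rng(4)
    months = 120
    P = rng.gamma(2.0, 40.0, months)                      # synthetic forcing [mm/month]
    PET = 60 + 40 * np.sin(2 * np.pi * np.arange(months) / 12.0)
    Q = abcd_lumped(P, PET)
    print('mean annual runoff [mm]:', 12 * Q.mean())

With only four parameters and a handful of arithmetic operations per month, the appeal of such a formulation for large ensemble and uncertainty studies is evident.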
Stringent and efficient assessment of boson-sampling devices.
Tichy, Malte C; Mayer, Klaus; Buchleitner, Andreas; Mølmer, Klaus
2014-07-11
Boson sampling holds the potential to experimentally falsify the extended Church-Turing thesis. The computational hardness of boson sampling, however, complicates the certification that an experimental device yields correct results in the regime in which it outmatches classical computers. To certify a boson sampler, one needs to verify quantum predictions and rule out models that yield these predictions without true many-boson interference. We show that a semiclassical model for many-boson propagation reproduces coarse-grained observables that are proposed as witnesses of boson sampling. A test based on Fourier matrices is demonstrated to falsify physically plausible alternatives to coherent many-boson propagation.
Coleman, Mari Beth; Cherry, Rebecca A; Moore, Tara C; Park, Yujeong; Cihak, David F
2015-06-01
The purpose of this study was to compare the effects of teacher-directed simultaneous prompting to computer-assisted simultaneous prompting for teaching sight words to 3 elementary school students with intellectual disability. Activities in the computer-assisted condition were designed with Intellitools Classroom Suite software whereas traditional materials (i.e., flashcards) were used in the teacher-directed condition. Treatment conditions were compared using an adapted alternating treatments design. Acquisition of sight words occurred in both conditions for all 3 participants; however, each participant either clearly responded better in the teacher-directed condition or reported a preference for the teacher-directed condition when performance was similar with computer-assisted instruction being more efficient. Practical implications and directions for future research are discussed.
Aspects of Unstructured Grids and Finite-Volume Solvers for the Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1992-01-01
One of the major achievements in engineering science has been the development of computer algorithms for solving nonlinear differential equations such as the Navier-Stokes equations. In the past, limited computer resources have motivated the development of efficient numerical schemes in computational fluid dynamics (CFD) utilizing structured meshes. The use of structured meshes greatly simplifies the implementation of CFD algorithms on conventional computers. Unstructured grids on the other hand offer an alternative to modeling complex geometries. Unstructured meshes have irregular connectivity and usually contain combinations of triangles, quadrilaterals, tetrahedra, and hexahedra. The generation and use of unstructured grids poses new challenges in CFD. The purpose of this note is to present recent developments in the unstructured grid generation and flow solution technology.
Data Structures for Extreme Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahan, Simon
As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations once thought to require expensive hardware designs and/or complex, special-purpose programming may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important application of the ideas underlying this framework is graph database technology supporting social network pattern matching used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy Laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
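The substitution logic can be sketched as follows (the default flag value and the tiny in-memory tables are assumptions for illustration, not SIM_ADJUST's documented file formats): each expected name is checked against the process-model output, and a sequence of alternatives, either other simulated names or constants, is walked until a usable value is found.

    # Hypothetical illustration of the substitution step; names, tables, and the
    # default flag value below are assumptions, not SIM_ADJUST's actual formats.
    DEFAULT_FLAG = -999.0   # stand-in for "process model produced no useful value"

    # expected observation names with fallback alternatives in priority order
    # (an alternative may be another simulated name, e.g. a neighbouring cell, or a constant)
    expected = {
        "head_obs1": ["head_cell_above_obs1", "0.0"],
        "head_obs2": [],
        "flow_obs1": ["-1.5"],
    }

    # simulated equivalents as written by the process model (head_obs1 went dry)
    simulated = {"head_obs1": DEFAULT_FLAG, "head_obs2": 12.7,
                 "head_cell_above_obs1": 10.4}

    def is_number(tok):
        try:
            float(tok)
            return True
        except ValueError:
            return False

    adjusted = {}
    for name, alternatives in expected.items():
        value = simulated.get(name, DEFAULT_FLAG)
        for alt in alternatives:                 # walk the fallback sequence until a usable value appears
            if value != DEFAULT_FLAG:
                break
            value = float(alt) if is_number(alt) else simulated.get(alt, DEFAULT_FLAG)
        adjusted[name] = value

    for name, value in adjusted.items():
        print(name, value)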
Efficient fuzzy C-means architecture for image segmentation.
Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen
2011-01-01
This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process for accelerating the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and low misclassification rate.
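For reference, the two iterative updates that the architecture merges can be sketched in their textbook software form (plain fuzzy c-means on synthetic intensity data, without the spatial constraint term of the paper):

    import numpy as np

    def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Standard fuzzy c-means: alternate membership and centroid updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return U, centroids

    # toy 1-D "image": three intensity populations, clustered on intensity alone
    rng = np.random.default_rng(5)
    intensities = np.concatenate([rng.normal(30, 5, 300), rng.normal(120, 8, 200),
                                  rng.normal(200, 6, 150)])
    U, centroids = fuzzy_c_means(intensities[:, None], n_clusters=3)
    print('cluster centroids:', np.sort(centroids.ravel()).round(1))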
Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors
NASA Technical Reports Server (NTRS)
Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David
2012-01-01
Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting - in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-30
...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating Methods: Public Meeting AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption...
Development of a Low Inductance Linear Alternator for Stirling Power Convertors
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Schifer, Nicholas A.
2017-01-01
The free-piston Stirling power convertor is a promising technology for high efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations, and compares the predictions with experimental data for one of the configurations that has been built and is currently being tested.
Development of a Low-Inductance Linear Alternator for Stirling Power Convertors
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Schifer, Nicholas A.
2017-01-01
The free-piston Stirling power convertor is a promising technology for high-efficiency heat-to-electricity power conversion in space. Stirling power convertors typically utilize linear alternators for converting mechanical motion into electricity. The linear alternator is one of the heaviest components of modern Stirling power convertors. In addition, state-of-the-art Stirling linear alternators usually require the use of tuning capacitors or active power factor correction controllers to maximize convertor output power. The linear alternator to be discussed in this paper eliminates the need for tuning capacitors and delivers electrical power output in which current is inherently in phase with voltage. No power factor correction is needed. In addition, the linear alternator concept requires very little iron, so core loss has been virtually eliminated. This concept is a unique moving coil design where the magnetic flux path is defined by the magnets themselves. This paper presents computational predictions for two different low inductance alternator configurations. Additionally, one of the configurations was built and tested at GRC, and the experimental data is compared with the predictions.
A novel three-stage distance-based consensus ranking method
NASA Astrophysics Data System (ADS)
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
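The third-stage idea can be sketched as an assignment problem (the rank matrix below is invented for illustration and is not the cross-efficiency-style construction of the paper): each alternative is assigned to one group rank position so that the total distance to its individual rank positions is minimized, for example with scipy's Hungarian-algorithm solver.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # assumed individual rank positions: rank_matrix[i, k] = rank of alternative i under weight set k
    rank_matrix = np.array([[1, 2, 1, 3],
                            [2, 1, 3, 1],
                            [3, 3, 2, 2],
                            [4, 4, 4, 4]])
    n_alt = rank_matrix.shape[0]

    # cost of placing alternative i at group rank position r: total absolute distance
    # between r and the individual rank positions of alternative i
    cost = np.zeros((n_alt, n_alt))
    for i in range(n_alt):
        for r in range(1, n_alt + 1):
            cost[i, r - 1] = np.abs(rank_matrix[i] - r).sum()

    rows, cols = linear_sum_assignment(cost)     # one alternative per group rank, minimum total distance
    group_rank = {f'alternative {i + 1}': int(c + 1) for i, c in zip(rows, cols)}
    print(group_rank, 'total distance:', cost[rows, cols].sum())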
Plane Smoothers for Multiblock Grids: Computational Aspects
NASA Technical Reports Server (NTRS)
Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane
1999-01-01
Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in the three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption for seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time-consuming computation of natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) a RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can take the place of current accidental eccentricity computations in seismic design.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators derived from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically the convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the globally optimal solution to the pruning problem, which always requires fewer than or, at most, the same number of arithmetic operations as the other feasible modalities. The DFTCOMM method therefore outperforms the existing competing pruning techniques in terms of attainable savings in the number of required arithmetic operations, requiring fewer than or at most the same number of operations for its execution as any other pruning method reported in the literature. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique is robust against such model uncertainties, in the sense of being insensitive to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
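The following sketch illustrates only the generic notion of a pruned DFT, i.e., evaluating just the output bins that are actually needed; it is not the DFTCOMM algorithm and makes no attempt at the commutation logic described above.

```python
# Illustrative sketch only: a "pruned" DFT that evaluates just the requested
# output bins directly, which is cheaper than a full FFT when only a few bins
# are needed. This is a generic pruning idea, not the DFTCOMM technique.
import numpy as np

def pruned_dft(x, bins):
    """Return DFT values X[k] only for k in `bins` (O(len(bins) * N) work)."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    return {k: np.dot(x, np.exp(-2j * np.pi * k * n / x.size)) for k in bins}

x = np.random.randn(1024)
wanted = [0, 3, 100]
approx = pruned_dft(x, wanted)
full = np.fft.fft(x)
assert all(np.isclose(approx[k], full[k]) for k in wanted)
```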
Benchmarking undedicated cloud computing providers for analysis of genomic datasets.
Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W
2014-01-01
A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
MSIX - A general and user-friendly platform for RAM analysis
NASA Astrophysics Data System (ADS)
Pan, Z. J.; Blemel, Peter
The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and evaluation of alternatives. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.
Resource Analysis of Cognitive Process Flow Used to Achieve Autonomy
2016-03-01
to be used as a decision-making aid to guide system designers and program managers not necessarily familiar with cognitive processing, or resource... implementing end-to-end cognitive processing flows multiplies and the impact of these design decisions on efficiency and effectiveness increases [1]. The... end-to-end cognitive systems and alternative computing technologies, then system design and acquisition personnel could make systematic analyses and
Natural Communication with Computers. Volume 1. Speech Understanding Research at BBN
1974-12-01
Signal Processing Concurrently with the incremental simulation experiments used to develop insights into the organization of the control component and... us to seek an alternative organization for our phonemic dictionary. There is also the potential problem of new words being used to name new places... organize the lexicon for maximization of efficient retrieval by taking advantage of phonetic, syntactic and semantic relationships. Work has already
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
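A minimal sketch of the rounding-based archiving alternative described above is given below; the function names, the dominance rule used within a grid cell, and the precision values are illustrative assumptions.

```python
# Minimal sketch of rounding-based archiving: round each objective to its
# desired precision before comparing a new solution with the archive, so at
# most one solution is kept per rounded objective vector.
def rounded_key(objectives, precisions):
    return tuple(round(f / eps) for f, eps in zip(objectives, precisions))

def archive_solution(archive, decision_vars, objectives, precisions):
    """Keep one solution per rounded grid cell; prefer the one that dominates."""
    key = rounded_key(objectives, precisions)
    incumbent = archive.get(key)
    if incumbent is None or all(f <= g for f, g in zip(objectives, incumbent[1])):
        archive[key] = (decision_vars, objectives)

archive = {}
archive_solution(archive, [0.2, 0.7], [1.234, 5.678], precisions=[0.01, 0.1])
archive_solution(archive, [0.3, 0.6], [1.231, 5.651], precisions=[0.01, 0.1])  # same cell, dominates
print(len(archive))  # 1
```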
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
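The sketch below only illustrates the general idea of a script-driven relational store for monitoring samples; the schema and column names are assumptions, and sqlite3 stands in for MySQL so the example is self-contained.

```python
# Hedged illustration of a script-driven metrics table. The schema is an
# assumption, not the one used at SLAC; sqlite3 is a stand-in for MySQL.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        host   TEXT NOT NULL,
        metric TEXT NOT NULL,
        value  REAL,
        ts     INTEGER NOT NULL   -- unix timestamp of the sample
    )""")
conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
             ("node01", "load_one", 0.42, int(time.time())))
rows = conn.execute("SELECT host, metric, value FROM metrics").fetchall()
print(rows)
```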
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
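A minimal sketch of the decomposition idea follows, assuming a simple linear 1-D interpolator in place of the registration-based control grid interpolator used by DMCGI.

```python
# Sketch of the decomposition idea: resize a 2-D image by applying a 1-D
# interpolator along the rows and then along the columns. np.interp is used
# only as a simple placeholder for the registration-based 1-D interpolator.
import numpy as np

def resize_1d(line, new_len):
    old = np.linspace(0.0, 1.0, line.size)
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, line)

def resize_2d(img, new_shape):
    rows = np.stack([resize_1d(r, new_shape[1]) for r in img])        # pass 1: rows
    cols = np.stack([resize_1d(c, new_shape[0]) for c in rows.T]).T   # pass 2: columns
    return cols

img = np.arange(16, dtype=float).reshape(4, 4)
print(resize_2d(img, (8, 8)).shape)   # (8, 8)
```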
Extension of a streamwise upwind algorithm to a moving grid system
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.
1990-01-01
A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.
Dunne, Paul; Tasker, Gary
1996-01-01
The reservoirs and pumping stations that comprise the Raritan River Basin water-supply system and its interconnections to the Delaware-Raritan Canal water-supply system, operated by the New Jersey Water Supply Authority (NJWSA), provide potable water to central New Jersey communities. The water reserve of this combined system can easily be depleted by an extended period of below-normal precipitation. Efficient operation of the combined system is vital to meeting the water-supply needs of central New Jersey. In an effort to improve the efficiency of the system operation, the U.S. Geological Survey (USGS), in cooperation with the NJWSA, has developed a computer model that provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system. This fact sheet describes the model, its technical basis, and its operation.
NASA Lewis Stirling SPRE testing and analysis with reduced number of cooler tubes
NASA Technical Reports Server (NTRS)
Wong, Wayne A.; Cairelli, James E.; Swec, Diane M.; Doeberling, Thomas J.; Lakatos, Thomas F.; Madi, Frank J.
1992-01-01
Free-piston Stirling power converters are candidates for high capacity space power applications. The Space Power Research Engine (SPRE), a free-piston Stirling engine coupled with a linear alternator, is being tested at the NASA Lewis Research Center in support of the Civil Space Technology Initiative. The SPRE is used as a test bed for evaluating converter modifications which have the potential to improve the converter performance and for validating computer code predictions. Reducing the number of cooler tubes on the SPRE has been identified as a modification with the potential to significantly improve power and efficiency. Experimental tests designed to investigate the effects of reducing the number of cooler tubes on converter power, efficiency and dynamics are described. Presented are test results from the converter operating with a reduced number of cooler tubes and comparisons between this data and both baseline test data and computer code predictions.
A Hydraulic Motor-Alternator System for Ocean-Submersible Vehicles
NASA Technical Reports Server (NTRS)
Aintablian, Harry O.; Valdez, Thomas I.; Jones, Jack A.
2012-01-01
An ocean-submersible vehicle has been developed at JPL that moves back and forth between sea level and a depth of a few hundred meters. A liquid volumetric change at a pressure of 70 bars is created by means of thermal phase change. During vehicle ascent, the phase-change material (PCM) is melted by the circulation of warm water and thus pressure is increased. During vehicle descent, the PCM is cooled resulting in reduced pressure. This pressure change is used to generate electric power by means of a hydraulic pump that drives a permanent magnet (PM) alternator. The output energy of the alternator is stored in a rechargeable battery that powers an on-board computer, instrumentation and other peripherals. The focus of this paper is the performance evaluation of a specific hydraulic motor-alternator system. Experimental and theoretical efficiency data of the hydraulic motor and the alternator are presented. The results are used to evaluate the optimization of the hydraulic motor-alternator system. The integrated submersible vehicle was successfully operated in the Pacific Ocean near Hawaii. A brief overview of the actual test results is presented.
Sparse networks of directly coupled, polymorphic, and functional side chains in allosteric proteins.
Soltan Ghoraie, Laleh; Burkowski, Forbes; Zhu, Mu
2015-03-01
Recent studies have highlighted the role of coupled side-chain fluctuations alone in the allosteric behavior of proteins. Moreover, examination of X-ray crystallography data has recently revealed new information about the prevalence of alternate side-chain conformations (conformational polymorphism), and attempts have been made to uncover the hidden alternate conformations from X-ray data. Hence, new computational approaches are required that consider the polymorphic nature of the side chains, and incorporate the effects of this phenomenon in the study of information transmission and functional interactions of residues in a molecule. These studies can provide a more accurate understanding of the allosteric behavior. In this article, we first present a novel approach to generate an ensemble of conformations and an efficient computational method to extract direct couplings of side chains in allosteric proteins, and provide sparse network representations of the couplings. We take the side-chain conformational polymorphism into account, and show that by studying the intrinsic dynamics of an inactive structure, we are able to construct a network of functionally crucial residues. Second, we show that the proposed method is capable of providing a magnified view of the coupled and conformationally polymorphic residues. This model reveals couplings between the alternate conformations of a coupled residue pair. To the best of our knowledge, this is the first computational method for extracting networks of side chains' alternate conformations. Such networks help in providing a detailed image of side-chain dynamics in functionally important and conformationally polymorphic sites, such as binding and/or allosteric sites. © 2014 Wiley Periodicals, Inc.
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
NASA Technical Reports Server (NTRS)
Metcalfe, A. G.; Bodenheimer, R. E.
1976-01-01
A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
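As a software analogue of the counting task described above, the sketch below counts the logic-1 elements of a binary array with a pairwise (tree) reduction, the kind of combinational structure a hardware parallel counter implements; it is not the Tse implementation itself.

```python
# Count the logic-1 elements of a binary array with a tree reduction:
# each "level" sums adjacent pairs, mirroring a combinational counter.
import numpy as np

def tree_count(bits):
    vals = list(np.asarray(bits, dtype=int).ravel())
    while len(vals) > 1:
        if len(vals) % 2:
            vals.append(0)
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0]

image = np.random.randint(0, 2, size=(8, 8))
assert tree_count(image) == int(image.sum())
```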
A universal quantum module for quantum communication, computation, and metrology
NASA Astrophysics Data System (ADS)
Hanks, Michael; Lo Piparo, Nicolò; Trupke, Michael; Schmiedmayer, Jorg; Munro, William J.; Nemoto, Kae
2017-08-01
In this work, we describe a simple module that could be ubiquitous for quantum-information-based applications. The basic module comprises a single NV- center in diamond embedded in an optical cavity, where the cavity mediates interactions between photons and the electron spin (enabling entanglement distribution and efficient readout), while the nuclear spins constitute long-lived quantum memories capable of storing and processing quantum information. We discuss how a network of connected modules can be used for distributed metrology, communication and computation applications. Finally, we investigate the possible use of alternative diamond centers (SiV/GeV) within the module and illustrate potential advantages.
γ5 in the four-dimensional helicity scheme
NASA Astrophysics Data System (ADS)
Gnendiger, C.; Signer, A.
2018-05-01
We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.
Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy
Ackermann, Marko; van den Bogert, Antonie J.
2012-01-01
The investigation of gait strategies at low gravity environments gained momentum recently as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to the study of either walking or running, the locomotion modes preferred under Earth's gravity. In order to investigate the gait strategies likely to be favored at low gravity a series of predictive, computational simulations of gait are performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy is utilized allowing for multiple simulations. The results reveal skipping as more efficient and less fatiguing than walking or running and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. The results are expected to serve as a background to the design of experimental investigations of gait under simulated low gravity. PMID:22365845
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
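A hedged sketch of the proportional distribution step that gives PTO its name is shown below; the element-wise measure, bounds, and loop structure are simplified assumptions, and the finite-element analysis that would supply the measure is omitted.

```python
# Sketch of the proportional distribution step at the core of PTO (not the
# full MATLAB programs): material is assigned to elements in proportion to an
# element-wise quantity (e.g., compliance or stress), clipped to density
# bounds, until the target amount of material is placed. Assumes a positive
# measure for every element.
import numpy as np

def proportional_update(measure, target_material, x_min=0.0, x_max=1.0, iters=50):
    """Distribute `target_material` over elements proportionally to `measure`."""
    x = np.zeros_like(measure, dtype=float)
    remaining = target_material
    for _ in range(iters):
        free = x < x_max
        if remaining <= 1e-12 or not free.any():
            break
        share = remaining * measure[free] / measure[free].sum()
        x[free] = np.clip(x[free] + share, x_min, x_max)
        remaining = target_material - x.sum()
    return x

elem_compliance = np.array([5.0, 1.0, 0.5, 3.5])
print(proportional_update(elem_compliance, target_material=2.0))
```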
An installed nacelle design code using a multiblock Euler solver. Volume 1: Theory document
NASA Technical Reports Server (NTRS)
Chen, H. C.
1992-01-01
An efficient multiblock Euler design code was developed for designing a nacelle installed on geometrically complex airplane configurations. This approach employed a design driver based on a direct iterative surface curvature method developed at LaRC. A general multiblock Euler flow solver was used for computing flow around complex geometries. The flow solver used a finite-volume formulation with explicit time-stepping to solve the Euler equations. It used a multiblock version of the multigrid method to accelerate the convergence of the calculations. The design driver successively updated the surface geometry to reduce the difference between the computed and target pressure distributions. In the flow solver, the change in surface geometry was simulated by applying surface transpiration boundary conditions to avoid repeated grid generation during design iterations. Smoothness of the designed surface was ensured by alternate application of streamwise and circumferential smoothings. The capability and efficiency of the code were demonstrated through the design of both an isolated nacelle and an installed nacelle at various flow conditions. Information on the execution of the computer program is provided in volume 2.
Artificial Neuron Based on Integrated Semiconductor Quantum Dot Mode-Locked Lasers
NASA Astrophysics Data System (ADS)
Mesaritakis, Charis; Kapsalis, Alexandros; Bogris, Adonis; Syvridis, Dimitris
2016-12-01
Neuro-inspired implementations have attracted strong interest as a power efficient and robust alternative to the digital model of computation with a broad range of applications. Especially, neuro-mimetic systems able to produce and process spike-encoding schemes can offer merits like high noise-resiliency and increased computational efficiency. Towards this direction, integrated photonics can be an auspicious platform due to its multi-GHz bandwidth, its high wall-plug efficiency and the strong similarity of its dynamics under excitation with biological spiking neurons. Here, we propose an integrated all-optical neuron based on an InAs/InGaAs semiconductor quantum-dot passively mode-locked laser. The multi-band emission capabilities of these lasers allow, through waveband switching, the emulation of the excitation and inhibition modes of operation. Frequency-response effects, similar to biological neural circuits, are observed just as in a typical two-section excitable laser. The demonstrated optical building block can pave the way for high-speed photonic integrated systems able to address tasks ranging from pattern recognition to cognitive spectrum management and multi-sensory data processing.
Molecular Sticker Model Simulation on Silicon for a Maximum Clique Problem
Ning, Jianguo; Li, Yanmei; Yu, Wen
2015-01-01
Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing sources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time. PMID:26075867
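For reference, a brute-force maximum clique search on an 8-vertex graph is small enough to write directly; the sketch below uses a made-up edge set and a bit encoding of the winning subset, loosely echoing the bit-vertical encoding mentioned above, and is not the DEM/SOPC implementation.

```python
# Toy illustration of the target problem: brute-force maximum clique on an
# 8-vertex graph, with the winning vertex subset also shown as an 8-bit word.
from itertools import combinations

edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (5, 6), (6, 7)}
adj = {v: {u for a, b in edges for u in ((b,) if a == v else (a,) if b == v else ())}
       for v in range(8)}

def is_clique(vertices):
    return all(b in adj[a] for a, b in combinations(vertices, 2))

best = max((s for r in range(1, 9) for s in combinations(range(8), r) if is_clique(s)),
           key=len)
print(best, "encoded as", format(sum(1 << v for v in best), "08b"))
```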
Processing mechanics of alternate twist ply (ATP) yarn technology
NASA Astrophysics Data System (ADS)
Elkhamy, Donia Said
Ply yarns are important in many textile manufacturing processes and various applications. The primary process used for producing ply yarns is cabling. The speed of cabling is limited to about 35 m/min. With the world's increasing demand for ply yarn, cabling is incompatible with today's demand-activated manufacturing strategies. The Alternate Twist Ply (ATP) yarn technology is a relatively new process for producing ply yarns with improved productivity and flexibility. This technology involves self-plying of twisted singles yarn to produce ply yarn. The ATP process can run more than ten times faster than cabling. Implementing the ATP process for ply yarn production raises two major quality issues: achieving a uniform twist profile and a high yarn twist efficiency. The goal of this thesis is to address these issues through process modeling based on understanding the physics and processing mechanics of the ATP yarn system. In our study we determine the main parameters that control the yarn twist profile. Process modeling of the yarn twist across different process zones was done. A computational model was designed to predict the process parameters required to achieve a square wave twist profile. Twist efficiency, a measure of yarn torsional stability and bulk, is determined by the ratio of ply yarn twist to singles yarn twist. Response Surface Methodology was used to develop the processing window that can reproduce ATP yarns with high twist efficiency. Equilibrium conditions of the tensions and torques acting on the yarns at the self-ply point were analyzed to determine the pathway for achieving higher twist efficiency. Mechanistic modeling relating the equilibrium conditions to the twist efficiency was developed. A static tester was designed to zoom into the self-ply zone of the ATP yarn. A computer-controlled, prototypic ATP machine was constructed and confirmed the mechanistic model results. Optimum parameters achieving maximum twist efficiency were determined in this study. The successful results of this work have led to the filing of a US patent disclosing a method for producing ATP yarns with high twist efficiency by using a high convergence angle at the self-ply point together with an applied ply torque.
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data sets are under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge-guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is solved efficiently by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper obtains the true edges while keeping errors within an acceptable range. Based on comparisons on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality than the non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. By combining weighted alternating direction TV minimization with edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
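A minimal sketch of an edge-weighted TV term, in which gradient magnitudes at detected edges are penalized less, is given below; the weighting rule and threshold are illustrative choices rather than the exact EGTVM formulation.

```python
# Minimal sketch of an edge-guided (weighted) total-variation term: gradient
# magnitudes are down-weighted where an edge map flags an edge, so true edges
# are penalized less. The weighting rule is an illustrative choice only.
import numpy as np

def weighted_tv(image, edge_map, edge_weight=0.1):
    dx = np.diff(image, axis=1, append=image[:, -1:])
    dy = np.diff(image, axis=0, append=image[-1:, :])
    grad_mag = np.sqrt(dx ** 2 + dy ** 2)
    w = np.where(edge_map, edge_weight, 1.0)   # relax the penalty on detected edges
    return float((w * grad_mag).sum())

img = np.zeros((32, 32)); img[:, 16:] = 1.0            # a sharp vertical edge
edges = np.abs(np.diff(img, axis=1, append=img[:, -1:])) > 0.5
print(weighted_tv(img, edges), weighted_tv(img, np.zeros_like(edges, dtype=bool)))
```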
NASA Technical Reports Server (NTRS)
Daywitt, J.; Kutler, P.; Anderson, D.
1977-01-01
The technique of floating shock fitting is adapted to the computation of the inviscid flowfield about circular cones in a supersonic free stream at angles of attack that exceed the cone half-angle. The resulting equations are applicable over the complete range of free-stream Mach numbers, angles of attack and cone half-angles for which the bow shock is attached. A finite difference algorithm is used to obtain the solution by an unsteady relaxation approach. The bow shock, embedded cross-flow shock, and vortical singularity in the leeward symmetry plane are treated as floating discontinuities in a fixed computational mesh. Where possible, the flowfield is partitioned into windward, shoulder, and leeward regions with each region computed separately to achieve maximum computational efficiency. An alternative shock fitting technique which treats the bow shock as a computational boundary is developed and compared with the floating-fitting approach. Several surface boundary condition schemes are also analyzed.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
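For context, the classical ML-EM update for emission tomography is sketched below so that the diagonal scaling x / (A^T 1), the quantity referred to above as the EM-preconditioner, is explicit; this is not the PAPA algorithm, and the TV proximity and projection steps are omitted.

```python
# Context sketch only: the classical ML-EM update for emission tomography,
# written so the EM-style diagonal scaling x / (A^T 1) is visible. The TV
# proximity steps and alternating projections of PAPA are not reproduced.
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])            # A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)             # data / forward projection
        x = (x / sensitivity) * (A.T @ ratio)          # EM-preconditioned update
    return x

A = np.abs(np.random.rand(40, 16))
x_true = np.abs(np.random.rand(16))
y = A @ x_true                                         # noiseless "measurements"
print(np.linalg.norm(mlem(A, y) - x_true))
```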
A hydrological emulator for global applications – HE v1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., the correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling–Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed one and 7 orders higher than that of the VIC model. A case study of uncertainty analysis for the world's 16 basins with the largest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Lastly, our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
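A hedged sketch of one monthly step of the abcd water-balance model underlying the emulator is given below, following the common Thomas (1981) formulation; the parameter values and forcing numbers are illustrative only.

```python
# One monthly step of the abcd model (common Thomas 1981 formulation).
# Parameters a, b, c, d and the states/forcings (in mm) are illustrative.
import math

def abcd_step(precip, pet, soil, ground, a=0.98, b=400.0, c=0.5, d=0.1):
    w = precip + soil                                   # available water
    y = (w + b) / (2 * a) - math.sqrt(((w + b) / (2 * a)) ** 2 - w * b / a)
    soil_new = y * math.exp(-pet / b)                   # end-of-month soil moisture
    surplus = w - y
    ground_new = (ground + c * surplus) / (1 + d)       # groundwater storage
    runoff = (1 - c) * surplus + d * ground_new         # direct runoff + baseflow
    return runoff, soil_new, ground_new

soil, ground, flows = 100.0, 50.0, []
for p, e in [(120, 60), (80, 90), (30, 110)]:           # three months of (P, PET)
    q, soil, ground = abcd_step(p, e, soil, ground)
    flows.append(round(q, 1))
print(flows)
```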
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparisons of experimental results indicate that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
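The linear-time shrinkage operation mentioned above is, in TV and compressed-sensing solvers, typically elementwise soft-thresholding; a generic version is sketched below for reference (it is not code from the paper).

```python
# Generic soft-thresholding (shrinkage) operator, the proximal map of the
# scaled l1 norm: each entry is shrunk toward zero by tau.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Shrinks [-2, -0.3, 0, 0.5, 3] to approximately [-1.5, 0, 0, 0, 2.5].
print(soft_threshold(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), tau=0.5))
```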
Discrete Fourier Transform Analysis in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2009-01-01
Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus N log(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
Lattice surgery on the Raussendorf lattice
NASA Astrophysics Data System (ADS)
Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco
2018-07-01
Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
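A minimal sketch of the IT-TI aggregation idea follows, assuming a set of placeholder free energy estimates from independent runs.

```python
# Aggregate several independent TI estimates into a mean and standard error,
# rather than trusting a single trajectory. The numbers are placeholders.
import statistics

def aggregate_itti(estimates_kcal):
    mean = statistics.fmean(estimates_kcal)
    sem = statistics.stdev(estimates_kcal) / len(estimates_kcal) ** 0.5
    return mean, sem

independent_runs = [-9.8, -10.6, -9.1, -10.9, -10.2]     # hypothetical dG values
mean, sem = aggregate_itti(independent_runs)
print(f"dG = {mean:.1f} +/- {sem:.1f} kcal/mol")
```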
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme that alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
Analysis of multigrid methods on massively parallel computers: Architectural implications
NASA Technical Reports Server (NTRS)
Matheson, Lesley R.; Tarjan, Robert E.
1993-01-01
We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
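A toy illustration of the alternating-directions construction is shown below, assuming Gaussian 1-D correlations and arbitrary grid sizes; it only demonstrates how small 1-D matrices combine through a Kronecker product instead of decomposing the full matrix directly.

```python
# Build 1-D correlation matrices for each coordinate direction and combine
# them via a Kronecker product. Correlation lengths and grid sizes are
# arbitrary choices for the example.
import numpy as np

def corr_1d(n, length_scale):
    idx = np.arange(n)
    return np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)

Cx, Cy, Cz = corr_1d(8, 2.0), corr_1d(6, 3.0), corr_1d(4, 1.5)
C_full = np.kron(Cz, np.kron(Cy, Cx))      # (8*6*4) x (8*6*4) correlation matrix
print(C_full.shape)                        # (192, 192)

# The eigen-decompositions needed for the low-rank factors come from the
# small 1-D matrices rather than from the large matrix itself.
eigvals = [np.linalg.eigvalsh(C) for C in (Cx, Cy, Cz)]
print([len(v) for v in eigvals])           # [8, 6, 4]
```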
Simulation and Experimental Study of Metal Organic Frameworks Used in Adsorption Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenks, Jeromy J.; Motkuri, Radha K.; TeGrotenhuis, Ward
2016-10-11
Metal-organic frameworks (MOFs) have attracted enormous interest over the past few years in energy storage and gas separation, yet there have been few reports on adsorption cooling applications. Adsorption cooling technology is an established alternative to mechanical vapor compression refrigeration systems and is an excellent alternative in industrial environments where waste heat is available. We explored the use of MOFs that have very high mass loading and relatively low heats of adsorption, with certain combinations of refrigerants, to demonstrate a new type of highly efficient adsorption chiller. Computational fluid dynamics combined with a system-level lumped-parameter model have been used to project size and performance for chillers with a cooling capacity ranging from a few kW to several thousand kW. These systems rely on stacked micro/mini-scale architectures to enhance heat and mass transfer. Recent computational studies of an adsorption chiller based on MOFs suggest that a thermally driven coefficient of performance greater than one may be possible, which would represent a fundamental breakthrough in the performance of adsorption chiller technology. Presented herein are computational and experimental results for hydrophilic and fluorophilic MOFs.
Degree of quantum correlation required to speed up a computation
NASA Astrophysics Data System (ADS)
Kay, Alastair
2015-12-01
The one-clean-qubit model of quantum computation (DQC1) efficiently implements a computational task that is not known to have a classical alternative. During the computation, there is never more than a small but finite amount of entanglement present, and it is typically vanishingly small in the system size. In this paper, we demonstrate that there is nothing unexpected hidden within the DQC1 model—Grover's search, when acting on a mixed state, provably exhibits a speedup over classical, with guarantees as to the presence of only vanishingly small amounts of quantum correlations (entanglement and quantum discord)—while arguing that this is not an artifact of the oracle-based construction. We also present some important refinements in the evaluation of how much entanglement may be present in the DQC1 and how the typical entanglement of the system must be evaluated.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
All-Particle Multiscale Computation of Hypersonic Rarefied Flow
NASA Astrophysics Data System (ADS)
Jun, E.; Burt, J. M.; Boyd, I. D.
2011-05-01
This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.
Scalable Faceted Ranking in Tagging Systems
NASA Astrophysics Data System (ADS)
Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.
Nowadays, web collaborative tagging systems which allow users to upload, comment on and recommend contents, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged-links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
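A minimal sketch of the two-step scheme follows, with made-up offline per-tag scores and a simple additive merge standing in for whichever merging rule the paper uses.

```python
# Faceted ranking in two steps: per-tag scores are assumed precomputed
# offline (made-up numbers here); the facet ranking is merged online.
from collections import defaultdict

offline_scores = {                      # tag -> {user: offline (PageRank-like) score}
    "guitar": {"alice": 0.40, "bob": 0.35, "carol": 0.25},
    "jazz":   {"alice": 0.20, "dave": 0.55, "carol": 0.25},
}

def faceted_ranking(facet_tags, scores):
    merged = defaultdict(float)
    for tag in facet_tags:
        for user, s in scores.get(tag, {}).items():
            merged[user] += s           # simple additive merge of per-tag rankings
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

print(faceted_ranking({"guitar", "jazz"}, offline_scores))
```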
N S Andreasen Struijk, Lotte; Lontis, Eugen R; Gaihede, Michael; Caltenco, Hector A; Lund, Morten Enemark; Schioeler, Henrik; Bentsen, Bo
2017-08-01
Individuals with tetraplegia depend on alternative interfaces in order to control computers and other electronic equipment. Current interfaces are often limited in the number of available control commands, and may compromise the social identity of an individual due to their undesirable appearance. The purpose of this study was to implement an alternative computer interface that was fully embedded into the oral cavity and that provided multiple control commands. The development of a wireless, intraoral, inductive tongue computer is described. The interface encompassed a 10-key keypad area and a mouse pad area, and the system was embedded wirelessly into the oral cavity of the user. The functionality of the system was demonstrated in two tetraplegic individuals and two able-bodied individuals. The system was invisible during use and allowed the user to type on a computer using either the keypad area or the mouse pad area. The maximal typing rate was 1.8 s for repetitively typing a correct character with the keypad area and 1.4 s with the mouse pad area. The results suggest that this inductive tongue computer interface provides an esthetically acceptable and functionally efficient environmental control for a severely disabled user. Implications for rehabilitation: new design, implementation and detection methods for intraoral assistive devices; demonstration of wireless powering and encapsulation techniques suitable for intraoral embedment of assistive devices; and demonstration of the functionality of a rechargeable and fully embedded intraoral tongue-controlled computer input device.
Alternative Fuels Data Center: Ten Ways You Can Implement Alternative Fuels and Energy-Efficient Vehicle Technologies
Circling motion and screen edges as an alternative input method for on-screen target manipulation.
Ka, Hyun W; Simpson, Richard C
2017-04-01
To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around the target. To specify a desired pointing command with the circling interface, each edge of the screen is used: the user selects a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). The circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse, and its accuracy outperformed the dwelling interface. These results demonstrate that the circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for rehabilitation: a circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking; the circling interface can also work with AAC devices.
Recurrent neural network based virtual detection line
NASA Astrophysics Data System (ADS)
Kadikis, Roberts
2018-04-01
The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.
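The following sketch illustrates the general idea of feeding only the detection-line pixels, frame by frame, to a recurrent network that scores crossings. The layer sizes, the GRU choice, and the sigmoid output head are assumptions for illustration, not the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class DetectionLineRNN(nn.Module):
        """Toy version of the idea: the RGB pixels on a virtual detection line
        are fed frame by frame to a recurrent network that outputs a per-frame
        crossing score."""
        def __init__(self, line_width, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(input_size=line_width * 3, hidden_size=hidden,
                              batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, line_pixels):
            # line_pixels: (batch, frames, line_width * 3) values of the line.
            out, _ = self.rnn(line_pixels)
            return torch.sigmoid(self.head(out)).squeeze(-1)  # score per frame

    model = DetectionLineRNN(line_width=120)
    scores = model(torch.rand(1, 200, 120 * 3))   # 200 frames of a 120-pixel line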
Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng
2013-06-01
The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that the proposed scheme could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only retains a lower computational cost and does not require verifier tables for storing users' secrets, but also solves the security problems in previous schemes and withstands possible attacks.
1983-12-01
while at the same time improving its operational efficiency. Through their integration and use, System Program Managers have a comprehensive analytical... systems. The NRLA program is hosted on the CREATE Operating System and contains approximately 5500 lines of computer code. It consists of a main...associated with C alternative maintenance plans. As the technological complexity of weapons systems has increased, new and innovative logistical support
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
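A minimal sketch of the scheduling idea is given below: jobs are ordered by predicted runtime and packed greedily into fixed-size cloud instances. The first-fit packing rule, function names, and toy runtime model are illustrative assumptions; the paper's model is fitted to genome size and complexity and drives Elastic MapReduce job submission.

    def schedule_jobs(genome_pairs, runtime_model, instance_hours):
        """Order genome-to-genome comparisons by predicted runtime (longest
        first) and greedily pack them into fixed-size instances (first-fit)."""
        jobs = sorted(genome_pairs, key=runtime_model, reverse=True)
        instances = []                       # each entry: [remaining_hours, [jobs]]
        for job in jobs:
            hours = runtime_model(job)
            for inst in instances:
                if inst[0] >= hours:         # first instance with enough room
                    inst[0] -= hours
                    inst[1].append(job)
                    break
            else:
                instances.append([instance_hours - hours, [job]])
        return instances

    # Usage with a toy runtime model (a constant half hour per comparison).
    packed = schedule_jobs([("g1", "g2"), ("g1", "g3")],
                           lambda pair: 0.5, instance_hours=10)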
Schettini, Francesca; Riccio, Angela; Simione, Luca; Liberati, Giulia; Caruso, Mario; Frasca, Vittorio; Calabrese, Barbara; Mecella, Massimo; Pizzimenti, Alessia; Inghilleri, Maurizio; Mattia, Donatella; Cincotti, Febo
2015-03-01
To evaluate the feasibility and usability of an assistive technology (AT) prototype designed to be operated with conventional/alternative input channels and a P300-based brain-computer interface (BCI) in order to provide users who have different degrees of muscular impairment resulting from amyotrophic lateral sclerosis (ALS) with communication and environmental control applications. Proof-of-principle study with a convenience sample. An apartment-like space designed to be fully accessible by people with motor disabilities for occupational therapy, placed in a neurologic rehabilitation hospital. End-users with ALS (N=8; 5 men, 3 women; mean age ± SD, 60 ± 12 y) recruited by a clinical team from an ALS center. Three experimental conditions based on (1) a widely validated P300-based BCI alone; (2) the AT prototype operated by a conventional/alternative input device tailored to the specific end-user's residual motor abilities; and (3) the AT prototype accessed by a P300-based BCI. These 3 conditions were presented to all participants in 3 different sessions. System usability was evaluated in terms of effectiveness (accuracy), efficiency (written symbol rate, time for correct selection, workload), and end-user satisfaction (overall satisfaction) domains. A comparison of the data collected in the 3 conditions was performed. Effectiveness and end-user satisfaction did not significantly differ among the 3 experimental conditions. Condition 3 was less efficient than condition 2, as expressed by the longer time for correct selection. A BCI can be used as an input channel to access an AT by persons with ALS, with no significant reduction in usability. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
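For reference, the sketch below shows the standard serial Thomas algorithm for a tridiagonal system; it is the scalar baseline that vector-oriented algorithms such as odd-even cyclic reduction are measured against, not the Cyber 205 implementation described above.

    import numpy as np

    def thomas(a, b, c, d):
        """Serial Thomas algorithm for a tridiagonal system.
        a: sub-diagonal, b: main diagonal, c: super-diagonal, d: right-hand side,
        all length n (a[0] and c[-1] are ignored). The strictly sequential
        recurrences are what make this algorithm hard to vectorize."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x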
Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues
NASA Astrophysics Data System (ADS)
Chakravarthy, Srinivas R.; Rumyantsev, Alexander
2018-03-01
Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise or community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization both in cloud computing and in desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of fork-join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull distributions. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
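A small Monte Carlo sketch of the replication trade-off is given below: each work unit is sent to several servers and the fastest copy wins. The shifted-exponential service times follow one of the distributions mentioned above, while the arrival process and cancellation overheads studied in the paper are ignored; all names and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_latency(replicas, n_requests=100_000, shift=0.1, scale=1.0):
        """Estimate mean work-unit latency when each unit is replicated on
        `replicas` servers and the fastest copy completes the job."""
        services = shift + rng.exponential(scale, size=(n_requests, replicas))
        return services.min(axis=1).mean()

    # More replicas cut latency but consume extra capacity; the optimal level
    # of redundancy depends on the service-time distribution.
    for r in (1, 2, 4):
        print(r, round(mean_latency(r), 3))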
Fractional Steps methods for transient problems on commodity computer architectures
NASA Astrophysics Data System (ADS)
Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.
2008-12-01
Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specially addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
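The sketch below shows one Locally One-Dimensional time step for 3-D diffusion as three implicit 1-D sweeps, each reduced to a batch of tridiagonal solves. It assumes a cubic grid, constant coefficients, and SciPy's banded solver, and is meant only to illustrate why the scheme is memory-light compared with a fully 3-D implicit solve.

    import numpy as np
    from scipy.linalg import solve_banded

    def lod_diffusion_step(u, dt, dx, kappa=1.0):
        """One LOD time step for 3-D diffusion on a cubic grid: the 3-D operator
        is split into three implicit 1-D sweeps (x, y, z), each a set of
        independent tridiagonal solves (homogeneous Dirichlet-like ends)."""
        n = u.shape[0]                              # assumes a cube: n x n x n
        r = kappa * dt / dx**2
        ab = np.zeros((3, n))                       # banded form of I - r*d2/dx2
        ab[0, 1:] = -r                              # super-diagonal
        ab[1, :] = 1.0 + 2.0 * r                    # main diagonal
        ab[2, :-1] = -r                             # sub-diagonal

        def sweep(v, axis):
            v = np.moveaxis(v, axis, 0)             # bring sweep axis to the front
            flat = v.reshape(n, -1)                 # many right-hand sides at once
            flat = solve_banded((1, 1), ab, flat)
            return np.moveaxis(flat.reshape(v.shape), 0, axis)

        for axis in range(3):                       # x, y, z sweeps in sequence
            u = sweep(u, axis)
        return u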
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
Comparison of real and computer-simulated outcomes of LASIK refractive surgery
NASA Astrophysics Data System (ADS)
Cano, Daniel; Barbero, Sergio; Marcos, Susana
2004-06-01
Computer simulations of alternative LASIK ablation patterns were performed for corneal elevation maps of 13 real myopic corneas (range of myopia, -2.0 to -11.5 D). The computationally simulated ablation patterns were designed with biconic surfaces (standard Munnerlyn pattern, parabolic pattern, and biconic pattern) or with aberrometry measurements (customized pattern). Simulated results were compared with real postoperative outcomes. Standard LASIK refractive surgery for myopia increased corneal asphericity and spherical aberration. Computations with the theoretical Munnerlyn ablation pattern did not increase the corneal asphericity and spherical aberration. The theoretical parabolic pattern induced a slight increase of asphericity and spherical aberration, explaining only 40% of the clinically found increase. The theoretical biconic pattern controlled corneal spherical aberration. Computations showed that the theoretical customized pattern can correct high-order asymmetric aberrations. Simulations of changes in efficiency due to reflection and nonnormal incidence of the laser light showed a further increase in corneal asphericity. Consideration of these effects with a parabolic pattern accounts for 70% of the clinical increase in asphericity.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
On automating domain connectivity for overset grids
NASA Technical Reports Server (NTRS)
Chiu, Ing-Tsau; Meakin, Robert L.
1995-01-01
An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity, and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems. A dynamic hole expansion/contraction algorithm is also implemented to obtain optimum domain connectivity; however, it is tested only for geometry of generic shapes.
An efficient three-dimensional Poisson solver for SIMD high-performance-computing architectures
NASA Technical Reports Server (NTRS)
Cohl, H.
1994-01-01
We present an algorithm that solves the three-dimensional Poisson equation on a cylindrical grid. The technique uses a finite-difference scheme with operator splitting. This splitting maps the banded structure of the operator matrix into a two-dimensional set of tridiagonal matrices, which are then solved in parallel. Our algorithm couples FFT techniques with the well-known ADI (Alternating Direction Implicit) method for solving elliptic PDEs, and the implementation is extremely well suited for a massively parallel environment like the SIMD architecture of the MasPar MP-1. Due to the highly recursive nature of our problem, we believe that our method is highly efficient, as it avoids excessive interprocessor communication.
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'black box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As a demonstration of the potential of the methodologies, several solutions involving reacting and perfect gas flows are presented, including a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.
NASA Astrophysics Data System (ADS)
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
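The sketch below illustrates the sum-of-outer-products block coordinate descent idea for dictionary learning: each atom and its coefficient row are updated in closed form against the current residual. The hard-thresholding rule and function name are assumptions for illustration and do not reproduce the authors' exact algorithm or its convergence safeguards.

    import numpy as np

    def soup_dil_step(Y, D, C, lam):
        """One pass of block coordinate descent over dictionary atoms: Y is
        approximated by D @ C, written as a sum of rank-one outer products, and
        each (atom D[:, j], coefficient row C[j, :]) pair is updated in turn."""
        n_atoms = D.shape[1]
        for j in range(n_atoms):
            # Residual with the contribution of atom j added back in.
            E = Y - D @ C + np.outer(D[:, j], C[j, :])
            # Sparse coefficient update: hard-threshold the correlations.
            c = E.T @ D[:, j]
            c[np.abs(c) < lam] = 0.0
            # Atom update: normalized residual direction (keep old atom if c == 0).
            if np.any(c):
                d = E @ c
                D[:, j] = d / np.linalg.norm(d)
            C[j, :] = c
        return D, C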
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
Fragment informatics and computational fragment-based drug design: an overview and update.
Sheng, Chunquan; Zhang, Wannian
2013-05-01
Fragment-based drug design (FBDD) is a promising approach for the discovery and optimization of lead compounds. Despite its successes, FBDD also faces some internal limitations and challenges. FBDD requires a high quality of target protein and good solubility of fragments. Biophysical techniques for fragment screening necessitate expensive detection equipment and the strategies for evolving fragment hits to leads remain to be improved. Regardless, FBDD is necessary for investigating larger chemical space and can be applied to challenging biological targets. In this scenario, cheminformatics and computational chemistry can be used as alternative approaches that can significantly improve the efficiency and success rate of lead discovery and optimization. Cheminformatics and computational tools assist FBDD in a very flexible manner. Computational FBDD can be used independently or in parallel with experimental FBDD for efficiently generating and optimizing leads. Computational FBDD can also be integrated into each step of experimental FBDD and help to play a synergistic role by maximizing its performance. This review will provide critical analysis of the complementarity between computational and experimental FBDD and highlight recent advances in new algorithms and successful examples of their applications. In particular, fragment-based cheminformatics tools, high-throughput fragment docking, and fragment-based de novo drug design will provide the focus of this review. We will also discuss the advantages and limitations of different methods and the trends in new developments that should inspire future research. © 2012 Wiley Periodicals, Inc.
Global-Context Based Salient Region Detection in Nature Images
NASA Astrophysics Data System (ADS)
Bao, Hong; Xu, De; Tang, Yingjun
Visual saliency detection provides an alternative methodology to image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in nature images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into saliency detection, so that the most salient region can be easily selected as the one that is globally most isolated. The proposed approach intrinsically provides an alternative methodology to model attention with low implementation complexity. Experiments show that our approach achieves much better performance than the existing state-of-the-art methods.
A new perspective on the perceptual selectivity of attention under load.
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren
2014-05-01
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet -- and perhaps will never -- be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is both computationally and communication-wise efficient. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
Emulation of complex open quantum systems using superconducting qubits
NASA Astrophysics Data System (ADS)
Mostame, Sarah; Huh, Joonsuk; Kreisbeck, Christoph; Kerman, Andrew J.; Fujita, Takatoshi; Eisfeld, Alexander; Aspuru-Guzik, Alán
2017-02-01
With quantum computers being out of reach for now, quantum simulators are alternative devices for efficient and accurate simulation of problems that are challenging to tackle using conventional computers. Quantum simulators are classified into analog and digital, with the possibility of constructing "hybrid" simulators by combining both techniques. Here we focus on analog quantum simulators of open quantum systems and address the limit at which they can beat classical computers. In particular, as an example, we discuss simulation of the chlorosome light-harvesting antenna from green sulfur bacteria with over 250 phonon modes coupled to each electronic state. Furthermore, we propose physical setups that can be used to reproduce the quantum dynamics of a standard and multiple-mode Holstein model. The proposed scheme is based on currently available technology of superconducting circuits consisting of flux qubits and quantum oscillators.
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for primitive variables. This method combines the best features of data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a better computational efficiency of at least one order of magnitude over the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
A prototype computer-aided modelling tool for life-support system models
NASA Technical Reports Server (NTRS)
Preisig, H. A.; Lee, Tae-Yeong; Little, Frank
1990-01-01
Based on the canonical decomposition of physical-chemical-biological systems, a prototype kernel has been developed to efficiently model alternative life-support systems. It supports (1) the work in an interdisciplinary group through an easy-to-use mostly graphical interface, (2) modularized object-oriented model representation, (3) reuse of models, (4) inheritance of structures from model object to model object, and (5) model data base. The kernel is implemented in Modula-II and presently operates on an IBM PC.
NASA Technical Reports Server (NTRS)
Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.
1990-01-01
Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.
Optimal Embedding for Shape Indexing in Medical Image Databases
Qian, Xiaoning; Tagare, Hemant D.; Fulbright, Robert K.; Long, Rodney; Antani, Sameer
2010-01-01
This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine x-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape-similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding to the performance of indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding. PMID:20163981
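A rough sketch of the embed-then-index idea follows: pairwise Procrustes disparities are turned into Euclidean coordinates by classical multidimensional scaling and then indexed with a k-d tree. This generic MDS embedding stands in for the paper's optimal embedding, and the helper names and target dimension are assumptions.

    import numpy as np
    from scipy.spatial import procrustes, cKDTree

    def embed_shapes(shapes, dim=10):
        """Embed a set of landmark shapes into a Euclidean space via classical
        MDS on pairwise Procrustes disparities, then index them for retrieval
        with a coordinate-based tree."""
        n = len(shapes)
        D2 = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                _, _, disparity = procrustes(shapes[i], shapes[j])
                D2[i, j] = D2[j, i] = disparity      # squared-distance proxy
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ D2 @ J                        # double-centered Gram matrix
        w, V = np.linalg.eigh(B)
        order = np.argsort(w)[::-1][:dim]            # leading eigen-directions
        X = V[:, order] * np.sqrt(np.clip(w[order], 0, None))
        return X, cKDTree(X)                         # coordinates + search index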
An Autonomous Underwater Recorder Based on a Single Board Computer.
Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson
2015-01-01
As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Designing and building more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a successful model of a highly flexible, low-cost alternative to commercial recorders was built based on a Raspberry Pi single board computer. A properly working prototype was assembled and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated; it is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and man-made acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantage offered by the proposed device is its high customizability, lower development time and, inherently, its cost.
Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D
2017-06-05
Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-08-01
ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
Swellix: a computational tool to explore RNA conformational space.
Sloat, Nathan; Liu, Jui-Wen; Schroeder, Susan J
2017-11-21
The sequence of nucleotides in an RNA determines the possible base pairs for an RNA fold and thus also determines the overall shape and function of an RNA. The Swellix program presented here combines a helix abstraction with a combinatorial approach to the RNA folding problem in order to compute all possible non-pseudoknotted RNA structures for RNA sequences. The Swellix program builds on the Crumple program and can include experimental constraints on global RNA structures such as the minimum number and lengths of helices from crystallography, cryoelectron microscopy, or in vivo crosslinking and chemical probing methods. The conceptual advance in Swellix is to count helices and generate all possible combinations of helices rather than counting and combining base pairs. Swellix bundles similar helices and includes improvements in memory use and efficient parallelization. Biological applications of Swellix are demonstrated by computing the reduction in conformational space and entropy due to naturally modified nucleotides in tRNA sequences and by motif searches in Human Endogenous Retroviral (HERV) RNA sequences. The Swellix motif search reveals occurrences of protein and drug binding motifs in the HERV RNA ensemble that do not occur in minimum free energy or centroid predicted structures. Swellix presents significant improvements over Crumple in terms of efficiency and memory use. The efficient parallelization of Swellix enables the computation of sequences as long as 418 nucleotides with sufficient experimental constraints. Thus, Swellix provides a practical alternative to free energy minimization tools when multiple structures, kinetically determined structures, or complex RNA-RNA and RNA-protein interactions are present in an RNA folding problem.
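The toy sketch below mirrors the helix-counting idea: enumerate candidate helices, then keep only combinations whose paired regions are nested or disjoint (non-pseudoknotted). The pairing rules, minimum lengths, and function names are simplifications and assumptions, not Swellix's actual data structures or bundling strategy.

    from itertools import combinations

    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def helices(seq, min_len=3, min_loop=3):
        """Enumerate candidate helices as (i, j, length): positions i..i+length-1
        pair with j..j-length+1, with at least min_loop unpaired bases between."""
        n, out = len(seq), []
        for i in range(n):
            for j in range(i + min_loop + 1, n):
                k = 0
                while i + k < j - k - min_loop and (seq[i + k], seq[j - k]) in PAIRS:
                    k += 1
                if k >= min_len:
                    out.append((i, j, k))
        return out

    def compatible(h1, h2):
        """Keep two helices only if their paired regions do not overlap and are
        either nested or disjoint (i.e. no pseudoknots)."""
        (i1, j1, k1), (i2, j2, k2) = sorted([h1, h2])
        spans1 = [(i1, i1 + k1 - 1), (j1 - k1 + 1, j1)]
        spans2 = [(i2, i2 + k2 - 1), (j2 - k2 + 1, j2)]
        for a, b in spans1:
            for c, d in spans2:
                if a <= d and c <= b:          # overlapping paired regions
                    return False
        nested = i1 + k1 - 1 < i2 and j2 < j1 - k1 + 1   # h2 inside h1's loop
        disjoint = j1 < i2                                # h2 entirely after h1
        return nested or disjoint

    def structures(seq, max_helices=3):
        """Yield all combinations of up to max_helices mutually compatible helices."""
        hs = helices(seq)
        for r in range(1, max_helices + 1):
            for combo in combinations(hs, r):
                if all(compatible(a, b) for a, b in combinations(combo, 2)):
                    yield combo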
Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction
Agulleiro, Jose-Ignacio; Fernández, José Jesús
2012-01-01
Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
NASA Astrophysics Data System (ADS)
Prychynenko, Diana; Sitte, Matthias; Litzius, Kai; Krüger, Benjamin; Bourianoff, George; Kläui, Mathias; Sinova, Jairo; Everschor-Sitte, Karin
2018-01-01
Inspired by the human brain, there is a strong effort to find alternative models of information processing capable of imitating the high energy efficiency of neuromorphic information processing. One possible realization of cognitive computing involves reservoir computing networks. These networks are built out of nonlinear resistive elements which are recursively connected. We propose that a Skyrmion network embedded in magnetic films may provide a suitable physical implementation for reservoir computing applications. The significant key ingredient of such a network is a two-terminal device with nonlinear voltage characteristics originating from magnetoresistive effects, such as the anisotropic magnetoresistance or the recently discovered noncollinear magnetoresistance. The most basic element for a reservoir computing network built from "Skyrmion fabrics" is a single Skyrmion embedded in a ferromagnetic ribbon. In order to pave the way towards reservoir computing systems based on Skyrmion fabrics, we simulate and analyze (i) the current flow through a single magnetic Skyrmion due to the anisotropic magnetoresistive effect and (ii) the combined physics of local pinning and the anisotropic magnetoresistive effect.
Impacts: NIST Building and Fire Research Laboratory (technical and societal)
NASA Astrophysics Data System (ADS)
Raufaste, N. J.
1993-08-01
The Building and Fire Research Laboratory (BFRL) of the National Institute of Standards and Technology (NIST) is dedicated to the life cycle quality of constructed facilities. The report describes major effects of BFRL's program on building and fire research. Contents of the document include: structural reliability; nondestructive testing of concrete; structural failure investigations; seismic design and construction standards; rehabilitation codes and standards; alternative refrigerants research; HVAC simulation models; thermal insulation; residential equipment energy efficiency; residential plumbing standards; computer image evaluation of building materials; corrosion-protection for reinforcing steel; prediction of the service lives of building materials; quality of construction materials laboratory testing; roofing standards; simulating fires with computers; fire safety evaluation system; fire investigations; soot formation and evolution; cone calorimeter development; smoke detector standards; standard for the flammability of children's sleepwear; smoldering insulation fires; wood heating safety research; in-place testing of concrete; communication protocols for building automation and control systems; computer simulation of the properties of concrete and other porous materials; cigarette-induced furniture fires; carbon monoxide formation in enclosure fires; halon alternative fire extinguishing agents; turbulent mixing research; materials fire research; furniture flammability testing; standard for the cigarette ignition resistance of mattresses; support of navy firefighter trainer program; and using fire to clean up oil spills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework, identifying groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters with the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The Alternating Expectation Conditional Maximization (AECM) algorithm, a faster variant of EM, can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results from our simulation experiments show improved performance in both number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
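For readers unfamiliar with the underlying iteration, the sketch below shows a plain EM fit of a one-dimensional Gaussian mixture; it conveys the E-step/M-step structure that AECM and APECM accelerate, but it is not the authors' Gaussian autoregressive regression mixture model.

```python
import numpy as np

# Plain EM for a one-dimensional Gaussian mixture -- a minimal sketch of the
# E-step/M-step iteration that AECM/APECM accelerate; it is NOT the authors'
# autoregressive-regression mixture model.
def em_gmm_1d(x, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                    # initial component means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each data point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

data_rng = np.random.default_rng(1)
x = np.concatenate([data_rng.normal(-2, 1, 500), data_rng.normal(3, 0.5, 500)])
print(em_gmm_1d(x, k=2))
```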
Computational assessment of organic photovoltaic candidate compounds
NASA Astrophysics Data System (ADS)
Borunda, Mario; Dai, Shuo; Olivares-Amaya, Roberto; Amador-Bedolla, Carlos; Aspuru-Guzik, Alan
2015-03-01
Organic photovoltaic (OPV) cells are emerging as a possible renewable alternative to the petroleum-based resources needed to meet our growing demand for energy. Although not as efficient as silicon-based cells, OPV cells have the advantage that their manufacturing cost is potentially lower. The Harvard Clean Energy Project, using a cheminformatic approach of pattern recognition and machine learning strategies, has ranked a molecular library of more than 2.6 million candidate compounds based on their performance as possible OPV materials. Here, we present a ranking of the top 1000 molecules for use as photovoltaic materials based on their optical absorption properties obtained via time-dependent density functional theory. This computational search has revealed the molecular motifs shared by the set of most promising molecules.
Face pose tracking using the four-point algorithm
NASA Astrophysics Data System (ADS)
Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen
2017-06-01
In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
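As a rough illustration of pose recovery from four point correspondences, the sketch below uses OpenCV's solvePnP with its P3P solver; the 3D face-model points, camera intrinsics, and "ground-truth" pose are placeholder values, and this stands in for, rather than reproduces, the paper's four-point algorithm.

```python
import numpy as np
import cv2

# Rough illustration of pose recovery from four 3D-2D correspondences using
# OpenCV's P3P solver. Model points, intrinsics, and the reference pose are
# placeholder values, not the paper's data or algorithm.
model_points = np.array([          # crude 3D face model in mm (hypothetical)
    [0.0, 0.0, 0.0],               # nose tip
    [0.0, -60.0, -10.0],           # chin
    [-40.0, 35.0, -25.0],          # left eye outer corner
    [40.0, 35.0, -25.0],           # right eye outer corner
], dtype=np.float64)

focal, cx, cy = 600.0, 320.0, 240.0
camera_matrix = np.array([[focal, 0, cx],
                          [0, focal, cy],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(4)          # assume no lens distortion

# Synthesize 2D landmarks by projecting the model with a known pose, so the
# recovered pose can be checked against it.
rvec_true = np.array([0.1, 0.3, 0.05])
tvec_true = np.array([10.0, -5.0, 400.0])
image_points, _ = cv2.projectPoints(model_points, rvec_true, tvec_true,
                                    camera_matrix, dist_coeffs)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points.reshape(-1, 2),
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_P3P)
print("recovered rotation vector:", rvec.ravel())   # ~ rvec_true
print("recovered translation:   ", tvec.ravel())    # ~ tvec_true
```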
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
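For reference, the Savage-Dickey identity invoked above takes the following general form for nested models in which the null model M_0 fixes a parameter of the alternative model M_1 at theta = theta_0; this is a standard statement of the result, not the voxel-wise SPM implementation.

```latex
% Savage-Dickey density ratio for nested models: the null model M_0 fixes a
% parameter of the alternative model M_1 at \theta = \theta_0.
\[
  \mathrm{BF}_{01}
  \;=\;
  \frac{p(y \mid M_0)}{p(y \mid M_1)}
  \;=\;
  \frac{p(\theta = \theta_0 \mid y, M_1)}{p(\theta = \theta_0 \mid M_1)}
\]
% i.e. the Bayes factor equals the ratio of the posterior to the prior density
% under the alternative model, evaluated at the value fixed by the null.
```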
Forbes, Thomas P.; Degertekin, F. Levent; Fedorov, Andrei G.
2010-01-01
Electrochemistry and ion transport in a planar array of mechanically-driven, droplet-based ion sources are investigated using an approximate time scale analysis and in-depth computational simulations. The ion source is modeled as a controlled-current electrolytic cell, in which the piezoelectric transducer electrode, which mechanically drives the charged droplet generation using ultrasonic atomization, also acts as the oxidizing/corroding anode (positive mode). The interplay between advective and diffusive transport of electrochemically generated ions is analyzed as a function of the transducer duty cycle and electrode location. A time scale analysis of the relative importance of advective vs. diffusive ion transport provides valuable insight into the optimality, from the ionization perspective, of alternative design and operation modes of the ion source. A computational model based on the solution of time-averaged, quasi-steady advection-diffusion equations for electroactive species transport is used to substantiate the conclusions of the time scale analysis. The results show that electrochemical ion generation at the piezoelectric transducer electrodes located at the back-side of the ion source reservoir results in poor ionization efficiency due to insufficient time for the charged analyte to diffuse away from the electrode surface to the ejection location, especially at near 100% duty cycle operation. Reducing the duty cycle of droplet/analyte ejection increases the analyte residence time and, in turn, improves ionization efficiency, but at the expense of reduced device throughput. For applications where this is undesirable, i.e., multiplexed and disposable device configurations, an alternative electrode location is incorporated. By moving the charging electrode to the nozzle surface, the diffusion length scale is greatly reduced, drastically improving ionization efficiency. The ionization efficiency for all operating conditions considered is expressed as a function of the dimensionless Peclet number, which defines the relative effect of advection as compared to diffusion. This analysis is general enough to elucidate an important role of electrochemistry in the ionization efficiency of any arrayed ion sources, be they mechanically driven or electrosprays, and is vital for determining optimal design and operation conditions. PMID:20607111
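As a point of reference, the dimensionless Peclet number used above to compare advective and diffusive ion transport is conventionally defined from the two transport time scales; the symbols below are generic (characteristic velocity u, length L, diffusivity D), and the specific scales for the droplet ion source are given in the paper.

```latex
% Standard definition of the Peclet number used in such time-scale analyses:
% the ratio of the diffusive to the advective time scale for a species with
% diffusivity D, characteristic length L, and characteristic velocity u.
\[
  \tau_{\mathrm{adv}} = \frac{L}{u}, \qquad
  \tau_{\mathrm{diff}} = \frac{L^{2}}{D}, \qquad
  \mathrm{Pe} \;=\; \frac{\tau_{\mathrm{diff}}}{\tau_{\mathrm{adv}}}
            \;=\; \frac{u\,L}{D}.
\]
```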
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full-dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that, for the large sample sizes typical of satellite-derived remote sensing scenes, the resulting accuracies are comparable or superior to those of more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single-grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special attention must be paid to the numerics to handle skewed cells and preserve conservation. Researchers have therefore long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
Multiagent optimization system for solving the traveling salesman problem (TSP).
Xie, Xiao-Feng; Liu, Jiming
2009-04-01
The multiagent optimization system (MAOS) is a nature-inspired method that supports cooperative search through the self-organization of a group of compact agents situated in an environment with certain shared public knowledge. Moreover, each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), a classic hard computational problem. Based on a simplified MAOS version, in which each agent operates on extremely limited declarative knowledge, some simple and efficient components for solving TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive with some state-of-the-art algorithms, including Lin-Kernighan-Helsgaun, IBGLK, and PHGA, although MAOS does not use any explicit local search during the runtime. The contributions of MAOS components are investigated. They indicate that certain clues can be helpful for making suitable selections before time-consuming computation. More importantly, they show that the cooperative search of agents can achieve overall good performance with a macro rule in switch mode, which deploys alternate search rules whose offline performances are negatively correlated. Using simple alternate rules may avoid the difficulty of seeking an omnipotent rule that is efficient for a large data set.
An Alternative Surgical Method for Treatment of Osteoid Osteoma
Gökalp, Mehmet Ata; Gözen, Abdurrahim; Ünsal, Seyyid Şerif; Önder, Haci; Güner, Savaş
2016-01-01
Background An osteoid osteoma is a benign bone tumor that tends to be <1 cm in size. The tumor is characterized by night-time pain that may be relieved by aspirin or other non-steroidal anti-inflammatory drugs. Osteoid osteoma can be treated with various conservative and surgical methods, but these have some risks and difficulties. The purpose of the present study was to present an alternative treatment method for osteoid osteoma and the results we obtained. Material/Methods In the period from 2010 to 2014, 10 patients with osteoid osteoma underwent nidus excision by using a safe alternative method in an operating room (OR) with no computed tomography (CT). The localization of the tumor was determined by use of a CT-guided Kirschner wire in the radiology unit, then, in the OR the surgical intervention was performed without removing the Kirschner wire. Results Following the alternative intervention, all the patients were completely relieved of pain. In the follow-up, no recurrence or complication occurred. Conclusions The presented alternative method for treating osteoid osteoma is an efficient and practical procedure for surgeons working in clinics that lack specialized equipment. PMID:26898923
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally unfeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
Nonuniform code concatenation for universal fault-tolerant quantum computing
NASA Astrophysics Data System (ADS)
Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza
2017-09-01
Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.
Numerical Algorithms for Acoustic Integrals - The Devil is in the Details
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
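For orientation, the retarded-time relation at the heart of such formulations can be written in the following generic form, with observer position x, source trajectory y(tau), and sound speed c; the specific acoustic formulations evaluated in the paper build on it.

```latex
% Generic retarded-time relation underlying such integral formulations: the
% source time \tau contributing to the signal received at observer position
% \mathbf{x} and observer time t satisfies an implicit equation through the
% source position \mathbf{y}(\tau) and the sound speed c:
\[
  \tau \;=\; t \;-\; \frac{\lvert \mathbf{x} - \mathbf{y}(\tau) \rvert}{c}.
\]
% When the source moves supersonically this equation can have multiple roots,
% which is why retarded-time algorithms become difficult in that regime.
```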
An algorithmic approach to solving polynomial equations associated with quantum circuits
NASA Astrophysics Data System (ADS)
Gerdt, V. P.; Zinin, M. V.
2009-12-01
In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called a lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, the system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.
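To make the "counting roots" idea concrete, the toy sketch below enumerates the common zeros of a small Boolean polynomial system by brute force; the polynomials are arbitrary examples, and the paper's point is precisely that Gröbner-basis methods avoid this exponential enumeration.

```python
from itertools import product

# Brute-force count of the solutions of a system of Boolean polynomials over
# F_2 -- a tiny illustration of the "counting roots" idea mentioned above,
# not the Groebner-basis algorithms of the paper (which avoid enumeration).
# Each polynomial is given as a Python function of the Boolean variables.
system = [
    lambda x1, x2, x3: (x1 * x2 + x3) % 2,        # x1*x2 + x3 = 0
    lambda x1, x2, x3: (x1 + x2 + 1) % 2,         # x1 + x2 + 1 = 0
]

def count_roots(polys, n_vars):
    """Count assignments in {0,1}^n_vars at which every polynomial vanishes."""
    return sum(
        all(p(*assignment) == 0 for p in polys)
        for assignment in product((0, 1), repeat=n_vars)
    )

print(count_roots(system, 3))   # exhaustive search over 2**3 assignments
```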
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
GridPP - Preparing for LHC Run 2 and the Wider Context
NASA Astrophysics Data System (ADS)
Coles, Jeremy
2015-12-01
This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud resources - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC-based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover the principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira de Oliveira & Levkowitz, 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013), and CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU is a compelling alternative with outstanding parallel processing capability, cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphics rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; and 3) GPUs, as specialized graphics devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
NASA Technical Reports Server (NTRS)
Wagner, Michael Broderick
1987-01-01
The modeled cascade cells offer an alternative to conventional series cascade designs that require a monolithic intercell ohmic contact. Selective electrodes provide a simple means of fabricating three-terminal devices, which can be configured in complementary pairs to circumvent the attendant losses and fabrication complexities of intercell ohmic contacts. Moreover, selective electrodes allow incorporation of additional layers in the upper subcell which can improve spectral response and increase radiation tolerance. Realistic simulations of such cells operating under one-sun AM0 conditions show that the seven-layer structure is optimum from the standpoint of beginning-of-life efficiency and radiation tolerance. Projected efficiencies exceed 26 percent. Under higher concentration factors, it should be possible to achieve efficiencies beyond 30 percent. However, to simulate operation at high concentration will require a model for resistive losses. Overall, these devices appear to be a promising contender for future space applications.
Group Additivity in Ligand Binding Affinity: An Alternative Approach to Ligand Efficiency.
Reynolds, Charles H; Reynolds, Ryan C
2017-12-26
Group additivity is a concept that has been successfully applied to a variety of thermochemical and kinetic properties. This includes drug discovery, where functional group additivity is often assumed in ligand binding. Ligand efficiency can be recast as a special case of group additivity where ΔG/HA is the group equivalent (HA is the number of non-hydrogen atoms in a ligand). Analysis of a large data set of protein-ligand binding affinities (K_i) for diverse targets shows that in general ligand binding is distinctly nonlinear. It is possible to create a group equivalent scheme for ligand binding, but only in the context of closely related proteins, at least with regard to size. This finding has broad implications for drug design from both experimental and computational points of view. It also offers a path forward for a more general scheme to assess the efficiency of ligand binding.
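For concreteness, the ligand-efficiency-as-group-equivalent idea described above can be written as follows, using the common convention that relates the binding free energy to the measured inhibition constant; sign conventions vary, and LE is often quoted with a minus sign so that it is a positive quantity.

```latex
% Ligand efficiency viewed as a group equivalent: the binding free energy per
% heavy (non-hydrogen) atom, with the free energy obtained from the measured
% inhibition constant K_i under the usual convention (R = gas constant,
% T = absolute temperature, N_HA = number of non-hydrogen atoms).
\[
  \Delta G_{\mathrm{bind}} \;=\; R\,T \ln K_i,
  \qquad
  \mathrm{LE} \;=\; \frac{\Delta G_{\mathrm{bind}}}{N_{\mathrm{HA}}}.
\]
```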
Ink jet assisted metallization for low cost flat plate solar cells
NASA Technical Reports Server (NTRS)
Teng, K. F.; Vest, R. W.
1987-01-01
Computer-controlled ink-jet-assisted metallization of the front surface of solar cells with metalorganic silver inks offers a maskless alternative method to conventional photolithography and screen printing. This method can provide low cost, fine resolution, reduced process complexity, avoidance of degradation of the p-n junction by firing at lower temperature, and uniform line films on the rough surface of solar cells. The metallization process involves belt furnace firing and thermal spiking. With multilayer ink jet printing and firing, solar cells of about 5-6 percent efficiency without antireflection (AR) coating can be produced. With a titanium thin-film underlayer as an adhesion promoter, solar cells of average efficiency 8.08 percent without AR coating can be obtained. This efficiency value is approximately equal to that of thin-film solar cells of the same lot. Problems with regard to the lower inorganic content of the inks and contact resistance are noted.
Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation
NASA Technical Reports Server (NTRS)
Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.
1994-01-01
Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
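As background, the sketch below illustrates forward-mode automatic differentiation with dual numbers in a toy setting; it conveys the chain-rule propagation that AD tools perform but is not the ADIFOR-generated Fortran code used in the study.

```python
import math

# Minimal forward-mode automatic differentiation with dual numbers -- a toy
# illustration of the idea behind source-transformation tools like ADIFOR,
# not the Fortran derivative code those tools generate.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def sin(x):
    # propagate both the value and the derivative through sin
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# f(x) = x*sin(x) + 3x; seed the input derivative with 1.0 to get df/dx
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
print(f.value, f.deriv)   # f(2) and f'(2) = sin(2) + 2*cos(2) + 3
```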
Szyperski, Piotr D
2018-06-01
The purpose of this research was to evaluate the applicability of fractal dimension (FD) estimators for assessing lateral shearing interferometric (LSI) measurements of tear film surface quality. Retrospective recordings of tear film measured with LSI were used: 69 from healthy subjects and 41 from patients diagnosed with dry eye syndrome. Five surface quality descriptors were considered: four based on FD and a previously reported descriptor operating in the spatial frequency domain (M_2), presenting the temporal kinetics of the post-blink tear film. A set of 12 regression parameters was extracted and analyzed for classification purposes. The classifiers are assessed in terms of receiver operating characteristics and areas under their curves (AUC). The computational loads are also estimated. The maximum AUC of 82.4% was achieved for M_2, closely followed by the binary box-counting (BBC) FD estimator with AUC=78.6%. For all descriptors, statistically significant differences between the subject groups were found (p<0.05). The BBC FD estimator had the highest empirical computational efficiency, about 30% faster than M_2, while the estimator based on differential box-counting exhibited the lowest efficiency (4.5 times slower than the best one). Concluding, FD estimators can be utilized for quantitative assessment of tear film kinetics. They provide a viable alternative to the previously used spectral counterparts, and at the same time allow higher computational efficiency.
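For illustration, a minimal binary box-counting estimator of the kind referred to above might look like the following sketch; the box sizes, the synthetic image, and the log-log fit are illustrative choices, not the authors' processing pipeline.

```python
import numpy as np

# Generic binary box-counting fractal-dimension estimator for a 2D binary
# image -- a sketch of the BBC idea referred to above, not the authors'
# implementation or their tear-film preprocessing.
def box_count_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in box_sizes:
        h, w = binary_img.shape
        # trim so the image tiles exactly into s x s boxes
        trimmed = binary_img[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing any pixel
        counts.append(occupied)
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
img = np.zeros((256, 256), dtype=bool)
img[rng.integers(0, 256, 5000), rng.integers(0, 256, 5000)] = True
print(box_count_dimension(img))
```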
2014-01-01
Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic-sequencing data have emerged; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps; including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611
10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Alternative methods for determining energy efficiency or energy use. 429.70 Section 429.70 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION CERTIFICATION....70 Alternative methods for determining energy efficiency or energy use. (a) General. A manufacturer...
10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Alternative methods for determining energy efficiency or energy use. 429.70 Section 429.70 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION CERTIFICATION....70 Alternative methods for determining energy efficiency or energy use. Link to an amendment...
Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.
Kiran Kumar, G R; Reddy, M Ramasubba
2018-06-08
Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) due to the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach to extract the SSVEP component effectively without involving extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing temporal information, and it does not generalize SSVEP based on rigid templates. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results. The experimental results show that πCA provides a significant improvement in accuracy compared to standard CCA and MEC in low SNR conditions. The results demonstrate that πCA provides better detection accuracy than CCA and accuracy on par with that of MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
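For context, the standard CCA detector that πCA is compared against can be sketched as follows; the sampling rate, candidate frequencies, and synthetic EEG are placeholder values, and scikit-learn's CCA is used for convenience rather than being the paper's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Standard CCA-based SSVEP detection -- the baseline the paper compares
# against, not the proposed periodic component analysis. Sampling rate,
# stimulus frequencies, and the synthetic EEG below are illustrative only.
fs, duration, n_channels = 250, 2.0, 8
t = np.arange(0, duration, 1.0 / fs)

def reference_signals(freq, n_harmonics=2):
    """Sine/cosine reference set for one stimulation frequency."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs).T                  # shape: (samples, 2*n_harmonics)

def cca_score(eeg, freq):
    """Maximum canonical correlation between EEG and the reference set."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, reference_signals(freq))
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Synthetic multichannel "EEG": a 12 Hz response buried in noise.
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, (len(t), n_channels))
eeg += 0.5 * np.sin(2 * np.pi * 12.0 * t)[:, None]

candidates = [10.0, 12.0, 15.0]
scores = {f: cca_score(eeg, f) for f in candidates}
print(scores, "->", max(scores, key=scores.get))
```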
TAPAS: tools to assist the targeted protein quantification of human alternative splice variants.
Yang, Jae-Seong; Sabidó, Eduard; Serrano, Luis; Kiel, Christina
2014-10-15
In proteomes of higher eukaryotes, many alternative splice variants can only be detected by their shared peptides. This makes it highly challenging to use peptide-centric mass spectrometry to distinguish and to quantify protein isoforms resulting from alternative splicing events. We have developed two complementary algorithms based on linear mathematical models to efficiently compute a minimal set of shared and unique peptides needed to quantify a set of isoforms and splice variants. Further, we developed a statistical method to estimate the splice variant abundances based on stable isotope labeled peptide quantities. The algorithms and databases are integrated in a web-based tool, and we have experimentally tested the limits of our quantification method using spiked proteins and cell extracts. The TAPAS server is available at URL http://davinci.crg.es/tapas/. luis.serrano@crg.eu or christina.kiel@crg.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
DMRfinder: efficiently identifying differentially methylated regions from MethylC-seq data.
Gaspar, John M; Hart, Ronald P
2017-11-29
DNA methylation is an epigenetic modification that is studied at a single-base resolution with bisulfite treatment followed by high-throughput sequencing. After alignment of the sequence reads to a reference genome, methylation counts are analyzed to determine genomic regions that are differentially methylated between two or more biological conditions. Even though a variety of software packages is available for different aspects of the bioinformatics analysis, they often produce results that are biased or require excessive computational requirements. DMRfinder is a novel computational pipeline that identifies differentially methylated regions efficiently. Following alignment, DMRfinder extracts methylation counts and performs a modified single-linkage clustering of methylation sites into genomic regions. It then compares methylation levels using beta-binomial hierarchical modeling and Wald tests. Among its innovative attributes are the analyses of novel methylation sites and methylation linkage, as well as the simultaneous statistical analysis of multiple sample groups. To demonstrate its efficiency, DMRfinder is benchmarked against other computational approaches using a large published dataset. Contrasting two replicates of the same sample yielded minimal genomic regions with DMRfinder, whereas two alternative software packages reported a substantial number of false positives. Further analyses of biological samples revealed fundamental differences between DMRfinder and another software package, despite the fact that they utilize the same underlying statistical basis. For each step, DMRfinder completed the analysis in a fraction of the time required by other software. Among the computational approaches for identifying differentially methylated regions from high-throughput bisulfite sequencing datasets, DMRfinder is the first that integrates all the post-alignment steps in a single package. Compared to other software, DMRfinder is extremely efficient and unbiased in this process. DMRfinder is free and open-source software, available on GitHub ( github.com/jsh58/DMRfinder ); it is written in Python and R, and is supported on Linux.
More efficient optimization of long-term water supply portfolios
NASA Astrophysics Data System (ADS)
Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.
2009-03-01
The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
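The control variate idea can be illustrated with a toy Monte Carlo estimate; the sketch below is a generic example rather than the coupled hydrologic-economic simulation, but it shows how knowledge of a correlated input with known mean reduces the variance of the output estimate.

```python
import numpy as np

# Generic control-variate illustration of the variance-reduction idea
# described above (not the coupled hydrologic-economic portfolio model).
# We estimate E[Y] where Y = exp(X) and X ~ N(0, 1), using X itself as the
# control variate because its mean (0) is known exactly.
rng = np.random.default_rng(42)
n = 10_000
x = rng.normal(0.0, 1.0, n)
y = np.exp(x)

naive = y.mean()

# Near-optimal coefficient c* = Cov(Y, X) / Var(X), estimated from the sample.
c = np.cov(y, x)[0, 1] / x.var()
cv = (y - c * (x - 0.0)).mean()          # 0.0 is the known E[X]

print("naive estimate:          ", naive)
print("control-variate estimate:", cv)    # both target exp(0.5) ~ 1.6487
print("residual variance ratio: ", (y - c * x).var() / y.var())
```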
The Flight Track Noise Impact Model
NASA Technical Reports Server (NTRS)
Burn, Melissa; Carey, Jeffrey; Czech, Joseph; Wingrove, Earl R., III
1997-01-01
To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. To accomplish this, NASA is building an Aviation System Analysis Capability (ASAC). The Flight Track Noise Impact Model (FTNIM) has been developed as part of the ASAC. Its primary purpose is to enable users to examine the impact that quieter aircraft technologies and/or operations might have on air carrier operating efficiency at any one of 8 selected U.S. airports. The analyst selects an airport and case year for study, chooses a set of flight tracks for use in the case, and has the option of reducing the noise of the aircraft by 3, 6, or 10 decibels. Two sets of flight tracks are available for each airport: one that represents actual current conditions, including noise abatement tracks, which avoid flying over noise-sensitive areas; and a second set that offers more efficient routing. FTNIM computes the resultant noise impact and the time and distance saved for each operation on the more efficient, alternate tracks. Noise impact is characterized in three ways: the size of the noise contour footprint, the number of people living within the contours, and the number of homes located in the same contours. Distance and time savings are calculated by comparing the noise abatement flight path length to the more efficient alternate routing.
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G
2017-02-22
We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.
1983-01-01
Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
ERIC Educational Resources Information Center
Zamora, Ramon M.
Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…
An Autonomous Underwater Recorder Based on a Single Board Computer
Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson
2015-01-01
As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance, as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Design and construction of more affordable alternatives from scratch can become complex because they require in-depth knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a highly flexible, low-cost alternative to commercial recorders was successfully designed and built around a Raspberry Pi single board computer. A properly working prototype was assembled and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated. It is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and manmade acoustic events, among others. Although power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantage offered by the proposed device is its high customizability, lower development time and, inherently, its cost. PMID:26076479
Report of the Task Force on Computer Charging.
ERIC Educational Resources Information Center
Computer Co-ordination Group, Ottawa (Ontario).
The objectives of the Task Force on Computer Charging as approved by the Committee of Presidents of Universities of Ontario were: (1) to identify alternative methods of costing computing services; (2) to identify alternative methods of pricing computing services; (3) to develop guidelines for the pricing of computing services; (4) to identify…
Deng, Shaozhong; Xue, Changfeng; Baumketner, Andriy; Jacobs, Donald; Cai, Wei
2013-01-01
This paper extends the image charge solvation model (ICSM) [J. Chem. Phys. 131, 154103 (2009)], a hybrid explicit/implicit method to treat electrostatic interactions in computer simulations of biomolecules formulated for spherical cavities, to prolate spheroidal and triaxial ellipsoidal cavities, designed to better accommodate non-spherical solutes in molecular dynamics (MD) simulations. In addition to the utilization of a general truncated octahedron as the MD simulation box, central to the proposed extension is an image approximation method to compute the reaction field for a point charge placed inside such a non-spherical cavity by using a single image charge located outside the cavity. The resulting generalized image charge solvation model (GICSM) is tested in simulations of liquid water, and the results are analyzed in comparison with those obtained from the ICSM simulations as a reference. We find that, for improved computational efficiency due to smaller simulation cells and consequently a less number of explicit solvent molecules, the generalized model can still faithfully reproduce known static and dynamic properties of liquid water at least for systems considered in the present paper, indicating its great potential to become an accurate but more efficient alternative to the ICSM when bio-macromolecules of irregular shapes are to be simulated. PMID:23913979
Networks and games for precision medicine.
Biane, Célia; Delaplace, Franck; Klaudel, Hanna
2016-12-01
Recent advances in omics technologies provide the leverage for the emergence of precision medicine, which aims at personalizing therapy to the patient. In this undertaking, computational methods play a central role in assisting physicians in their clinical decision-making by combining data analysis and systems biology modelling. Complex diseases such as cancer or diabetes arise from the intricate interplay of various biological molecules. Therefore, assessing drug efficiency requires studying the effects of elementary perturbations caused by diseases on relevant biological networks. In this paper, we propose a computational framework called Network-Action Game, applied to the problem of selecting the best drug, which combines Game Theory and discrete models of dynamics (Boolean networks). Decision-making is modelled using Game Theory, which defines the process of drug selection among alternative possibilities, while Boolean networks are used to model the effects of the interplay between disease and drug actions on the patient's molecular system. The actions/strategies of disease and drugs are focused on arc alterations of the interactome. The efficiency of this framework has been evaluated for drug prediction on a model of breast cancer signalling. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
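To illustrate the discrete dynamics component, the sketch below runs a synchronous update of a tiny, hypothetical Boolean network; the actual framework couples such dynamics with a game-theoretic selection over arc alterations, which is not reproduced here.

```python
# Minimal synchronous Boolean-network update -- the kind of discrete dynamics
# used in the framework above. The three-node network here is hypothetical,
# not the breast-cancer signalling model studied in the paper.
rules = {
    "A": lambda s: s["C"],                    # A is activated by C
    "B": lambda s: s["A"] and not s["C"],     # B needs A and is inhibited by C
    "C": lambda s: not s["B"],                # C is inhibited by B
}

def step(state):
    """Apply every node's update rule simultaneously (synchronous update)."""
    return {node: rule(state) for node, rule in rules.items()}

state = {"A": True, "B": False, "C": False}
for _ in range(6):
    print(state)
    state = step(state)
```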
Madrigal-Arias, Jorge Enrique; Argumedo-Delira, Rosalba; Alarcón, Alejandro; Mendoza-López, Ma. Remedios; García-Barradas, Oscar; Cruz-Sánchez, Jesús Samuel; Ferrera-Cerrato, Ronald; Jiménez-Fernández, Maribel
2015-01-01
In an effort to develop alternate techniques to recover metals from waste electrical and electronic equipment (WEEE), this research evaluated the bioleaching efficiency of gold (Au), copper (Cu) and nickel (Ni) by two strains of Aspergillus niger in the presence of gold-plated finger integrated circuits found in computer motherboards (GFICMs) and cellular phone printed circuit boards (PCBs). These three metals were analyzed for their commercial value and their diverse applications in the industry. Au-bioleaching ranged from 42 to 1% for Aspergillus niger strain MXPE6; with the combination of Aspergillus niger MXPE6 + Aspergillus niger MX7, the Au-bioleaching was 87 and 28% for PCBs and GFICMs, respectively. In contrast, the bioleaching of Cu by Aspergillus niger MXPE6 was 24 and 5%; using the combination of both strains, the values were 0.2 and 29% for PCBs and GFICMs, respectively. Fungal Ni-leaching was only found for PCBs, but with no significant differences among treatments. Improvement of the metal recovery efficiency by means of fungal metabolism is also discussed. PMID:26413051
High-performance noncontact thermal diode via asymmetric nanostructures
NASA Astrophysics Data System (ADS)
Shen, Jiadong; Liu, Xianglei; He, Huan; Wu, Weitao; Liu, Baoan
2018-05-01
Electric diodes, though laying the foundation of modern electronics and information processing industries, suffer from ineffectiveness and even failure at high temperatures. Thermal diodes are promising alternatives to relieve above limitations, but usually possess low rectification ratios, and how to obtain a high-performance thermal rectification effect is still an open question. This paper proposes an efficient contactless thermal diode based on the near-field thermal radiation of asymmetric doped silicon nanostructures. The rectification ratio computed via exact scattering theories is demonstrated to be as high as 10 at a nanoscale gap distance and period, outperforming the counterpart flat-plate diode by more than one order of magnitude. This extraordinary performance mainly lies in the higher forward and lower reverse radiative heat flux within the low frequency band compared with the counterpart flat-plate diode, which is caused by a lower loss and smaller cut-off wavevector of nanostructures for the forward and reversed scheme, respectively. This work opens new routes to realize high performance thermal diodes, and may have wide applications in efficient thermal computing, thermal information processing, and thermal management.
Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min
2015-06-01
The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Bourke, Jason M; Witmer, Lawrence M
2016-12-01
We tested the aerodynamic function of nasal conchae in birds using CT data from an adult male wild turkey (Meleagris gallopavo) to construct 3D models of its nasal passage. A series of digital "turbinectomies" were performed on these models, and computational fluid dynamic analyses were run to simulate resting inspiration. Models with turbinates removed were compared to the original, unmodified control airway. Results revealed that the four conchae found in turkeys, along with the crista nasalis, alter the flow of inspired air in ways that can be considered baffle-like. However, these baffle-like functions were remarkably limited in their areal extent, indicating that avian conchae are more functionally independent than originally hypothesized. Our analysis revealed that the conchae of birds are efficient baffles that, along with potential heat and moisture transfer, serve to efficiently move air to specific regions of the nasal passage. This alternate function of conchae has implications for their evolution in birds and other amniotes. Copyright © 2016 Elsevier B.V. All rights reserved.
Madrigal-Arias, Jorge Enrique; Argumedo-Delira, Rosalba; Alarcón, Alejandro; Mendoza-López, Ma Remedios; García-Barradas, Oscar; Cruz-Sánchez, Jesús Samuel; Ferrera-Cerrato, Ronald; Jiménez-Fernández, Maribel
2015-01-01
In an effort to develop alternate techniques to recover metals from waste electrical and electronic equipment (WEEE), this research evaluated the bioleaching efficiency of gold (Au), copper (Cu) and nickel (Ni) by two strains of Aspergillus niger in the presence of gold-plated finger integrated circuits found in computer motherboards (GFICMs) and cellular phone printed circuit boards (PCBs). These three metals were analyzed for their commercial value and their diverse applications in the industry. Au-bioleaching ranged from 42 to 1% for Aspergillus niger strain MXPE6; with the combination of Aspergillus niger MXPE6 + Aspergillus niger MX7, the Au-bioleaching was 87 and 28% for PCBs and GFICMs, respectively. In contrast, the bioleaching of Cu by Aspergillus niger MXPE6 was 24 and 5%; using the combination of both strains, the values were 0.2 and 29% for PCBs and GFICMs, respectively. Fungal Ni-leaching was only found for PCBs, but with no significant differences among treatments. Improvement of the metal recovery efficiency by means of fungal metabolism is also discussed.
Stochastic computing with biomolecular automata
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-01-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure. PMID:15215499
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
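As an illustration of the screening step described above, the sketch below computes formation energies per atom from hypothetical total energies and elemental reference energies; all compound names and numbers are placeholders rather than data from the study.

```python
# Minimal sketch (not the Exabyte platform code): screen hypothetical ternary
# compounds for thermodynamic stability from precomputed DFT total energies.
# All numbers below are placeholders, not results from the paper.

def formation_energy_per_atom(e_total, composition, elemental_refs):
    """composition: {element: count}; elemental_refs: energy per atom of each element."""
    n_atoms = sum(composition.values())
    e_ref = sum(count * elemental_refs[el] for el, count in composition.items())
    return (e_total - e_ref) / n_atoms

elemental_refs = {"Li": -1.90, "Mg": -1.54, "Al": -3.74}   # eV/atom, illustrative only
candidates = {
    "LiMgAl2": (-11.95, {"Li": 1, "Mg": 1, "Al": 2}),
    "Li2MgAl": (-9.10,  {"Li": 2, "Mg": 1, "Al": 1}),
}

for name, (e_total, comp) in candidates.items():
    ef = formation_energy_per_atom(e_total, comp, elemental_refs)
    tag = "stable (negative)" if ef < 0 else "unstable"
    print(f"{name}: {ef:+.3f} eV/atom -> {tag}")
```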
An algorithmic framework for multiobjective optimization.
Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P
2013-01-01
Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithms (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (in particular, more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate new, efficient, high-performance algorithms with minimal computational overhead for MO optimization.
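For readers unfamiliar with scalarization, the following minimal Python sketch shows how a weighted-sum scalarization turns a two-objective problem into a family of single-objective problems solved by an off-the-shelf optimizer; the objective functions and weights are illustrative and are not taken from the paper.

```python
# Minimal sketch of weighted-sum scalarization for a two-objective problem.
# The objectives and weights are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # first objective
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # second objective, in conflict with f1
    return x[0] ** 2 + (x[1] - 2.0) ** 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 11):          # sweep the trade-off weight
    scalarized = lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x)
    res = minimize(scalarized, x0=np.zeros(2))
    pareto_points.append((w, f1(res.x), f2(res.x)))

for w, v1, v2 in pareto_points:
    print(f"w={w:.1f}  f1={v1:.3f}  f2={v2:.3f}")
```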
X-ray computed tomography datasets for forensic analysis of vertebrate fossils.
Rowe, Timothy B; Luo, Zhe-Xi; Ketcham, Richard A; Maisano, Jessica A; Colbert, Matthew W
2016-06-07
We describe X-ray computed tomography (CT) datasets from three specimens recovered from Early Cretaceous lakebeds of China that illustrate the forensic interpretation of CT imagery for paleontology. Fossil vertebrates from thinly bedded sediments often shatter upon discovery and are commonly repaired as amalgamated mosaics grouted to a solid backing slab of rock or plaster. Such methods are prone to inadvertent error and willful forgery, and once required potentially destructive methods to identify mistakes in reconstruction. CT is an efficient, nondestructive alternative that can disclose many clues about how a specimen was handled and repaired. These annotated datasets illustrate the power of CT in documenting specimen integrity and are intended as a reference in applying CT more broadly to evaluating the authenticity of comparable fossils.
Survivable algorithms and redundancy management in NASA's distributed computing systems
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1992-01-01
The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
Characterization of the electrical output of flat-plate photovoltaic arrays
NASA Technical Reports Server (NTRS)
Gonzalez, C. C.; Hill, G. M.; Ross, R. G., Jr.
1982-01-01
The electric output of flat-plate photovoltaic arrays changes constantly, due primarily to changes in cell temperature and irradiance level. As a result, array loads such as direct-current to alternating-current power conditioners must be able to accommodate widely varying input levels, while maintaining operation at or near the array maximum power point. The results of an extensive computer simulation study that was used to define the parameters necessary for the systematic design of array/power-conditioner interfaces are presented as normalized ratios of power-conditioner parameters to array parameters, to make the results universally applicable to a wide variety of system sizes, sites, and operating modes. The advantages of maximum power tracking and a technique for computing average annual power-conditioner efficiency are discussed.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.
2016-01-01
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
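A hedged sketch of the general descriptor-based regression idea is shown below, using synthetic descriptors (a lowest-occupied-level proxy and two electronegativities) and kernel ridge regression as one plausible model choice; it is not the paper's pipeline or dataset.

```python
# Minimal sketch of a descriptor-based bandgap regression on synthetic data.
# The feature set, model, and numbers are illustrative, not the paper's exact
# pipeline or dataset.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical descriptors: a lowest occupied Kohn-Sham level proxy and the
# electronegativities of two constituent species (synthetic values).
X = rng.uniform(low=[-8.0, 0.8, 0.8], high=[-2.0, 3.5, 3.5], size=(n, 3))
# Synthetic "bandgap" with noise, only to make the example runnable.
y = 0.4 * (X[:, 1] + X[:, 2]) - 0.3 * X[:, 0] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE on synthetic data: {rmse:.3f} eV")
```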
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained with the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
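The following Python sketch (standing in for the FORTRAN routine, which is not reproduced here) illustrates Newton-Raphson maximum likelihood estimation for a log-linear exponential failure-rate model with right censoring; the simulated data and variable names are assumptions made only for illustration.

```python
# Minimal sketch of Newton-Raphson maximum likelihood for a log-linear
# (exponential) failure-rate model with right censoring: lambda_i = exp(x_i . beta).
# The data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + 1 covariate
beta_true = np.array([-1.0, 0.5])
rate = np.exp(X @ beta_true)
t = rng.exponential(1.0 / rate)                            # failure times
c = rng.exponential(2.0, size=n)                           # censoring times
time = np.minimum(t, c)
event = (t <= c).astype(float)                             # 1 = failure observed

beta = np.zeros(X.shape[1])
for _ in range(25):
    lam = np.exp(X @ beta)
    grad = X.T @ (event - time * lam)                      # score vector
    hess = -(X * (time * lam)[:, None]).T @ X              # Hessian (negative definite)
    step = np.linalg.solve(hess, grad)
    beta = beta - step                                      # Newton-Raphson update
    if np.max(np.abs(step)) < 1e-8:
        break

print("estimated coefficients:", beta)
```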
Alternative Fuels Data Center: College Students Engineer Efficient Vehicles in EcoCAR 2 Competition
Alternative Fuels Data Center: County Fleet Goes Big on Idle Reduction, Ethanol Use, Fuel Efficiency
Harris, Sion Kim; Knight, John R; Van Hook, Shari; Sherritt, Lon; Brooks, Traci; Kulig, John W; Nordt, Christina; Saitz, Richard
2015-01-01
Background Computer self-administration may help busy pediatricians’ offices increase adolescent substance use screening rates efficiently and effectively, if proven to yield valid responses. The CRAFFT screening protocol for adolescents has demonstrated validity as an interview, but a computer self-entry approach needs validity testing. The aim of this study was to evaluate the criterion validity and time efficiency of a computerized adolescent substance use screening protocol implemented by self-administration or clinician-administration. Methods 12- to 17-year-old patients coming for routine care at three primary care clinics completed the computerized screen by both self-administration and clinician-administration during their visit. To account for order effects, we randomly assigned participants to self-administer the screen either before or after seeing their clinician. Both were conducted using a tablet computer and included identical items (any past-12-month use of tobacco, alcohol, drugs; past-3-months frequency of each; and six CRAFFT items). The criterion measure for substance use was the Timeline Follow-Back, and for alcohol/drug use disorder, the Adolescent Diagnostic Interview, both conducted by confidential research assistant-interview after the visit. Tobacco dependence risk was assessed with the self-administered Hooked on Nicotine Checklist (HONC). Analyses accounted for the multi-site cluster sampling design. Results Among 136 participants, mean age was 15.0±1.5 yrs, 54% were girls, 53% were Black or Hispanic, and 67% had ≥3 prior visits with their clinician. Twenty-seven percent reported any substance use (including tobacco) in the past 12 months, 7% met criteria for an alcohol or cannabis use disorder, and 4% were HONC-positive. Sensitivity/specificity of the screener were high for detecting past-12-month use or disorder and did not differ between computer and clinician. Mean completion time was 49 seconds (95%CI 44-54) for computer and 74 seconds (95%CI 68-87) for clinician (paired comparison p<0.001). Conclusions Substance use screening by computer self-entry is a valid and time-efficient alternative to clinician-administered screening. PMID:25774878
Multi-partitioning for ADI-schemes on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1994-01-01
A kind of discrete-operator splitting called Alternating Direction Implicit (ADI) has been found to be useful in simulating fluid flow problems. In particular, it is being used to study the effects of hot exhaust jets from high performance aircraft on landing surfaces. Decomposition techniques that minimize load imbalance and message-passing frequency are described. Three strategies that are investigated for implementing the NAS Scalar Penta-diagonal Parallel Benchmark (SP) are transposition, pipelined Gaussian elimination, and multipartitioning. The multipartitioning strategy, which was used on Ethernet, was found to be the most efficient, although it was considered only a moderate success because of Ethernet's limited communication properties. The efficiency derived largely from the coarse granularity of the strategy, which reduced latencies and allowed overlap of communication and computation.
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2013 CFR
2013-01-01
... report alternative? 905.16 Section 905.16 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT...; renewable energy; efficiency and alternative energy-related research and development; low-income energy... collected for and spent on DSM, renewable energy, efficiency or alternative energy-related research and...
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2014 CFR
2014-01-01
... report alternative? 905.16 Section 905.16 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT...; renewable energy; efficiency and alternative energy-related research and development; low-income energy... collected for and spent on DSM, renewable energy, efficiency or alternative energy-related research and...
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2011 CFR
2011-01-01
... report alternative? 905.16 Section 905.16 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT...; renewable energy; efficiency and alternative energy-related research and development; low-income energy... collected for and spent on DSM, renewable energy, efficiency or alternative energy-related research and...
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2012 CFR
2012-01-01
... report alternative? 905.16 Section 905.16 Energy DEPARTMENT OF ENERGY ENERGY PLANNING AND MANAGEMENT...; renewable energy; efficiency and alternative energy-related research and development; low-income energy... collected for and spent on DSM, renewable energy, efficiency or alternative energy-related research and...
An Automatic Image-Based Modelling Method Applied to Forensic Infography
Zancajo-Blazquez, Sandra; Gonzalez-Aguilera, Diego; Gonzalez-Jorge, Higinio; Hernandez-Lopez, David
2015-01-01
This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative for modelling complex objects and scenarios in forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model. PMID:25793628
Comparing solar energy alternatives
NASA Astrophysics Data System (ADS)
White, J. R.
1984-03-01
This paper outlines a computational procedure for comparing the merits of alternative processes to convert solar radiation to heat, electrical power, or chemical energy. The procedure uses the ratio of equipment investment to useful work as an index. Comparisons with conversion counterparts based on conventional fuels are also facilitated by examining this index. The procedure is illustrated by comparisons of (1) photovoltaic converters of differing efficiencies; (2) photovoltaic converters with and without focusing concentrators; (3) photovoltaic conversion plus electrolysis vs photocatalysis for the production of hydrogen; (4) photovoltaic conversion plus plasma arcs vs photocatalysis for nitrogen fixation. Estimates for conventionally-fuelled processes are included for comparison. The reasons why solar-based concepts fare poorly in such comparisons are traced to the low energy density of solar radiation and its low stream time factor resulting from the limited number of daylight hours available and clouds obscuring the sun.
ERIC Educational Resources Information Center
Dennis, J. Richard; Thomson, David
This paper is concerned with a low-cost alternative for providing computer experience to secondary school students. The brief discussion covers the programmable calculator and its relevance for teaching the concepts and rudiments of computer programming and for computer problem solving. A list of twenty-five programming activities related to…
RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application
2015-01-01
Background The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). Moreover, the huge volume of data generated by NGS platforms introduces unprecedented computational and technological challenges to efficiently analyze and store sequence data and results. Methods In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to test for statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Results Through a user-friendly web interface, the RAP workflow can be suitably customized by the user and is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data, according to the user's needs. PMID:26046471
The data base management system alternative for computing in the human services.
Sircar, S; Schkade, L L; Schoech, D
1983-01-01
The traditional incremental approach to computerization presents substantial problems as systems develop and grow. The Data Base Management System approach to computerization was developed to overcome the problems resulting from implementing computer applications one at a time. The authors describe the applications approach and the alternative Data Base Management System (DBMS) approach through their developmental history, discuss the technology of DBMS components, and consider the implications of choosing the DBMS alternative. Human service managers need an understanding of the DBMS alternative and its applicability to their agency data processing needs. The basis for a conscious selection of computing alternatives is outlined.
NASA Astrophysics Data System (ADS)
Khazdozian, Helena; Hadimani, Ravi; Jiles, David
2014-03-01
The United States is currently dependent on fossil fuels for the majority of its energy needs, which has many negative consequences such as climate change. Wind turbines present a viable alternative, with the highest energy return on investment among even fossil fuel generation. Traditional commercial wind turbines use an induction generator for energy conversion. However, induction generators require a gearbox to increase the rotational speed of the drive shaft. These gearboxes increase the overall cost of the wind turbine and account for about 35 percent of reported wind turbine failures. Direct drive permanent magnet synchronous generators (PMSGs) offer an alternative to induction generators which eliminate the need for a gearbox. Yet, PMSGs can be more expensive than induction generators at large power output due to their size and weight. To increase the efficiency of PMSGs, the geometry and configuration of NdFeB permanent magnets were investigated using finite element techniques. The optimized design of the PMSG increases flux density and minimizes cogging torque with NdFeB permanent magnets of a reduced volume. These factors serve to increase the efficiency and reduce the overall cost of the PMSG. This work is supported by a National Science Foundation IGERT fellowship and the Barbara and James Palmer Endowment at the Department of Electrical and Computer Engineering of Iowa State University.
Efficient identification of context dependent subgroups of risk from genome wide association studies
Dyson, Greg; Sing, Charles F.
2014-01-01
We have developed a modified Patient Rule-Induction Method (PRIM) as an alternative strategy for analyzing representative samples of non-experimental human data to estimate and test the role of genomic variations as predictors of disease risk in etiologically heterogeneous sub-samples. A computational limit of the proposed strategy is encountered when the number of genomic variations (predictor variables) under study is large (> 500) because permutations are used to generate a null distribution to test the significance of a term (defined by values of particular variables) that characterizes a sub-sample of individuals through the peeling and pasting processes. As an alternative, in this paper we introduce a theoretical strategy that facilitates the quick calculation of Type I and Type II errors in the evaluation of terms in the peeling and pasting processes carried out in the execution of a PRIM analysis that are underestimated and non-existent, respectively, when a permutation-based hypothesis test is employed. The resultant savings in computational time makes possible the consideration of larger numbers of genomic variations (an example genome wide association study is given) in the selection of statistically significant terms in the formulation of PRIM prediction models. PMID:24570412
NASA Technical Reports Server (NTRS)
Yee, H. C.; Shinn, J. L.
1986-01-01
Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems that are higher than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
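A minimal sketch of the point-implicit idea for a stiff source term is given below on a scalar model equation: only the source Jacobian is treated implicitly, allowing time steps far beyond the explicit stability limit. The model problem and coefficients are illustrative, not the nonequilibrium flow equations discussed above.

```python
# Minimal sketch of a point-implicit (semi-implicit) update for an ODE with a
# stiff source term, du/dt = R(u) + S(u): only S is treated implicitly through
# its Jacobian.  The model problem and coefficients are illustrative.
import numpy as np

k = 1.0e4                          # stiff relaxation rate
def R(u):  return -0.5 * u         # mild part, treated explicitly
def S(u):  return -k * (u - 1.0)   # stiff source driving u toward 1
def dSdu(u): return -k             # source Jacobian

u, dt, t_end = 0.0, 1.0e-2, 0.1    # dt >> 1/k, far beyond the explicit stability limit
t = 0.0
while t < t_end:
    rhs = dt * (R(u) + S(u))
    du = rhs / (1.0 - dt * dSdu(u))   # (I - dt*dS/du) du = dt*(R + S)
    u += du
    t += dt

print(f"u({t_end}) ~ {u:.4f}  (relaxes stably toward the source equilibrium near 1)")
```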
An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.
Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua
2015-01-01
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, which is termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
NEAT: an efficient network enrichment analysis test.
Signorelli, Mirko; Vinciotti, Veronica; Wit, Ernst C
2016-09-05
Network enrichment analysis is a powerful method that integrates gene enrichment analysis with the information on relationships between genes provided by gene networks. Existing tests for network enrichment analysis deal only with undirected networks, can be computationally slow, and are based on normality assumptions. We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution, which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but also to directed and partially directed networks. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as that of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by inspecting associations between the functional sets themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN ( https://cran.r-project.org/package=neat ).
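The following sketch illustrates the kind of hypergeometric enrichment p-value that underlies such a test, using made-up link counts; it is a simplified rendering of the idea, not NEAT's exact parameterization, and real analyses should use the R package neat cited above.

```python
# Illustrative sketch of a hypergeometric enrichment p-value for the number of
# network links observed between a gene set A and a functional set B.  This is a
# simplified rendering of the idea behind NEAT, not its exact parameterization.
# Numbers are made up.
from scipy.stats import hypergeom

d_total = 5000   # total link endpoints in the network (hypothetical)
d_B     = 400    # endpoints falling in functional set B
d_A     = 150    # endpoints leaving gene set A
n_AB    = 25     # links from A observed to land in B

# P(X >= n_AB) when d_A endpoints are drawn without replacement from d_total,
# of which d_B are "successes".
p_value = hypergeom.sf(n_AB - 1, d_total, d_B, d_A)
print(f"enrichment p-value = {p_value:.3e}")
```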
20 CFR 404.232 - Computing your average monthly wage under the guaranteed alternative.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage under the... OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Guaranteed Alternative for People Reaching Age 62 After 1978 But Before 1984 § 404.232 Computing your average monthly...
Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R
2017-11-01
For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life, as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.
An FPGA computing demo core for space charge simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jinyuan; Huang, Yifei; /Fermilab
2009-01-01
In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time/resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
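A software sketch of the table-lookup idea is shown below: the inverse square-root cube of the squared distance is read from a table addressed by the leading portion of the argument and used to assemble the pairwise Coulomb force. The table size, argument range, and addressing scheme are illustrative assumptions rather than the FPGA core's exact fixed-point design.

```python
# Software sketch of the table-lookup idea: approximate f(s) = s**(-3/2), the
# "inverse square-root cube" of the squared distance s, with a precomputed table
# and compare against the exact Coulomb force.  Table size and range are
# illustrative choices, not the FPGA core's fixed-point layout.
import numpy as np

ADDR_BITS = 10                       # table addressed by ~10 bits
S_MIN, S_MAX = 0.25, 16.0            # assumed range of squared distances
edges = np.linspace(S_MIN, S_MAX, 2 ** ADDR_BITS + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
table = centers ** -1.5              # precomputed LUT contents

def inv_sqrt_cube_lut(s):
    idx = np.clip(np.searchsorted(edges, s) - 1, 0, len(table) - 1)
    return table[idx]

def coulomb_force(r_i, r_j, q_i=1.0, q_j=1.0):
    d = r_i - r_j
    s = np.dot(d, d)                                 # squared distance
    return q_i * q_j * d * inv_sqrt_cube_lut(s)      # q_i*q_j * d / |d|^3

r1, r2 = np.array([0.0, 0.0, 0.0]), np.array([1.2, 0.5, -0.3])
approx = coulomb_force(r1, r2)
exact = (r1 - r2) / np.linalg.norm(r1 - r2) ** 3
print("LUT force:", approx, " exact:", exact)
```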
Middleware for big data processing: test results
NASA Astrophysics Data System (ADS)
Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.
2017-12-01
Dealing with large volumes of data is resource-consuming work that is more and more often delegated not only to a single computer but to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when a particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware that works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.
Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd
2013-01-17
Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible; instead, they can be predicted by computational methods. The 3D profile is a physicochemical method that has generated the most numerous dataset, ZipperDB; however, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods, and a separate set of sequences from ZipperDB was used as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both methods obtained an area under the ROC curve of 0.96, accuracy of 91%, a true positive rate of about 78%, and a true negative rate of 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce error relative to the original method, while increasing the computational efficiency. Our new dataset proved representative enough to use simple statistical methods for testing amyloidogenicity based only on six-letter sequences. Statistical machine learning methods such as the Alternating Decision Tree and Multilayer Perceptron can replace the energy-based classifier, with the advantage of very significantly reduced computational time and simplicity of analysis. Additionally, a decision tree provides a set of easily interpretable rules.
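As a rough illustration of replacing the energy-based classifier with a machine-learning model trained on six-letter sequences, the sketch below one-hot encodes random hexapeptides and fits a plain decision tree (rather than the Alternating Decision Tree or Multilayer Perceptron used in the study); the peptides and labels are synthetic stand-ins, not the ZipperDB-derived dataset.

```python
# Minimal sketch of a sequence-only hexapeptide classifier.  The peptides and
# labels below are randomly generated stand-ins, not the ZipperDB-derived data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    v = np.zeros(6 * len(AA))
    for i, a in enumerate(seq):
        v[i * len(AA) + AA.index(a)] = 1.0
    return v

peptides = ["".join(rng.choice(list(AA), 6)) for _ in range(1000)]
# Synthetic rule standing in for the 3D-profile energy label (illustration only)
labels = [int(sum(a in "VILFWY" for a in p) >= 3) for p in peptides]

X = np.array([one_hot(p) for p in peptides])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on synthetic data:", clf.score(X_te, y_te))
```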
Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™
Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.
2016-01-01
We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591
Bruno Garza, J L; Young, J G
2015-01-01
Extended use of conventional computer input devices is associated with negative musculoskeletal outcomes. While many alternative designs have been proposed, it is unclear whether these devices reduce biomechanical loading and musculoskeletal outcomes. This review examines studies describing and evaluating the biomechanical loading and musculoskeletal outcomes associated with conventional and alternative input devices. Included studies evaluated biomechanical loading and/or musculoskeletal outcomes of users' distal or proximal upper extremity regions associated with the operation of alternative input devices (pointing devices, mice, other devices) that could be used in a desktop personal computing environment during typical office work. Some alternative pointing device designs (e.g. rollerbar) were consistently associated with decreased biomechanical loading, while other designs had inconsistent results across studies. Most alternative keyboards evaluated in the literature reduce biomechanical loading and musculoskeletal outcomes. Studies of other input devices (e.g. touchscreen and gestural controls) were rare; however, those reported to date indicate that these devices are currently unsuitable as replacements for traditional devices. Alternative input devices that reduce biomechanical loading may be better choices for preventing or alleviating musculoskeletal outcomes during computer use; however, it is unclear whether many existing designs are effective.
NASA Astrophysics Data System (ADS)
Loring, B.; Karimabadi, H.; Rortershteyn, V.
2015-10-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
Smart Grid Privacy through Distributed Trust
NASA Astrophysics Data System (ADS)
Lipton, Benjamin
Though the smart electrical grid promises many advantages in efficiency and reliability, the risks to consumer privacy have impeded its deployment. Researchers have proposed protecting privacy by aggregating user data before it reaches the utility, using techniques of homomorphic encryption to prevent exposure of unaggregated values. However, such schemes generally require users to trust in the correct operation of a single aggregation server. We propose two alternative systems based on secret sharing techniques that distribute this trust among multiple service providers, protecting user privacy against a misbehaving server. We also provide an extensive evaluation of the systems considered, comparing their robustness to privacy compromise, error handling, computational performance, and data transmission costs. We conclude that while all the systems should be computationally feasible on smart meters, the two methods based on secret sharing require much less computation while also providing better protection against corrupted aggregators. Building systems using these techniques could help defend the privacy of electricity customers, as well as customers of other utilities as they move to a more data-driven architecture.
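A minimal sketch of the underlying additive secret-sharing idea, assuming two non-colluding aggregation servers and made-up meter readings, is shown below; it illustrates the general technique discussed above rather than either of the paper's specific protocols.

```python
# Minimal sketch of additive secret sharing for meter aggregation, assuming two
# (or more) non-colluding aggregation servers.  Readings are made up.
import secrets

PRIME = 2 ** 61 - 1          # arithmetic is done modulo a public prime

def share(reading, n_servers):
    """Split a reading into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((reading - sum(shares)) % PRIME)
    return shares

def aggregate(per_server_shares):
    """Each server sums its own shares; the utility adds the server totals."""
    server_totals = [sum(col) % PRIME for col in per_server_shares]
    return sum(server_totals) % PRIME

readings = [312, 587, 104, 998]              # Wh, one per household (made up)
n_servers = 2
per_server = [[] for _ in range(n_servers)]
for r in readings:
    for server_id, s in enumerate(share(r, n_servers)):
        per_server[server_id].append(s)      # no server ever sees a raw reading

print("recovered total:", aggregate(per_server), " true total:", sum(readings))
```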
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim
2014-07-01
The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
Parallel compression/decompression-based datapath architecture for multibeam mask writers
NASA Astrophysics Data System (ADS)
Chaudhary, Narendra; Savari, Serap A.
2017-06-01
Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's Law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time we propose an alternate datapath architecture partly motivated by multibeam direct write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.
Parallel compression/decompression-based datapath architecture for multibeam mask writers
NASA Astrophysics Data System (ADS)
Chaudhary, Narendra; Savari, Serap A.
2017-10-01
Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time, we propose an alternate datapath architecture partly motivated by multibeam direct-write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.
Neighbour lists for smoothed particle hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang
2018-04-01
The efficient iteration of neighbouring particles is a performance-critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, the latter denoted as a Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware-dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
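The sketch below illustrates, under simplifying assumptions (2-D, unit box, cutoff equal to the cell size), how per-particle Verlet lists can be assembled from a uniform grid; it is not the DualSPHysics GPU implementation.

```python
# Minimal sketch of building per-particle Verlet lists from a uniform grid.
# 2-D, constant smoothing length h; all values are illustrative.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
h = 0.1                                   # smoothing length = cell size = cutoff
pos = rng.uniform(0.0, 1.0, size=(500, 2))

# 1) cell linked list: map each grid cell to the particles it contains
cells = defaultdict(list)
for i, p in enumerate(pos):
    cells[(int(p[0] // h), int(p[1] // h))].append(i)

# 2) Verlet list: for each particle, gather neighbours from the 3x3 cell block
verlet = [[] for _ in range(len(pos))]
for i, p in enumerate(pos):
    cx, cy = int(p[0] // h), int(p[1] // h)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), []):
                if j != i and np.sum((pos[j] - p) ** 2) < h * h:
                    verlet[i].append(j)

print("mean neighbours per particle:", sum(map(len, verlet)) / len(pos))
```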
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations become large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix of a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into a corresponding 2D pattern. Then, each pattern is encoded into a phase-only mask by means of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When only a small number of phase-only masks is used to record the measured intensities in a single-pixel bucket detector, the information can still be authenticated, without clear visualization, by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which provides an effective alternative for enriching research on the computational ghost imaging technique.
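A small numerical sketch of Hadamard-pattern computational ghost imaging is given below: each pattern yields one bucket intensity, and the image is recovered from a pattern-weighted sum thanks to the orthogonality of the Hadamard rows. The phase-retrieval encoding into phase-only masks is omitted, and signed patterns are used directly, so this is a deliberately simplified stand-in for the proposed scheme.

```python
# Minimal numerical sketch of Hadamard-pattern computational ghost imaging:
# each pattern is displayed, a single "bucket" value records the total
# transmitted intensity, and the image is recovered from a pattern-weighted sum.
# The phase-only mask encoding is not reproduced; +/-1 patterns are used directly.
import numpy as np
from scipy.linalg import hadamard

side = 16                                  # 16 x 16 test object
N = side * side
H = hadamard(N)                            # rows = illumination patterns

# Synthetic object: a bright square on a dark background
obj = np.zeros((side, side))
obj[4:12, 6:10] = 1.0
x = obj.ravel()

bucket = H @ x                             # one single-pixel measurement per pattern
recon = (H.T @ bucket) / N                 # orthogonality: H^T H = N * I

print("max reconstruction error:", np.max(np.abs(recon - x)))
```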
Monte Carlo sampling in diffusive dynamical systems
NASA Astrophysics Data System (ADS)
Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.
2018-05-01
We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
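The sketch below illustrates the sampling idea on a toy one-dimensional lifted map: initial conditions are explored with a local (correlated) Metropolis-Hastings proposal under a target that exponentially favors large displacements, and unbiased averages are recovered by reweighting. The map, the bias strength, and the proposal width are arbitrary illustrative choices and are not the box map or Lorentz gas studied in the paper.

```python
# Minimal sketch of Metropolis-Hastings importance sampling over initial
# conditions, biasing toward trajectories with large displacement.  The toy
# lifted map and all parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
T, beta, sigma = 50, 2.0, 0.05

def displacement(x0, a=0.9, steps=T):
    """Iterate a toy lifted map x -> x + a*sin(2*pi*x); return |x_T - x_0|."""
    x = x0
    for _ in range(steps):
        x = x + a * np.sin(2.0 * np.pi * x)
    return abs(x - x0)

# Target over initial conditions in [0,1): pi(x0) ~ exp(beta * Delta(x0)),
# which favours the tail of the displacement distribution.
x0 = rng.uniform()
d0 = displacement(x0)
samples = []
for _ in range(5000):
    prop = (x0 + sigma * rng.normal()) % 1.0          # local, correlated proposal
    d_prop = displacement(prop)
    if np.log(rng.uniform()) < beta * (d_prop - d0):  # Metropolis-Hastings accept
        x0, d0 = prop, d_prop
    samples.append(d0)

# Averages under the unbiased dynamics are recovered by reweighting each sample
# with exp(-beta * Delta), the reciprocal of the bias.
samples = np.array(samples)
w = np.exp(-beta * samples)
print("biased mean displacement:    ", samples.mean())
print("reweighted mean displacement:", np.sum(w * samples) / np.sum(w))
```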
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed provides software tools for acquiring and analyzing images received through the CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
Calculation of transmission probability by solving an eigenvalue problem
NASA Astrophysics Data System (ADS)
Bubin, Sergiy; Varga, Kálmán
2010-11-01
The electron transmission probability in nanodevices is calculated by solving an eigenvalue problem. The eigenvalues are the transmission probabilities and the number of nonzero eigenvalues is equal to the number of open quantum transmission eigenchannels. The number of open eigenchannels is typically a few dozen at most, thus the computational cost amounts to the calculation of a few outer eigenvalues of a complex Hermitian matrix (the transmission matrix). The method is implemented on a real space grid basis providing an alternative to localized atomic orbital based quantum transport calculations. Numerical examples are presented to illustrate the efficiency of the method.
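A small sketch, under the assumption that the transmission matrix is available as a complex Hermitian array, showing that only a few outer eigenvalues need to be computed; the matrix below is synthetic.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Illustrative sketch: if T is a complex Hermitian "transmission matrix" whose
# nonzero eigenvalues are the channel transmission probabilities, an iterative
# eigensolver only needs to extract the few outer eigenvalues.
rng = np.random.default_rng(2)
n, n_open = 2000, 12                      # hypothetical grid size and open channels

# Build a Hermitian test matrix of rank n_open with eigenvalues in (0, 1].
V = np.linalg.qr(rng.normal(size=(n, n_open))
                 + 1j * rng.normal(size=(n, n_open)))[0]
probs = rng.uniform(0.1, 1.0, n_open)
T = (V * probs) @ V.conj().T

# Only the largest few eigenvalues (the open channels) are computed.
transmission = eigsh(T, k=n_open, which='LM', return_eigenvectors=False)
print(np.sort(transmission)[::-1])
print("total transmission:", transmission.sum())
```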
NASA Technical Reports Server (NTRS)
1976-01-01
The primary objective of this study was to develop an integrated approach for the development, implementation, and utilization of all software that is required to efficiently and cost-effectively support advanced technology laboratory flight and ground operations. It was recognized that certain aspects of the operations would be mandatory computerized services; computerization of other aspects would be optional. Thus, the analyses encompassed not only alternate computer utilization and implementations but trade studies of the programmatic effects of non-computerized versus computerized approaches to the operations. A general overview of the study is presented.
The Efficiency of Higher Education Institutions in England Revisited: Comparing Alternative Measures
ERIC Educational Resources Information Center
Johnes, Geraint; Tone, Kaoru
2017-01-01
Data envelopment analysis (DEA) has often been used to evaluate efficiency in the context of higher education institutions. Yet there are numerous alternative non-parametric measures of efficiency available. This paper compares efficiency scores obtained for institutions of higher education in England, 2013-2014, using three different methods: the…
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Comparison of alternative designs for reducing complex neurons to equivalent cables.
Burke, R E
2000-01-01
Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
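A hedged sketch of numerical Laplace-transform inversion using the Gaver-Stehfest algorithm (one of several inversion schemes; not necessarily the one used in the paper), applied to a transform with a known inverse rather than a curing-curve transform.

```python
import math
import numpy as np

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return np.array(V)

def invert_laplace(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    k = np.arange(1, N + 1)
    return math.log(2) / t * np.dot(stehfest_weights(N), F(k * math.log(2) / t))

# Example with a known inverse: F(s) = 1/(s + 1) inverts to exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```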
NASA Astrophysics Data System (ADS)
Das, A.; Bang, H. S.; Bang, H. S.
2018-05-01
Multi-material combinations of aluminium alloy and carbon-fiber-reinforced plastics (CFRP) have gained attention in the automotive and aerospace industries as a way to enhance the fuel efficiency and strength-to-weight ratio of components. Various limitations of laser beam welding, adhesive bonding and mechanical fasteners make these processes inefficient for joining metal and CFRP sheets. Friction lap joining is an alternative choice for this purpose. Comprehensive studies of friction lap joining of aluminium to CFRP sheets are essential yet scarce in the literature. The present work reports a combined theoretical and experimental study of the joining of AA5052 and CFRP sheets using the friction lap joining process. A three-dimensional finite-element-based heat transfer model is developed to compute the temperature fields and thermal cycles. The computed results are validated extensively against the corresponding experimentally measured results.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; ...
2016-01-19
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. As a result, the developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
An automated method for detecting alternatively spliced protein domains.
Coelho, Vitor; Sammeth, Michael
2018-06-01
Alternative splicing (AS) has been demonstrated to play a role in shaping eukaryotic gene diversity at the transcriptional level. However, the impact of AS on the proteome is still controversial. Studies that seek to explore the effect of AS at the proteomic level are hampered by technical difficulties in the cumbersome process of converting back and forth between genome, transcriptome and proteome coordinate spaces, and naïve prediction of protein domains in the presence of AS suffers from many redundant sequence scans that emerge from constitutively spliced regions shared between alternative products of a gene. We developed the AstaFunk pipeline, which computes for any given transcriptome all domains that are altered by AS events in a systematic and efficient manner. In a nutshell, our method employs Viterbi dynamic programming, which guarantees finding all score-optimal hits of the domains under consideration, while complementary optimisations at different levels avoid redundant and other irrelevant computations. We evaluate AstaFunk qualitatively and quantitatively using RNAseq in well-studied genes with AS, and on a large scale employing entire transcriptomes. Our study confirms complementary reports that the effect of most AS events on the proteome seems to be rather limited, but our results also pinpoint several cases where AS could have a major impact on the function of a protein domain. The JAVA implementation of AstaFunk is available as an open source project on http://astafunk.sammeth.net. micha@sammeth.net. Supplementary data are available at Bioinformatics online.
Analyzing big data with the hybrid interval regression methods.
Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying
2014-01-01
Big data is a current trend with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is unclear.
APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musson, John C.; Seaton, Chad; Spata, Mike F.
2012-11-01
Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
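A minimal sketch of a one-hidden-layer perceptron with a sigmoid activation layer learning to undo a made-up nonlinear pickup distortion; the distortion model, layer sizes and training settings are assumptions for illustration, and no FPGA or CORDIC details are shown.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic "true" beam positions and a toy nonlinear pickup distortion.
pos = rng.uniform(-1, 1, size=(5000, 2))
raw = np.column_stack([pos[:, 0] + 0.4 * pos[:, 0] ** 3 + 0.2 * pos[:, 0] * pos[:, 1] ** 2,
                       pos[:, 1] + 0.4 * pos[:, 1] ** 3 + 0.2 * pos[:, 1] * pos[:, 0] ** 2])

n_hidden, lr = 16, 0.05
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 2)); b2 = np.zeros(2)

for epoch in range(3000):                      # supervised learning of the weights
    H = sigmoid(raw @ W1 + b1)                 # nonlinear activation layer
    out = H @ W2 + b2                          # linear output layer
    dOut = 2.0 * (out - pos) / len(raw)        # d(MSE)/d(out)
    dW2, db2 = H.T @ dOut, dOut.sum(axis=0)
    dZ = (dOut @ W2.T) * H * (1.0 - H)         # back-propagate through the sigmoid
    dW1, db1 = raw.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

pred = sigmoid(raw @ W1 + b1) @ W2 + b2
print("rms position error after correction:", np.sqrt(np.mean((pred - pos) ** 2)))
```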
Carbon emissions from U.S. ethylene production under climate change policies.
Ruth, Matthias; Amato, Anthony D; Davidsdottir, Brynhildur
2002-01-15
This paper presents the results from a dynamic computer model of U.S. ethylene production, designed to explore the implications of alternative climate change policies for the industry's energy use and carbon emissions profiles. The model applies to the aggregate ethylene industry but distinguishes its main cracker types, fuels used as feedstocks and for process energy, as well as the industry's capital vintage structure and vintage-specific efficiencies. Results indicate that policies which increase the carbon cost of process energy, such as carbon taxes or carbon permit systems, are relatively blunt instruments for cutting carbon emissions from ethylene production. In contrast, policies directly affecting the relative efficiencies of new versus old capital, such as R&D stimuli or accelerated depreciation schedules, may be more effective in leveraging the industry's potential for carbon emissions reductions.
Jin, Guangxu; Wong, Stephen T C
2014-05-01
Recycling old drugs, rescuing shelved drugs and extending patents' lives make drug repositioning an attractive form of drug discovery. Drug repositioning accounts for approximately 30% of the newly US Food and Drug Administration (FDA)-approved drugs and vaccines in recent years. The prevalence of drug-repositioning studies has resulted in a variety of innovative computational methods for the identification of new opportunities for the use of old drugs. Questions often arise from customizing or optimizing these methods into efficient drug-repositioning pipelines for alternative applications. It requires a comprehensive understanding of the available methods gained by evaluating both biological and pharmaceutical knowledge and the elucidated mechanism-of-action of drugs. Here, we provide guidance for prioritizing and integrating drug-repositioning methods for specific drug-repositioning pipelines. Copyright © 2013 Elsevier Ltd. All rights reserved.
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
Multithreaded implicitly dealiased convolutions
NASA Astrophysics Data System (ADS)
Roberts, Malcolm; Bowman, John C.
2018-03-01
Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
A Real-Time Localization System for an Endoscopic Capsule Using Magnetic Sensors †
Pham, Duc Minh; Aziz, Syed Mahfuzul
2014-01-01
Magnetic sensing technology offers an attractive alternative for in vivo tracking with much better performance than RF and ultrasound technologies. In this paper, an efficient in vivo magnetic tracking system is presented. The proposed system is intended to localize an endoscopic capsule which delivers biomarkers around specific locations of the gastrointestinal (GI) tract. For efficiently localizing a magnetic marker inside the capsule, a mathematical model has been developed for the magnetic field around a cylindrical magnet and used with a localization algorithm that provides minimum error and fast computation. The proposed tracking system has much reduced complexity compared to the ones reported in the literature to date. Laboratory tests and in vivo animal trials have demonstrated the suitability of the proposed system for tracking a magnetic marker with expected accuracy. PMID:25379813
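A small illustrative sketch of magnet localization by nonlinear least squares, using a point-dipole field model rather than the paper's cylindrical-magnet model; the sensor layout, magnetic moment and noise level are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu_0 / (4*pi)

def dipole_field(sensor_pos, magnet_pos, moment):
    """Point-dipole magnetic field at each sensor position."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / d
    return MU0_4PI * (3 * r_hat * np.sum(r_hat * moment, axis=1, keepdims=True)
                      - moment) / d ** 3

# Hypothetical 3x3 sensor array (metres) and ground-truth magnet state.
sensors = np.array([[x, y, 0.0] for x in (-0.1, 0.0, 0.1) for y in (-0.1, 0.0, 0.1)])
true_pos, true_m = np.array([0.02, -0.03, 0.12]), np.array([0.0, 0.0, 0.5])
readings = dipole_field(sensors, true_pos, true_m)
readings += np.random.default_rng(4).normal(scale=1e-9, size=readings.shape)  # sensor noise

def residuals(params):
    pos, moment = params[:3], params[3:]
    return (dipole_field(sensors, pos, moment) - readings).ravel()

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.10, 0.0, 0.0, 0.3]))
print("estimated position (m):", fit.x[:3])
```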
NASA Technical Reports Server (NTRS)
Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.
1989-01-01
Under a contract with NASA's Jet Propulsion Laboratory, Martin Marietta has developed several alternative rover concepts for unmanned exploration of the planet Mars. One of those concepts, the 'Walking Beam', is the subject of this paper. This concept was developed with the goal of achieving many of the capabilities of more sophisticated articulated-leg walkers with a much simpler, more robust, less computationally demanding and more power-efficient design. It consists of two large-base tripods nested one within the other which alternately translate with respect to each other along a 5-meter beam to propel the vehicle. The semiautonomous navigation system relies on terrain geometry sensors and tactile feedback from each foot to autonomously select a path which avoids hazards along a route designated from Earth. Both mobility and navigation features of this concept are discussed including a top-level description of the vehicle's physical characteristics, deployment strategy, mobility elements, sensor suite, theory of operation, navigation and control processes, and estimated performance.
Ocean outfalls as an alternative to minimizing risks to human and environmental health.
Feitosa, Renato Castiglia
2017-06-01
Submarine outfalls are proposed as an efficient alternative for the final destination of wastewater in densely populated coastal areas, due to the high dispersal capacity and clearance of organic matter in the marine environment, and because they require small areas for implementation. This paper evaluates the probability of unsuitable bathing conditions in coastal areas near the Ipanema, Barra da Tijuca and Icaraí outfalls, based on a computational methodology combining hydrodynamic, pollutant transport, and bacterial decay modelling. The results show a strong influence on coliform concentration of solar radiation and of all factors that mitigate its levels in the marine environment. The aforementioned outfalls do not pollute the coastal areas, and unsuitable bathing conditions are restricted to the vicinity of the effluent discharge points. The pollution observed at the beaches indicates that the contamination arises from the polluted estuarine systems, rivers and canals that flow to the coast.
Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.
Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko
2010-06-28
We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. Unlike the evaluation of the energy, the efficient evaluation of the analytical gradient in the GFC is not straightforward, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed as an alternative, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions as done in the GFC. To minimize the error in the gradient, not just in the energy, the derived functions of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains the near-linear scaling of the GFC and does not affect the overall efficiency of the GFC approach.
Single-Sex Computer Classes: An Effective Alternative.
ERIC Educational Resources Information Center
Swain, Sandra L.; Harvey, Douglas M.
2002-01-01
Advocates single-sex computer instruction as a temporary alternative educational program to provide middle school and secondary school girls with access to computers, to present girls with opportunities to develop positive attitudes towards technology, and to make available a learning environment conducive to girls gaining technological skills.…
Solving multistage stochastic programming models of portfolio selection with outstanding liabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edirisinghe, C.
1994-12-31
Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surpluses arising from trading are maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posing as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete time economies. We point out an important moment problem arising in the portfolio selection problem, the solution (or bounds) on which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.
FCA Group LLC request to the EPA regarding greenhouse gas, off-cycle CO2 credits for High Efficiency Alternators used on 2009 and subsequent model year vehicles and off-cycle fuel consumption credits for 2017 and subsequent model year vehicles.
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
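A minimal sketch of the spectrum (k-mer count) representation and a nearest-neighbour assignment, using made-up sequences; the actual methods operate on curated barcode datasets and more refined classifiers.

```python
import numpy as np
from itertools import product

def spectrum(seq, k=4):
    """Fixed-length k-mer frequency vector ("spectrum") of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        j = index.get(seq[i:i + k])
        if j is not None:              # skip k-mers with ambiguous bases
            v[j] += 1
    return v / max(v.sum(), 1)         # normalize to frequencies

# Made-up reference barcodes and an unknown specimen.
reference = {"species_A": "ACGTACGTTGCA" * 20, "species_B": "TTGCAAGGCTAA" * 20}
ref_vecs = {name: spectrum(s) for name, s in reference.items()}

query = "ACGTACGTTGCA" * 18 + "ACGTACGTACGT"
q = spectrum(query)
best = min(ref_vecs, key=lambda name: np.linalg.norm(q - ref_vecs[name]))
print("assigned to:", best)
```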
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its probability. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE gradually searches the parameter space from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions required in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
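For orientation, a bare-bones nested sampling estimator of the marginal likelihood for a toy two-dimensional problem, using a plain random-walk Metropolis step for the constrained local sampling (the paper replaces that step with DREAMzs and adds a surrogate model); the final live-point contribution to the evidence is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
half_width, sigma, n_live, n_iter = 5.0, 0.5, 200, 3000

def log_like(theta):                      # toy 2-D Gaussian likelihood
    return -0.5 * np.sum(theta ** 2) / sigma ** 2 - np.log(2 * np.pi * sigma ** 2)

def sample_constrained(start, log_l_min, n_steps=30):
    """Random-walk moves accepted only inside the prior box and above the threshold."""
    theta = start
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=0.5, size=2)
        if np.all(np.abs(prop) < half_width) and log_like(prop) > log_l_min:
            theta = prop
    return theta

live = rng.uniform(-half_width, half_width, size=(n_live, 2))   # uniform prior
live_logl = np.array([log_like(t) for t in live])

log_z, log_x_prev = -np.inf, 0.0
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logl)
    log_x = -i / n_live                                   # expected log prior volume
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))    # shell weight
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    log_x_prev = log_x
    seed = live[rng.choice(np.delete(np.arange(n_live), worst))].copy()
    live[worst] = sample_constrained(seed, live_logl[worst])
    live_logl[worst] = log_like(live[worst])

# Exact Z is ~1/(2*half_width)^2 because the Gaussian mass lies inside the box.
print("estimated log-evidence:", log_z, "exact:", np.log(1.0 / (2 * half_width) ** 2))
```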
NASA Astrophysics Data System (ADS)
Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
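A toy tau-leaping sketch for a two-species reaction network, intended only to illustrate advancing the state in large time increments with Poisson-distributed firing counts; the reactions and rate constants are invented and unrelated to the SCD defect networks.

```python
import numpy as np

# Toy network: production of monomers A and pairwise combination into clusters B.
rng = np.random.default_rng(6)
k_source, k_dimer = 50.0, 0.002          # hypothetical rate constants
A, B, t, tau, t_end = 0, 0, 0.0, 0.05, 20.0

while t < t_end:
    rates = np.array([k_source,                       # 0 -> A
                      k_dimer * A * (A - 1) / 2.0])   # A + A -> B
    firings = rng.poisson(rates * tau)                # leap over many events at once
    # Limit the dimerization firings so populations never go negative.
    firings[1] = min(firings[1], A // 2)
    A += firings[0] - 2 * firings[1]
    B += firings[1]
    t += tau

print(f"t = {t:.1f}: monomers A = {A}, clusters B = {B}")
```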
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.
Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A
2014-12-01
The strong advent of computer-assisted technologies experienced by the modern orthopedic surgery prompts for the expansion of computationally efficient techniques to be built on the broad base of computer-aided engineering tools that are readily available. However, one of the common challenges faced during the current developmental phase continues to remain the lack of reliable frameworks to allow a fast and precise conversion of the anatomical information acquired through computer tomography to a format that is acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable to extract and then postprocess the original imaging data to a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computer tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry that were obtained through different-contact-based-data acquisition or data processing methods. © IMechE 2014.
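A minimal sketch of fitting a closed (periodic) smoothing B-spline to a noisy planar contour with SciPy's generic routines, as a stand-in for the paper's control-polygon deformation technique; the contour and noise level are synthetic.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour (mm) standing in for a CT-extracted bone slice.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
radius = 20 + 2 * np.sin(5 * theta)
points = np.vstack([radius * np.cos(theta), radius * np.sin(theta)])
points += np.random.default_rng(7).normal(scale=0.05, size=points.shape)  # imaging noise

# per=True closes the curve; s controls the smoothing/approximation trade-off.
tck, u = splprep(points, s=len(theta) * 0.05 ** 2, per=True)

dense = np.array(splev(np.linspace(0, 1, 1000), tck))
true_r = 20 + 2 * np.sin(5 * np.arctan2(dense[1], dense[0]))
print("max radial deviation (mm): %.3f"
      % np.abs(np.hypot(dense[0], dense[1]) - true_r).max())
```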
ERIC Educational Resources Information Center
Miller, Elizabeth R.
2013-01-01
Alternative schools educate students who have previously been unsuccessful in the traditional school setting. Many alternative school students are behind on high school credits, and the schools provide options for credit recovery. Computer-assisted instruction is often used for this purpose. Using case study methodology and a critical theoretical…
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism that affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel, efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
High-fidelity spin measurement on the nitrogen-vacancy center
NASA Astrophysics Data System (ADS)
Hanks, Michael; Trupke, Michael; Schmiedmayer, Jörg; Munro, William J.; Nemoto, Kae
2017-10-01
Nitrogen-vacancy (NV) centers in diamond are versatile candidates for many quantum information processing tasks, ranging from quantum imaging and sensing through to quantum communication and fault-tolerant quantum computers. Critical to almost every potential application is an efficient mechanism for the high fidelity readout of the state of the electronic and nuclear spins. Typically such readout has been achieved through an optically resonant fluorescence measurement, but the presence of decay through a meta-stable state will limit its efficiency to the order of 99%. While this is good enough for many applications, it is insufficient for large scale quantum networks and fault-tolerant computational tasks. Here we explore an alternative approach based on dipole induced transparency (state-dependent reflection) in an NV center cavity QED system, using the most recent knowledge of the NV center’s parameters to determine its feasibility, including the decay channels through the meta-stable subspace and photon ionization. We find that single-shot measurements above fault-tolerant thresholds should be available in the strong coupling regime for a wide range of cavity-center cooperativities, using a majority voting approach utilizing single photon detection. Furthermore, extremely high fidelity measurements are possible using weak optical pulses.
PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD
NASA Technical Reports Server (NTRS)
Suhs, Norman E.; Rogers, Stuart E.; Dietz, William E.; Kwak, Dochan (Technical Monitor)
2002-01-01
An all new, automated version of the PEGASUS software has been developed and tested. PEGASUS provides the hole-cutting and connectivity information between overlapping grids, and is used as the final part of the grid generation process for overset-grid computational fluid dynamics approaches. The new PEGASUS code (Version 5) has many new features: automated hole cutting; a projection scheme for fixing gaps in overset surfaces; more efficient interpolation search methods using an alternating digital tree; hole-size optimization based on adding additional layers of fringe points; and an automatic restart capability. The new code has also been parallelized using the Message Passing Interface standard. The parallelization performance provides efficient speed-up of the execution time by an order of magnitude, and up to a factor of 30 for very large problems. The results of three example cases are presented: a three-element high-lift airfoil, a generic business jet configuration, and a complete Boeing 777-200 aircraft in a high-lift landing configuration. Comparisons of the computed flow fields for the airfoil and 777 test cases between the old and new versions of the PEGASUS codes show excellent agreement with each other and with experimental results.
Background recovery via motion-based robust principal component analysis with matrix factorization
NASA Astrophysics Data System (ADS)
Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping
2018-03-01
Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
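For context, a baseline RPCA sketch solved with an inexact augmented Lagrangian scheme (nuclear norm plus l1); the paper's FM-RPCA additionally uses motion information and a matrix factorization to avoid full SVDs, neither of which is shown here.

```python
import numpy as np

def shrink(X, tau):                      # elementwise soft-thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):                  # singular-value soft-thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L (background) plus sparse S (foreground)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)
    S, Y = np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(residual) / norm_M < tol:
            break
    return L, S

# Synthetic test: low-rank "background" plus sparse "foreground" corruption.
rng = np.random.default_rng(8)
low_rank = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80))
sparse = np.zeros((100, 80))
mask = rng.random(sparse.shape) < 0.05
sparse[mask] = rng.uniform(-10, 10, mask.sum())
L, S = rpca(low_rank + sparse)
print("background recovery error:",
      np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))
```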
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, Dustin Yewell
Echo™ is a MATLAB-based software package designed for robust and scalable analysis of complex data workflows. An alternative to tedious, error-prone conventional processes, Echo is based on three transformative principles for data analysis: self-describing data, name-based indexing, and dynamic resource allocation. The software takes an object-oriented approach to data analysis, intimately connecting measurement data with associated metadata. Echo operations in an analysis workflow automatically track and merge metadata and computation parameters to provide a complete history of the process used to generate final results, while automated figure and report generation tools eliminate the potential to mislabel those results. History reporting and visualization methods provide straightforward auditability of analysis processes. Furthermore, name-based indexing on metadata greatly improves code readability for analyst collaboration and reduces opportunities for errors to occur. Echo efficiently manages large data sets using a framework that seamlessly allocates resources such that only the necessary computations to produce a given result are executed. Echo provides a versatile and extensible framework, allowing advanced users to add their own tools and data classes tailored to their own specific needs. Applying these transformative principles and powerful features, Echo greatly improves analyst efficiency and quality of results in many application areas.
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
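A universal-kriging-style sketch of a Gaussian process whose mean function is a polynomial trend; note that an ordinary quadratic basis is used here, not the paper's nested composition of two polynomials, and the kernel hyperparameters are fixed by hand.

```python
import numpy as np

def kernel(a, b, length=0.3, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def poly_basis(x, degree=2):
    return np.vander(x, degree + 1, increasing=True)    # columns [1, x, x^2]

rng = np.random.default_rng(9)
f = lambda x: np.sin(6 * x) + 2.0 * x ** 2               # stand-in "expensive code"
x_train = rng.uniform(0, 1, 12)
y_train = f(x_train)

K = kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))   # jitter for stability
F = poly_basis(x_train)
K_inv_y = np.linalg.solve(K, y_train)
K_inv_F = np.linalg.solve(K, F)
beta = np.linalg.solve(F.T @ K_inv_F, F.T @ K_inv_y)     # GLS trend coefficients

x_test = np.linspace(0, 1, 200)
pred = (poly_basis(x_test) @ beta
        + kernel(x_test, x_train) @ np.linalg.solve(K, y_train - F @ beta))
print("max abs prediction error:", np.abs(pred - f(x_test)).max())
```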
Flow through three-dimensional arrangements of cylinders with alternating streamwise planar tilt
NASA Astrophysics Data System (ADS)
Sahraoui, M.; Marshall, H.; Kaviany, M.
1993-09-01
In this report, fluid flow through a three-dimensional model of fibrous filters is examined. In this model, the three-dimensional Stokes equation with the appropriate periodic boundary conditions is solved using the finite volume method. In addition to the numerical solution, we attempt to model this flow analytically by using the two-dimensional extended analytic solution in each of the unit cells of the three-dimensional structure. Particle trajectories computed using the superimposed analytic solution of the flow field are close to those computed using the numerical solution of the flow field. The numerical results show that the pressure drop is not affected significantly by the relative angle of rotation of the cylinders for the high porosities used in this study (epsilon = 0.8 and epsilon = 0.95). The numerical solution and the superimposed analytic solution are also compared in terms of the particle capture efficiency. The results show that the efficiency predictions using the two methods are within 10% for St = 0.01 and 5% for St = 100. As the porosity decreases, the three-dimensional effect becomes more significant and a difference of 35% is obtained for epsilon = 0.8.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-31
... Determination Methods and Alternative Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy... proposing to revise and expand its existing regulations governing the use of particular methods as...- TP-0024, by any of the following methods: Email: to AED/[email protected] . Include EERE...
Alternative Fuels DISI Engine Research - Autoignition Metrics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjoberg, Carl Magnus Goran; Vuilleumier, David
Improved engine efficiency is required to comply with future fuel economy standards. Alternative fuels have the potential to enable more efficient engines while addressing concerns about energy security. This project contributes to the science base needed by industry to develop highly efficient direct injection spark ignition (DISI) engines that also beneficially exploit the different properties of alternative fuels. Here, the emphasis is on quantifying autoignition behavior for a range of spark-ignited engine conditions, including directly injected boosted conditions. The efficiency of stoichiometrically operated spark ignition engines is often limited by fuel-oxidizer end-gas autoignition, which can result in engine knock. A fuel's knock resistance is assessed empirically by the Research Octane Number (RON) and Motor Octane Number (MON) tests. By clarifying how these two tests relate to the autoignition behavior of conventional and alternative fuel formulations, fuel design guidelines for enhanced engine efficiency can be developed.
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.
Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng
2017-06-01
Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role in the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Aiming to achieve the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those of existing state-of-the-art models reported in the literature. In regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity of the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
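A highly simplified illustration of a sliding-window max-min difference with a single threshold on a synthetic signal; it omits the paper's baseline correction, adaptive thresholding and error-correction stages, and all settings are assumptions.

```python
import numpy as np

fs = 360                                         # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)          # baseline wander
for beat in np.arange(0.5, 10, 0.8):             # synthetic R-peaks every 0.8 s
    ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))
ecg += 0.02 * np.random.default_rng(10).normal(size=t.size)

win = int(0.1 * fs)                              # ~100 ms sliding window
mmd = np.array([ecg[i:i + win].max() - ecg[i:i + win].min()
                for i in range(len(ecg) - win)])
threshold = 0.5 * mmd.max()                      # single global threshold (the paper's is dynamic)

r_peaks, refractory, i = [], int(0.25 * fs), 0
while i < len(mmd):
    if mmd[i] > threshold:
        r_peaks.append(i + int(np.argmax(ecg[i:i + win])))  # locate R inside the window
        i += refractory                                     # skip the refractory period
    else:
        i += 1

print("detected beats:", len(r_peaks), "expected:", len(np.arange(0.5, 10, 0.8)))
```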
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count data situations that are common in deep sequencing data. Using real metagenomic data, our method achieves manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
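For reference, the conventional DMN log-likelihood evaluated directly with log-gamma functions looks as follows; this is the baseline formulation whose behaviour near ψ = 0 motivates the paper, not the stabilized formula it proposes.

```python
import numpy as np
from scipy.special import gammaln

def dmn_loglik(x, alpha):
    """Standard Dirichlet-multinomial log-likelihood for one count vector x with
    concentration parameters alpha (1-D arrays of equal length)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a0 = x.sum(), alpha.sum()
    coef = gammaln(n + 1) - gammaln(x + 1).sum()          # multinomial coefficient
    return (coef + gammaln(a0) - gammaln(n + a0)
            + (gammaln(x + alpha) - gammaln(alpha)).sum())
```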
NASA Astrophysics Data System (ADS)
Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope
2013-06-01
Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation and spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF), we transformed the field to a Gaussian Markov Random Field (GMRF) by triangulation. We then modelled the spatial random effects using the Stochastic Partial Differential Equation (SPDE) approach. Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA), which is suited to GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI): 0.73 (0.53; 0.99), 0.18 (0.14; 0.22) and 0.96 (0.94; 0.97), respectively). Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy, as this can reduce mother-to-child transmission and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives for modelling large multilevel spatiotemporal GMRF data structures.
SPDE/SPRE final summary report
NASA Technical Reports Server (NTRS)
Dochat, George
1993-01-01
Mechanical Technology Incorporated (MTI) performed acceptance testing on the Space Power Research Engine (SPRE), which demonstrated satisfactory operation and sufficient reliability for delivery to NASA Lewis Research Center. The unit produced 13.5 kW PV power with an efficiency of 22 percent versus design goals of 28.8 kW PV power and efficiency of 28 percent. Maximum electric power was only 8 kWe due to lower alternator efficiency. One of the major shortcomings of the SPRE was linear alternator efficiency, which was only 70 percent compared to a design value of 90 percent. It was determined from static tests that the major cause for the efficiency shortfall was the location of the magnetic structure surrounding the linear alternator. Testing of an alternator configuration without a surrounding magnetic structure on a linear dynamometer confirmed earlier static test results. Linear alternator efficiency improved from 70 percent to over 90 percent. Testing of the MTI SPRE was also performed with hydrodynamic bearings and achieved full-stroke, stable operation. This testing indicated that hydrodynamic bearings may be useful in free piston Stirling engines. An important factor in achieving stable operation at design stroke was isolating a portion of the bearing length from the engine pressure variations. In addition, the heat pipe heater head design indicates that integration of a Stirling engine with a heat source can be performed via heat pipes. This design provides a baseline against which alternative designs can be measured.
Development of a flocculation sub-model for a 3-D CFD model based on rectangular settling tanks.
Gong, M; Xanthos, S; Ramalingam, K; Fillos, J; Beckmann, K; Deur, A; McCorquodale, J A
2011-01-01
To assess performance and evaluate alternatives to improve the efficiency of rectangular Gould II type final settling tanks (FSTs), the New York City Department of Environmental Protection and the City College of NY developed a 3D computer model depicting the actual structural configuration of the tanks and the current and proposed hydraulic and solids loading rates. Fluent 6.3.26™ was the base platform for the computational fluid dynamics (CFD) model, into which sub-models of the SS settling characteristics, turbulence, flocculation and rheology were incorporated. This was supplemented by field and bench scale experiments to quantify the coefficients integral to the sub-models. The 3D model developed can be used to consider different baffle arrangements, sludge withdrawal mechanisms and loading alternatives for the FSTs. Flocculation in the front half of the rectangular tank, especially in the region before and after the inlet baffle, is one of the vital parameters that influence the capture efficiency of SS. Flocculation could be further improved by capturing medium and small size particles through an additional zone created with an in-tank baffle. This was one of the methods adopted in optimizing the performance of the tank, with the CCNY 3D CFD model used to locate the in-tank baffle position. This paper describes the development of the flocculation sub-model and the relationship of the flocculation coefficients in the well-known Parker equation to the initial mixed liquor suspended solids (MLSS) concentration X0. A new modified equation is proposed that removes the dependency of the breakup coefficient on the initial value of X0, based on preliminary data from flocculation experiments performed with normal and low concentration mixed liquor suspended solids.
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Efficient parallel implicit methods for rotary-wing aerodynamics calculations
NASA Astrophysics Data System (ADS)
Wissink, Andrew M.
Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples the point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It is also more robust than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as with the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.
Alternative Computer Access for Young Handicapped Children: A Systematic Selection Procedure.
ERIC Educational Resources Information Center
Morris, Karen J.
The paper describes the type of computer access products appropriate for use by handicapped children and presents a systematic procedure for selection of such input and output devices. Modification of computer input is accomplished by three strategies: modifying the keyboard, adding alternative keyboards, and attaching switches to the keyboard.…
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer methods but provides important two-dimensional information not available using quasi-1-D approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
NASA Astrophysics Data System (ADS)
Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.
2015-03-01
We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of a hydrostatic groundwater front. It is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation and no stability limitations that can affect RE solvers in the case of a near-surface water table.
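As a building block, the classic ponded Green-Ampt relations on which GAR-type schemes rest can be evaluated as sketched below (standard textbook formulas; this is not the GARTO redistribution or water-table coupling itself, and the fixed-point solver is only illustrative).

```python
import numpy as np

def green_ampt_cumulative(t, K, psi_f, d_theta, tol=1e-10):
    """Cumulative infiltration F(t) under ponded conditions from the implicit
    Green-Ampt relation  K*t = F - psi_f*d_theta*ln(1 + F/(psi_f*d_theta)),
    solved by simple fixed-point iteration."""
    s = psi_f * d_theta            # wetting-front suction times moisture deficit
    F = K * t if t > 0 else 0.0    # initial guess
    for _ in range(200):
        F_new = K * t + s * np.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def green_ampt_rate(F, K, psi_f, d_theta):
    """Infiltration capacity f = K * (1 + psi_f*d_theta / F)."""
    return K * (1.0 + psi_f * d_theta / max(F, 1e-12))
```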
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize the behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expenses and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must be the same as that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions including static, dynamic and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all these different loading conditions. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
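A generic software illustration of the synaptic dynamics described above, assuming a simple leaky integrate-and-fire neuron with an input-driven conductance and illustrative parameter values; this is not the hardware SRM pipeline of the paper.

```python
import numpy as np

def simulate_conductance_neuron(input_spike_times, T=0.5, dt=1e-4, tau_syn=5e-3,
                                tau_m=20e-3, E_L=-70e-3, E_syn=0.0,
                                g_step=0.15, v_th=-54e-3, v_reset=-70e-3):
    """Leaky integrate-and-fire neuron driven by an input-driven conductance g
    (expressed relative to the leak conductance).  Each input spike increments g;
    g then decays with tau_syn, so charge is injected gradually rather than instantaneously."""
    n_steps = int(T / dt)
    spike_steps = {int(round(t / dt)) for t in input_spike_times}
    v, g, out = E_L, 0.0, []
    for k in range(n_steps):
        if k in spike_steps:
            g += g_step                                       # conductance jump on input spike
        g -= dt * g / tau_syn                                 # exponential decay of the conductance
        v += dt * (-(v - E_L) - g * (v - E_syn)) / tau_m      # membrane integration
        if v >= v_th:                                         # threshold crossing -> output spike
            out.append(k * dt)
            v = v_reset
    return out
```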
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...
2014-02-24
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
Yu, Jia; Blom, Jochen; Sczyrba, Alexander; Goesmann, Alexander
2017-09-10
The introduction of next generation sequencing has caused a steady increase in the amounts of data that have to be processed in modern life science. Sequence alignment plays a key role in the analysis of sequencing data, e.g. within whole genome sequencing or metagenome projects. BLAST is a commonly used alignment tool that was the standard approach for more than two decades, but in recent years faster alternatives have been proposed, including RapSearch, GHOSTX, and DIAMOND. Here we introduce HAMOND, an application that uses Apache Hadoop to parallelize DIAMOND computation in order to scale out the calculation of alignments. HAMOND is fault tolerant and scalable by utilizing large cloud computing infrastructures like Amazon Web Services. HAMOND has been tested in comparative genomics analyses and showed promising results both in efficiency and accuracy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Stochastic competitive learning in complex networks.
Silva, Thiago Christiano; Zhao, Liang
2012-03-01
Competitive learning is an important machine learning approach which is widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particle's walking rule is composed of a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative way to study competitive learning.
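One plausible reading of the walking rule, sketched below with assumed bookkeeping: the preferential weights and the reinforcement step are illustrative choices, not the authors' exact update equations.

```python
import numpy as np

def particle_step(adj, node, particle, domination, lam=0.6, rng=np.random):
    """One move of a competitive walker: with probability lam follow a preferential
    rule (favor neighbors this particle already dominates), otherwise move uniformly
    at random among neighbors.  The node is assumed to have at least one neighbor."""
    nbrs = np.flatnonzero(adj[node])
    if rng.random() < lam:
        w = domination[particle, nbrs] + 1e-12      # preferential weights (assumed form)
        p = w / w.sum()
    else:
        p = np.full(len(nbrs), 1.0 / len(nbrs))     # purely random walk
    nxt = rng.choice(nbrs, p=p)
    domination[particle, nxt] += 1.0                # reinforce the visited node
    return nxt
```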
Using electronic surveys in nursing research.
Cope, Diane G
2014-11-01
Computer and Internet use in businesses and homes in the United States has dramatically increased since the early 1980s. In 2011, 76% of households reported having a computer, compared with only 8% in 1984 (File, 2013). A similar increase in Internet use has also been seen, with 72% of households reporting access of the Internet in 2011 compared with 18% in 1997 (File, 2013). This emerging trend in technology has prompted use of electronic surveys in the research community as an alternative to previous telephone and postal surveys. Electronic surveys can offer an efficient, cost-effective method for data collection; however, challenges exist. An awareness of the issues and strategies to optimize data collection using web-based surveys is critical when designing research studies. This column will discuss the different types and advantages and disadvantages of using electronic surveys in nursing research, as well as methods to optimize the quality and quantity of survey responses.
Space-time interface-tracking with topology change (ST-TC)
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Buscher, Austin; Asada, Shohei
2014-10-01
To address the computational challenges associated with contact between moving interfaces, such as those in cardiovascular fluid-structure interaction (FSI), parachute FSI, and flapping-wing aerodynamics, we introduce a space-time (ST) interface-tracking method that can deal with topology change (TC). In cardiovascular FSI, our primary target is heart valves. The method is a new version of the deforming-spatial-domain/stabilized space-time (DSD/SST) method, and we call it ST-TC. It includes a master-slave system that maintains the connectivity of the "parent" mesh when there is contact between the moving interfaces. It is an efficient, practical alternative to using unstructured ST meshes, but without giving up on the accurate representation of the interface or consistent representation of the interface motion. We explain the method with conceptual examples and present 2D test computations with models representative of the classes of problems we are targeting.
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, method and application of Method R for estimation of (co)variance components were reviewed in order to promote reasonable use of the method. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used with larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
Parallel multigrid smoothing: polynomial versus Gauss-Seidel
NASA Astrophysics Data System (ADS)
Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray
2003-07-01
Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
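A textbook Chebyshev iteration usable as a smoother is sketched below, assuming bounds on the spectrum of the symmetric positive definite operator are available; it is not the multilevel-specific polynomial studied in the paper. Because it needs only matrix-vector products and vector updates, it avoids the sequential dependencies that make Gauss-Seidel hard to parallelize.

```python
import numpy as np

def chebyshev_smooth(A, b, x, lam_min, lam_max, nsweeps=3):
    """Standard Chebyshev iteration for SPD A with eigenvalues in [lam_min, lam_max].
    Only matrix-vector products are required, so it parallelizes well."""
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma = theta / delta
    rho_old = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    x = x + d
    for _ in range(nsweeps - 1):
        r = b - A @ x
        rho = 1.0 / (2.0 * sigma - rho_old)
        d = rho * rho_old * d + (2.0 * rho / delta) * r
        x = x + d
        rho_old = rho
    return x
```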
Behavioral and computational aspects of language and its acquisition
NASA Astrophysics Data System (ADS)
Edelman, Shimon; Waterfall, Heidi
2007-12-01
One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar-ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.
Nano-Filament Field Emission Cathode Development Final Report CRADA No. TSB-0731-93
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhardt, Tony; Fahlen, Ted
At the time the CRADA was established, Silicon Video Corporation (SVC) of Cupertino, CA was a one-year-old, rapidly growing start-up company. SVC was developing flat panel displays (FPDs) to replace cathode ray tubes (CRTs) for personal computers, work stations and televisions. They planned to base their products on low-cost and energy-efficient field emission technology. It was universally recognized that the display was both the dominant cost item and the differentiating feature of many products, such as laptop computers and hand-held electronics, and that control of the display technology through U.S. sources was essential to success in these markets. The purpose of this CRADA project was to determine if electrochemical planarization would be a viable, inexpensive alternative to current optical polishing techniques for planarizing the surface of a ceramic backplate of a thin film display.
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To cope with the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
Ching, Travers; Zhu, Xun; Garmire, Lana X
2018-04-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Survival Forests and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
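For reference, survival-aware training of this kind typically maximizes the Cox partial likelihood with the linear predictor replaced by the network output; the standard no-ties form is given below (notation assumed, not taken from the paper).

```latex
% Negative log Cox partial likelihood (no ties): \theta_i is the model's output
% (log-hazard score) for patient i, C_i = 1 marks an observed event, and R(t_i)
% is the risk set of patients still under observation at time t_i.
\ell(\theta) \;=\; -\sum_{i:\,C_i = 1} \Bigl[\, \theta_i \;-\; \log\!\!\sum_{j \in R(t_i)} e^{\theta_j} \Bigr]
```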
Characterization of Radial Curved Fin Heat Sink under Natural and Forced Convection
NASA Astrophysics Data System (ADS)
Khadke, Rishikesh; Bhole, Kiran
2018-02-01
Heat exchangers are important structures widely used in power plants, food industries, refrigeration and air conditioning, and are now widely used in computing systems, where finned heat sinks are common. The main aim of heat sink design is to maintain the optimum temperature level, and many geometrical configurations have been explored to achieve this goal. This paper presents a characterization of a radially curved fin heat sink under natural and forced convection. Forced convection is studied to optimize temperature for better efficiency. The alternatives in geometry considered in the characterization are heat intensity, fin height and fan speed. Using these alternatives, the heat sink is characterized at the heat flux typically generated in high-end PCs. The temperature drop characteristics across the height and radial direction are presented for constant heat input and air flow in the heat sink. The effects of dimensionless elevation height (0 ≤ Z* ≤ 1) and Elenbaas number (0.4 ≤ El ≤ 2.8) of the heat sink were investigated to study the Nusselt number. Based on the experimental characterization, a process plan has been developed for the selection of similar heat sinks for a desired output (heat dissipation and temperature distribution).
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
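A minimal sketch of the nested sampling scheme referred to above (a textbook Skilling-style algorithm with assumed function names; real implementations replace the rejection step with efficient constrained-prior samplers).

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=200, n_iter=2000, rng=np.random):
    """Nested sampling sketch: the prior volume shrinks geometrically and the
    evidence is accumulated as Z ~ sum_i L_i * (X_{i-1} - X_i).  The constrained
    prior draw uses plain rejection sampling (simple but inefficient); the final
    contribution of the remaining live points is omitted for brevity."""
    live = [prior_sample(rng) for _ in range(n_live)]
    live_ll = np.array([loglike(p) for p in live])
    log_Z, X_prev = -np.inf, 1.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_ll))
        L_star = live_ll[worst]
        X_i = np.exp(-i / n_live)                  # expected prior-volume shrinkage
        log_Z = np.logaddexp(log_Z, L_star + np.log(X_prev - X_i))
        X_prev = X_i
        while True:                                # replace worst point by a prior draw with L > L_star
            cand = prior_sample(rng)
            cand_ll = loglike(cand)
            if cand_ll > L_star:
                live[worst], live_ll[worst] = cand, cand_ll
                break
    return log_Z
```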
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and subsequently specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been performed by comparison with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed combined variable-area duct with friction flow solution. These comparisons may offer an alternative to transitional and CFD-intense methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-01-01
Objectives Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. Design and methods A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2–20), alternatives (2–5), attributes (2–20) and attribute levels (2–5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Outcome Relative d-efficiency was used to measure the optimality of each DCE design. Results DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Conclusions Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes are needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. PMID:27436671
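One common linear-model surrogate for relative D-efficiency, assuming an orthonormally coded design matrix, is sketched below; D-efficiency for choice designs proper is computed from the multinomial logit information matrix, which this illustration does not construct.

```python
import numpy as np

def relative_d_efficiency(X):
    """Relative D-efficiency (in %) of a coded design matrix X with N rows and
    p parameters: 100 * |X'X|^(1/p) / N.  Under this linear-model surrogate,
    100% corresponds to an orthogonal, balanced design."""
    N, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / N
```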
Prediction of Classroom Reverberation Time using Neural Network
NASA Astrophysics Data System (ADS)
Liyana Zainudin, Fathin; Kadir Mahamad, Abd; Saon, Sharifah; Nizam Yahya, Musli
2018-04-01
In this paper, an alternative method for predicting the reverberation time (RT) of classrooms using a neural network (NN) was designed and explored. Classroom models were created using the Google SketchUp software. The NN was trained on a dataset derived from the classroom models, with RT values computed using the ODEON 12.10 software. The NN was trained separately for 500 Hz, 1000 Hz, and 2000 Hz, since the absorption coefficient, one of the prominent input variables, is frequency dependent. Mean squared error (MSE) and regression (R) values were obtained to examine the NN efficiency. Overall, the NN shows good results, with MSE < 0.005 and R > 0.9. The NN also achieved an accuracy of 92.53% for 500 Hz, 93.66% for 1000 Hz, and 93.18% for 2000 Hz, and thus displays good and efficient performance. The optimum RT value ranges between 0.75 and 0.9 seconds.
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square-root Fourier multiplier approximations of Dirichlet-to-Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solution of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
Kalman and particle filtering methods for full vehicle and tyre identification
NASA Astrophysics Data System (ADS)
Bogdanski, Karol; Best, Matthew C.
2018-05-01
This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
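For reference, the two specifications being compared can be written schematically as follows, using the usual MESS notation (assumed here, not taken from the paper):

```latex
% Spatial autoregressive (SAR) model versus matrix exponential spatial specification (MESS)
\text{SAR:}\quad (I_n - \rho W)\, y = X\beta + \varepsilon,
\qquad
\text{MESS:}\quad e^{\alpha W} y = X\beta + \varepsilon,
\qquad
e^{\alpha W} = \sum_{k=0}^{\infty} \frac{\alpha^k W^k}{k!},
```

with W the spatial weight matrix. Since det(e^{αW}) = e^{α tr(W)} = 1 when W has a zero diagonal, the log-determinant term of the likelihood drops out, which is the usual computational argument for MESS over SAR.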
Localized Fault Recovery for Nested Fork-Join Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kestor, Gokcen; Krishnamoorthy, Sriram; Ma, Wenjing
Nested fork-join programs scheduled using work stealing can automatically balance load and adapt to changes in the execution environment. In this paper, we design an approach to efficiently recover from faults encountered by these programs. Specifically, we focus on localized recovery of the task space in the presence of fail-stop failures. We present an approach to efficiently track, under work stealing, the relationships between the work executed by various threads. This information is used to identify and schedule the tasks to be re-executed without interfering with normal task execution. The algorithm precisely computes the work lost, incurs minimal re-execution overhead, and can recover from an arbitrary number of failures. Experimental evaluation demonstrates low overheads in the absence of failures, recovery overheads on the same order as the lost work, and much lower recovery costs than alternative strategies.
Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...
2015-01-01
Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
Tavares, Ana Beatriz M L A; Lima Neto, José X; Fulco, Umberto L; Albuquerque, Eudenilson L
2018-01-30
Much of the recent excitement in the cancer immunotherapy approach has been generated by the recognition that immune checkpoint proteins, like the receptor PD-1, can be blocked by antibody-based drugs with profound effects. Promising clinical data have already been released pointing to the efficiency of the drug pembrolizumab in blocking the PD-1 pathway, triggering T-lymphocytes to destroy the cancer cells. Thus, a deep understanding of this drug/receptor complex is essential for the improvement of new drugs targeting the protein PD-1. In this context, by employing quantum chemistry methods based on Density Functional Theory (DFT), we investigate in silico the binding energy features of the receptor PD-1 in complex with its drug inhibitor. Our computational results give a better understanding of the binding mechanisms and also offer an efficient alternative route towards the development of antibody-based drugs, pointing to new treatments for cancer therapy.
Ahmad, Peer Zahoor; Quadri, S M K; Ahmad, Firdous; Bahar, Ali Newaz; Wani, Ghulam Mohammad; Tantary, Shafiq Maqbool
2017-12-01
Quantum-dot cellular automata (QCA) is an extremely small-scale, ultra-low-power nanotechnology and a possible alternative to current CMOS technology. Reversible QCA logic is currently an important approach to reducing power losses. This paper presents a novel reversible logic gate called the F-Gate. It is simple in design and a powerful technique for implementing reversible logic. A systematic approach has been used to implement a novel single-layer reversible Full-Adder, Full-Subtractor and Full Adder-Subtractor using the F-Gate. The proposed Full Adder-Subtractor achieves significant improvements in overall circuit parameters among the most cost-efficient previous designs that exploit the inevitable nano-level issues to perform arithmetic computing. The proposed designs have been verified and simulated using the QCADesigner tool ver. 2.0.3.
NASA Astrophysics Data System (ADS)
Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian
2017-10-01
Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo, BCA program TRIDYN developed for this purpose that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1979-01-01
The fundamental definition of beam efficiency, given in terms of a far field radiation pattern, was used to develop alternative definitions which improve accuracy, reduce the amount of calculation required, and isolate the separate factors composing beam efficiency. Well-known definitions of aperture efficiency were introduced successively to simplify the denominator of the fundamental definition. The superposition of complex vector spillover and backscattered fields was examined, and beam efficiency analysis in terms of power patterns was carried out. An extension from single to dual reflector geometries was included. It is noted that the alternative definitions are advantageous in the mathematical simulation of a radiometer system, and are not intended for the measurements discipline where fields have merged and therefore lost their identity.
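For reference, the fundamental definition referred to above can be written in terms of the normalized far-field power pattern P_n(θ, φ) as follows (notation assumed):

```latex
% Main-beam efficiency: fraction of the total radiated power contained in the main-beam solid angle \Omega_M
\eta_B \;=\;
\frac{\displaystyle\int_{\Omega_M} P_n(\theta,\phi)\, d\Omega}
     {\displaystyle\int_{4\pi} P_n(\theta,\phi)\, d\Omega}
```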
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Steps in computing your primary insurance... Steps in computing your primary insurance amount under the guaranteed alternative—general. If you reach age 62 after 1978 but before 1984, we follow three major steps in finding your guaranteed alternative...
An Alternative Method for Computing Unit Costs and Productivity Ratios. AIR 1984 Annual Forum Paper.
ERIC Educational Resources Information Center
Winstead, Wayland H.; And Others
An alternative measure for evaluating the performance of academic departments was studied. A comparison was made with the traditional manner for computing unit costs and productivity ratios: prorating the salary and effort of each faculty member to each course level based on the personal mix of course taught. The alternative method used averaging…
Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)
1989-02-01
defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery
Benchmark Lisp And Ada Programs
NASA Technical Reports Server (NTRS)
Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.
1992-01-01
Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or Common Lisp system.
Air transportation energy efficiency - Alternatives and implications
NASA Technical Reports Server (NTRS)
Williams, L. J.
1976-01-01
Results from recent studies of air transportation energy efficiency alternatives are discussed, along with some of the implications of these alternatives. The fuel-saving alternatives considered include aircraft operation, aircraft modification, derivative aircraft, and new aircraft. In the near-term, energy efficiency improvements should be possible through small improvements in fuel-saving flight procedures, higher density seating, and higher load factors. Additional small near-term improvements could be obtained through aircraft modifications, such as the relatively inexpensive drag reduction modifications. Derivatives of existing aircraft could meet the requirements for new aircraft and provide energy improvements until advanced technology is available to justify the cost of a completely new design. In order to obtain significant improvements in energy efficiency, new aircraft must truly exploit advanced technology in such areas as aerodynamics, composite structures, active controls, and advanced propulsion.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples have been considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solutions. Thus, the Adomian decomposition method can be an effective alternative method for solving linear second-order Fredholm integro-differential equations. It converges to the exact solution quickly and at the same time reduces the computational work for solving the equation. The results obtained by ADM show its ability and efficiency for solving these equations.
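For a linear second-order Fredholm integro-differential equation, the ADM recursion takes the following schematic form (a generic textbook statement with assumed notation, not the paper's specific examples):

```latex
% Generic linear second-order Fredholm integro-differential equation:
u''(x) = f(x) + \lambda \int_a^b K(x,t)\, u(t)\, dt,
\qquad u(a) = \alpha_0, \quad u'(a) = \alpha_1 .
% With L^{-1}[\,\cdot\,](x) = \int_a^x \!\! \int_a^s (\,\cdot\,)\, dt\, ds the twofold
% integration operator, the ADM series u = \sum_{n \ge 0} u_n is built recursively:
u_0(x) = \alpha_0 + \alpha_1 (x - a) + L^{-1}[f](x),
\qquad
u_{n+1}(x) = \lambda\, L^{-1}\!\Bigl[\int_a^b K(\cdot,t)\, u_n(t)\, dt\Bigr](x).
```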
Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration
Cole, Michael W.
2016-01-01
The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904
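The quantity at the heart of this abstract, the size of the rest-to-task update in functional connectivity, can be illustrated with a toy calculation. The correlation-of-upper-triangles definition and the synthetic time series below are illustrative assumptions, not the authors' exact metric or data.

```python
import numpy as np

def fc_matrix(ts):
    # Functional connectivity as pairwise Pearson correlations of region time series.
    return np.corrcoef(ts)

def reconfiguration(fc_rest, fc_task):
    # Size of the "network update": 1 minus the similarity of the two FC patterns
    # (upper-triangle entries only) -- one simple, illustrative choice of metric.
    iu = np.triu_indices_from(fc_rest, k=1)
    r = np.corrcoef(fc_rest[iu], fc_task[iu])[0, 1]
    return 1.0 - r

rng = np.random.default_rng(0)
rest = rng.standard_normal((50, 300))                 # 50 regions x 300 time points (toy data)
task = rest + 0.3 * rng.standard_normal((50, 300))    # task data correlated with rest
print(reconfiguration(fc_matrix(rest), fc_matrix(task)))
```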
General results for higher spin Wilson lines and entanglement in Vasiliev theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hegde, Ashwin; Kraus, Per; Perlmutter, Eric
Here, we develop tools for the efficient evaluation of Wilson lines in 3D higher spin gravity, and use these to compute entanglement entropy in the hs[λ] Vasiliev theory that governs the bulk side of the duality proposal of Gaberdiel and Gopakumar. Our main technical advance is the determination of SL(N) Wilson lines for arbitrary N, which, in suitable cases, enables us to analytically continue to hs[λ] via N→ -λ. We then apply this result to compute various quantities of interest, including entanglement entropy expanded perturbatively in the background higher spin charge, chemical potential, and interval size. This includes a computation of entanglement entropy in the higher spin black hole of the Vasiliev theory. Our results are consistent with conformal field theory calculations. We also provide an alternative derivation of the Wilson line, by showing how it arises naturally from earlier work on scalar correlators in higher spin theory. The general picture that emerges is consistent with the statement that the SL(N) Wilson line computes the semiclassical W_N vacuum block, and our results provide an explicit result for this object.
Symmetric log-domain diffeomorphic Registration: a demons-based approach.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2008-01-01
Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
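The log-domain machinery rests on exponentiating a stationary velocity field to obtain the deformation (and its inverse, by negating the field). Below is a minimal NumPy/SciPy sketch of the standard scaling-and-squaring exponentiation for 2-D displacement fields; the grid size and the smooth test field are arbitrary, and this is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi, psi):
    # (phi o psi) for displacement fields stored as arrays of shape (2, H, W):
    # result(x) = psi(x) + phi(x + psi(x)).
    H, W = phi.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    coords = grid + psi
    warped = np.stack([map_coordinates(phi[c], coords, order=1, mode='nearest')
                       for c in range(2)])
    return warped + psi

def exp_velocity(v, n_steps=6):
    # Scaling and squaring: exp(v) ~ (id + v / 2^n) composed with itself n times.
    phi = v / (2 ** n_steps)
    for _ in range(n_steps):
        phi = compose(phi, phi)
    return phi

# Example: a small smooth velocity field on a 64x64 grid.
H = W = 64
y, x = np.mgrid[0:H, 0:W]
v = np.stack([2.0 * np.sin(np.pi * x / W), -2.0 * np.cos(np.pi * y / H)])
phi = exp_velocity(v)          # forward deformation
phi_inv = exp_velocity(-v)     # the inverse comes for free in the log-domain
```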
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures (Chapel Hill). The project develops algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.
Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P
2015-11-01
The determination of the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters characterizing the detector, through a computational procedure which could be reproduced at a standard research lab. This method consists in determining the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
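The optimization loop the authors describe, adjusting detector parameters until simulated FEPEs match a set of reference values, can be sketched as follows. Every name here is illustrative: toy_fepe is a deliberately simple stand-in for the Monte Carlo photon-transport simulation, and SciPy's differential evolution stands in generically for the evolutionary algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

energies = np.array([59.5, 122.0, 661.7, 1173.2, 1332.5])   # keV, typical calibration lines

def toy_fepe(params, E):
    # Toy stand-in for the Monte Carlo transport simulation of the detector: a smooth
    # efficiency curve controlled by two "geometric" parameters (crystal length, dead layer).
    length, dead = params
    return (1.0 - np.exp(-50.0 * length / E)) * np.exp(-dead * E / 500.0)

true_params = np.array([4.0, 0.10])
reference = toy_fepe(true_params, energies)      # plays the role of the reference FEPEs

def cost(params):
    # Sum of squared relative deviations between simulated and reference efficiencies.
    return np.sum((toy_fepe(params, energies) / reference - 1.0) ** 2)

result = differential_evolution(cost, bounds=[(1.0, 8.0), (0.0, 0.5)], seed=0, tol=1e-10)
print(result.x, cost(result.x))                  # recovers the "true" geometric parameters
```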
An efficient dictionary learning algorithm and its application to 3-D medical image denoising.
Li, Shutao; Fang, Leyuan; Yin, Haitao
2012-02-01
In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. In the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster the atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies greatly reduce the computational complexity of the MCP and help it obtain a better sparse solution. In the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed to exploit the learning capability of the presented algorithm and simultaneously capture the correlations within each slice and the correlations across nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE
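A generic dictionary-learning iteration of the same shape (sparse coding followed by atom-wise rank-one updates) is sketched below. Orthogonal matching pursuit and a K-SVD-style update are used as stand-ins; the paper's MCP coding and approximate-SVD update are not reproduced here, and the synthetic training data are arbitrary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n, K, N, s = 16, 32, 500, 3                      # signal dim, atoms, training signals, sparsity

# Synthetic training data: sparse combinations of a hidden dictionary plus noise.
Dtrue = rng.standard_normal((n, K)); Dtrue /= np.linalg.norm(Dtrue, axis=0)
codes = np.zeros((K, N))
for j in range(N):
    idx = rng.choice(K, size=s, replace=False)
    codes[idx, j] = rng.standard_normal(s)
Y = Dtrue @ codes + 0.01 * rng.standard_normal((n, N))

D = rng.standard_normal((n, K)); D /= np.linalg.norm(D, axis=0)
for it in range(10):
    # Sparse coding stage (OMP stands in for the paper's multiple clusters pursuit).
    X = orthogonal_mp(D, Y, n_nonzero_coefs=s)
    # Dictionary updating stage: K-SVD-style rank-one update of each atom.
    for k in range(K):
        users = np.nonzero(X[k, :])[0]
        if users.size == 0:
            continue
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, users] = S[0] * Vt[0, :]
    print(it, np.linalg.norm(Y - D @ X))         # residual decreases over the iterations
```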
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
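The bias the authors derive can be reproduced with a few lines of simulation. The exponential consultation lengths, the single clinician, and the fixed interview duration below are illustrative assumptions; the point is that "next patient to exit" sampling is length-biased while "next patient to enter" sampling is not.

```python
import numpy as np

rng = np.random.default_rng(42)

# One clinician sees patients back to back; consultation lengths are exponential (assumption).
lengths = rng.exponential(10.0, 200_000)            # true mean consultation length = 10 min
exit_times = np.cumsum(lengths)
interview_len = 15.0                                # fixed interview duration (assumption)

def run(strategy):
    t, sample = 0.0, []
    while True:
        i = int(np.searchsorted(exit_times, t))     # patient in consultation at time t
        if i + 1 >= len(lengths):
            break
        j = i if strategy == "next_exit" else i + 1 # next to exit vs. next to enter
        sample.append(lengths[j])
        t = exit_times[j] + interview_len           # interviewer busy until this interview ends
    return np.mean(sample)

print("true mean                :", lengths.mean())
print("next-exit sampling mean  :", run("next_exit"))    # biased upward (length-biased)
print("next-entry sampling mean :", run("next_entry"))   # approximately unbiased
```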
Membraneless laminar flow cell for electrocatalytic CO2 reduction with liquid product separation
NASA Astrophysics Data System (ADS)
Monroe, Morgan M.; Lobaccaro, Peter; Lum, Yanwei; Ager, Joel W.
2017-04-01
The production of liquid fuel products via electrochemical reduction of CO2 is a potential path to produce sustainable fuels. However, to be practical, a separation strategy is required to isolate the fuel-containing electrolyte produced at the cathode from the anode and also prevent the oxidation products (i.e. O2) from reaching the cathode. Ion-conducting membranes have been applied in CO2 reduction reactors to achieve this separation, but they represent an efficiency loss and can be permeable to some product species. An alternative membraneless approach is developed here to maintain product separation through the use of a laminar flow cell. Computational modelling shows that near-unity separation efficiencies are possible at current densities achievable now with metal cathodes via optimization of the spacing between the electrodes and the electrolyte flow rate. Laminar flow reactor prototypes were fabricated with a range of channel widths by 3D printing. CO2 reduction to formic acid on Sn electrodes was used as the liquid product forming reaction, and the separation efficiency for the dissolved product was evaluated with high performance liquid chromatography. Trends in product separation efficiency with channel width and flow rate were in qualitative agreement with the model, but the separation efficiency was lower, with a maximum value of 90% achieved.
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows exact posterior distributions to be obtained; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time demanding: for example, when heavy-tailed distributions are used, convergence may be difficult, the Metropolis-Hastings algorithm can become very slow, and extra work is inevitably required to choose efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided in order to illustrate the theory.
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
NASA Astrophysics Data System (ADS)
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
Electron microscopy and computed microtomography studies of in vivo implanted mini-TL dosimeters.
Strand, S E; Strandh, M; Spanne, P
1993-01-01
The need for direct methods of measuring the absorbed dose in vivo increases for systemic radiation therapy and for the more sophisticated methodologies developed for radioimmunotherapy. One suggested method is the use of mini-thermoluminescent dosimeters (TLD). Recent reports indicate a marked loss of signal when the dosimeters are used in vivo. We investigated the exterior surface of the dosimeters with scanning electron microscopy and the interior dosimeter volume with computed microtomography. The results show that the dosimeters initially have crystals uniformly embedded in the Teflon matrix, with some of them directly exposed to the environment. After incubation in gel, holes appear in the dosimeter matrix where the crystals should have been. The computed microtomographic images show that crystals remain in the interior of the matrix, producing the remaining signal. We conclude that these dosimeters should be handled very carefully, and for practical use of mini-TLDs in vivo the dosimeters should be calibrated in equivalent milieus. An alternative solution to the problem of decreased TL efficiency would be to coat the dosimeters with a thin layer of Teflon or another suitable material.
NASA Astrophysics Data System (ADS)
Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.
2014-05-01
Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic approach based on networks that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power and large quantities of memory, and consume considerable time to create a geometric model by computer-aided design (CAD) and to complete a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.
An evaluation of exact methods for the multiple subset maximum cardinality selection problem.
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2016-05-01
The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
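A sequential binary ILP of the kind described can be sketched with an off-the-shelf solver. The pairwise-similarity threshold used as the "condition" below is an illustrative assumption, not the constraint set from the 45-food-item study, and PuLP/CBC simply stand in for whatever solver is at hand.

```python
import numpy as np
import pulp

rng = np.random.default_rng(1)
n = 12
S = rng.random((n, n)); S = (S + S.T) / 2; np.fill_diagonal(S, 1.0)   # toy similarity data
threshold = 0.5                                                        # assumed condition

remaining, subsets = set(range(n)), []
while remaining:
    # One binary ILP: maximize the cardinality of the next subset subject to the condition
    # that incompatible pairs (similarity below threshold) cannot share a subset.
    prob = pulp.LpProblem("max_cardinality_subset", pulp.LpMaximize)
    x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in remaining}
    prob += pulp.lpSum(x.values())
    for i in remaining:
        for j in remaining:
            if i < j and S[i, j] < threshold:
                prob += x[i] + x[j] <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [i for i in remaining if x[i].value() == 1]
    if not chosen:
        break
    subsets.append(sorted(chosen))       # largest feasible subset first, then the next, etc.
    remaining -= set(chosen)

print(subsets)
```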
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved by a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits brought by the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed off-line. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouples close and crossing orders associated with multi-axial reference rotating speeds.
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
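For context, the ALS baseline against which the gradient-based methods are compared looks as follows for a third-order tensor. This is a textbook CP-ALS sketch in NumPy, not the authors' code, and the toy rank-3 tensor is an arbitrary example.

```python
import numpy as np

def kr(B, C):
    # Column-wise Khatri-Rao product; row ordering matches the C-order reshapes below.
    return np.einsum('jr,kr->jkr', B, C).reshape(B.shape[0] * C.shape[0], B.shape[1])

def cp_als(X, R, iters=50, seed=0):
    # Alternating least squares for the CPD of a 3-way tensor X ~ [[A, B, C]].
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    X0 = X.reshape(I, J * K)
    X1 = np.transpose(X, (1, 0, 2)).reshape(J, I * K)
    X2 = np.transpose(X, (2, 0, 1)).reshape(K, I * J)
    for _ in range(iters):
        A = X0 @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy usage: a rank-3 tensor is recovered up to scaling/permutation of the factors.
rng = np.random.default_rng(1)
At, Bt, Ct = rng.random((6, 3)), rng.random((7, 3)), rng.random((8, 3))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(X, R=3)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```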
Performance Improvement Through Indexing of Turbine Airfoils. Part 2; Numerical Simulation
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Huber, Frank W.; Sharma, Om P.
1996-01-01
An experimental/analytical study has been conducted to determine the performance improvements achievable by circumferentially indexing succeeding rows of turbine stator airfoils. A series of tests was conducted to experimentally investigate stator wake clocking effects on the performance of the space shuttle main engine (SSME) alternate turbopump development (ATD) fuel turbine test article (TTA). The results from this study indicate that significant increases in stage efficiency can be attained through application of this airfoil clocking concept. Details of the experiment and its results are documented in part 1 of this paper. In order to gain insight into the mechanisms of the performance improvement, extensive computational fluid dynamics (CFD) simulations were executed. The subject of the present paper is the initial results from the CFD investigation of the configurations and conditions detailed in part 1 of the paper. To characterize the aerodynamic environments in the experimental test series, two-dimensional (2D), time accurate, multistage, viscous analyses were performed at the TTA midspan. Computational analyses for five different circumferential positions of the first stage stator have been completed. Details of the computational procedure and the results are presented. The analytical results verify the experimentally demonstrated performance improvement and are compared with data whenever possible. Predictions of time-averaged turbine efficiencies as well as gas conditions throughout the flow field are presented. An initial understanding of the turbine performance improvement mechanism based on the results from this investigation is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Yinan; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu; Hohenstein, Edward G.
2015-01-14
Multireference quantum chemical methods, such as the complete active space self-consistent field (CASSCF) method, have long been the state of the art for computing regions of potential energy surfaces (PESs) where complex, multiconfigurational wavefunctions are required, such as near conical intersections. Herein, we present a computationally efficient alternative to the widely used CASSCF method based on a complete active space configuration interaction (CASCI) expansion built from the state-averaged natural orbitals of configuration interaction singles calculations (CISNOs). This CISNO-CASCI approach is shown to predict vertical excitation energies of molecules with closed-shell ground states similar to those predicted by state-averaged (SA)-CASSCF in many cases and to provide an excellent reference for a perturbative treatment of dynamic electron correlation. Absolute energies computed at the CISNO-CASCI level are found to be variationally superior, on average, to other CASCI methods. Unlike SA-CASSCF, CISNO-CASCI provides vertical excitation energies which are both size intensive and size consistent, thus suggesting that CISNO-CASCI would be preferable to SA-CASSCF for the study of systems with multiple excitable centers. The fact that SA-CASSCF and some other CASCI methods do not provide a size intensive/consistent description of excited states is attributed to changes in the orbitals that occur upon introduction of non-interacting subsystems. Finally, CISNO-CASCI is found to provide a suitable description of the PES surrounding a biradicaloid conical intersection in ethylene.
Dzyubachyk, Oleh; Khmelinskii, Artem; Plenge, Esben; Kok, Peter; Snoeks, Thomas J A; Poot, Dirk H J; Löwik, Clemens W G M; Botha, Charl P; Niessen, Wiro J; van der Weerd, Louise; Meijering, Erik; Lelieveldt, Boudewijn P F
2014-01-01
In small animal imaging studies, when the locations of the micro-structures of interest are unknown a priori, there is a simultaneous need for full-body coverage and high resolution. In MRI, additional requirements to image contrast and acquisition time will often make it impossible to acquire such images directly. Recently, a resolution enhancing post-processing technique called super-resolution reconstruction (SRR) has been demonstrated to improve visualization and localization of micro-structures in small animal MRI by combining multiple low-resolution acquisitions. However, when the field-of-view is large relative to the desired voxel size, solving the SRR problem becomes very expensive, in terms of both memory requirements and computation time. In this paper we introduce a novel local approach to SRR that aims to overcome the computational problems and allow researchers to efficiently explore both global and local characteristics in whole-body small animal MRI. The method integrates state-of-the-art image processing techniques from the areas of articulated atlas-based segmentation, planar reformation, and SRR. A proof-of-concept is provided with two case studies involving CT, BLI, and MRI data of bone and kidney tumors in a mouse model. We show that local SRR-MRI is a computationally efficient complementary imaging modality for the precise characterization of tumor metastases, and that the method provides a feasible high-resolution alternative to conventional MRI.
Adjusting for cross-cultural differences in computer-adaptive tests of quality of life.
Gibbons, C J; Skevington, S M
2018-04-01
Previous studies using the WHOQOL measures have demonstrated that the relationship between individual items and the underlying quality of life (QoL) construct may differ between cultures. If unaccounted for, these differing relationships can lead to measurement bias which, in turn, can undermine the reliability of results. We used item response theory (IRT) to assess differential item functioning (DIF) in WHOQOL data from diverse language versions collected in UK, Zimbabwe, Russia, and India (total N = 1332). Data were fitted to the partial credit 'Rasch' model. We used four item banks previously derived from the WHOQOL-100 measure, which provided excellent measurement for physical, psychological, social, and environmental quality of life domains (40 items overall). Cross-cultural differential item functioning was assessed using analysis of variance for item residuals and post hoc Tukey tests. Simulated computer-adaptive tests (CATs) were conducted to assess the efficiency and precision of the four items banks. Splitting item parameters by DIF results in four linked item banks without DIF or other breaches of IRT model assumptions. Simulated CATs were more precise and efficient than longer paper-based alternatives. Assessing differential item functioning using item response theory can identify measurement invariance between cultures which, if uncontrolled, may undermine accurate comparisons in computer-adaptive testing assessments of QoL. We demonstrate how compensating for DIF using item anchoring allowed data from all four countries to be compared on a common metric, thus facilitating assessments which were both sensitive to cultural nuance and comparable between countries.
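The CAT simulations rest on standard IRT machinery: administer the item with maximum Fisher information at the current ability estimate, update the estimate, and repeat. The sketch below uses the dichotomous Rasch model for brevity (the WHOQOL banks actually use the partial credit model), and the item difficulties and true ability are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(0.0, 1.0, 40)                    # item difficulties (toy bank of 40 items)
theta_true = 0.8                                # simulated respondent ability

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch (1-PL) model

def mle_theta(responses, admin, grid=np.linspace(-4, 4, 801)):
    # Grid-search maximum likelihood estimate of the person parameter.
    P = p_correct(grid[:, None], b[admin][None, :])
    resp = np.array(responses)
    ll = (np.log(P) * resp + np.log(1 - P) * (1 - resp)).sum(axis=1)
    return grid[np.argmax(ll)]

admin, responses, theta_hat = [], [], 0.0
for step in range(15):                          # simulated CAT of 15 items
    info = p_correct(theta_hat, b) * (1 - p_correct(theta_hat, b))   # Fisher information
    info[admin] = -np.inf                       # do not reuse administered items
    item = int(np.argmax(info))                 # maximum-information item selection
    admin.append(item)
    responses.append(int(rng.random() < p_correct(theta_true, b[item])))
    theta_hat = mle_theta(responses, admin)

print(theta_hat)                                # estimate approaches theta_true
```

Selecting only the most informative items at each step is what makes the simulated CATs shorter yet more precise than the fixed paper-based forms.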
Code of Federal Regulations, 2013 CFR
2013-01-01
... renewable energy report (EE/RE report) alternative? 905.17 Section 905.17 Energy DEPARTMENT OF ENERGY ENERGY... energy efficiency and/or renewable energy report (EE/RE report) alternative? (a) Requests to submit an EE..., including any requirements for documenting customer energy efficiency and renewable energy activities. (b...
Code of Federal Regulations, 2011 CFR
2011-01-01
... renewable energy report (EE/RE report) alternative? 905.17 Section 905.17 Energy DEPARTMENT OF ENERGY ENERGY... energy efficiency and/or renewable energy report (EE/RE report) alternative? (a) Requests to submit an EE..., including any requirements for documenting customer energy efficiency and renewable energy activities. (b...
Code of Federal Regulations, 2014 CFR
2014-01-01
... renewable energy report (EE/RE report) alternative? 905.17 Section 905.17 Energy DEPARTMENT OF ENERGY ENERGY... energy efficiency and/or renewable energy report (EE/RE report) alternative? (a) Requests to submit an EE..., including any requirements for documenting customer energy efficiency and renewable energy activities. (b...
Code of Federal Regulations, 2012 CFR
2012-01-01
... renewable energy report (EE/RE report) alternative? 905.17 Section 905.17 Energy DEPARTMENT OF ENERGY ENERGY... energy efficiency and/or renewable energy report (EE/RE report) alternative? (a) Requests to submit an EE..., including any requirements for documenting customer energy efficiency and renewable energy activities. (b...
TU-E-BRB-08: Dual Gated Volumetric Modulated Arc Therapy.
Wu, J; Fahimian, B; Wu, H; Xing, L
2012-06-01
Gated Volumetric Modulated Arc Therapy (VMAT) is an emerging treatment modality for Stereotactic Body Radiotherapy (SBRT). However, gating significantly prolongs treatment time. In order to enhance treatment efficiency, a novel dual gated VMAT, in which dynamic arc deliveries are executed sequentially in alternating exhale and inhale phases, is proposed and evaluated experimentally. The essence of dual gated VMAT is to take advantage of the natural pauses that occur at inspiration and exhalation by alternately delivering the dose at the two phases, instead of at the exhale window only. The arc deliveries at the two phases are realized by rotating the gantry forward at the exhale window and backward at the inhale window in an alternating fashion. Custom XML scripts were developed in Varian's TrueBeam STx Developer Mode to enable dual gated VMAT delivery. RapidArc plans for a lung case were generated for both inhale and exhale phases. The two plans were then combined into a dual gated arc by interleaving the arc treatment nodes of the two RapidArc plans. The dual gated plan was delivered in the development mode of a TrueBeam LINAC onto a motion phantom, and the delivery was measured using a pinpoint chamber/film/diode array (Delta 4). The measured dose distribution was compared with that computed using the Eclipse AAA algorithm. The treatment delivery time was recorded and compared with the corresponding single gated plans. Relative to the corresponding single gated delivery, treatment time efficiency was improved by 95.5% for the case studied here. Pinpoint chamber absolute dose measurement agreed with the calculation to within 0.7%. Diode array measurements revealed that 97.5% of measurement points of the dual gated RapidArc delivery passed the 3%/3 mm gamma-test criterion. A dual gated VMAT treatment has been developed and implemented successfully with nearly doubled treatment delivery efficiency. © 2012 American Association of Physicists in Medicine.
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-07-19
Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2-20), alternatives (2-5), attributes (2-20) and attribute levels (2-5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Relative d-efficiency was used to measure the optimality of each DCE design. DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes are needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
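One common way to compute relative d-efficiency for a coded design is the linear-design formula 100 · |X'X|^(1/p) / N, which equals 100 for an orthogonal, balanced design. The sketch below uses that approximation with a toy two-attribute design; DCE practice often works from the multinomial logit information matrix instead, so treat this as an illustration of the trend (more attributes and levels per task lower efficiency), not the paper's exact computation.

```python
import numpy as np
from itertools import product

def relative_d_efficiency(X):
    # Linear-design D-efficiency: 100 * |X'X|^(1/p) / N for an N x p coded design matrix.
    N, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / N

# Effects-coded full factorial for two 2-level attributes: 100% efficient.
X_full = np.array(list(product([-1, 1], repeat=2)), dtype=float)
print(relative_d_efficiency(X_full))        # -> 100.0

# Dropping a run breaks the balance and lowers the efficiency (about 94.3%).
print(relative_d_efficiency(X_full[:3]))
```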
Efficient alignment-free DNA barcode analytics
Kuksa, Pavel; Pavlovic, Vladimir
2009-01-01
Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.) On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
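The fixed-length spectrum representation is simple to reproduce: count every k-mer, then compare the count vectors. The minimal sketch below classifies a query barcode to the reference with the most similar spectrum; the toy sequences and the choice of cosine similarity are illustrative, and the paper's kernel variants and benchmark datasets are not reproduced.

```python
from itertools import product
import numpy as np

def spectrum(seq, k=4, alphabet="ACGT"):
    # Fixed-length k-mer count vector ("spectrum") for a barcode sequence.
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                        # skips ambiguous bases such as N
            v[index[kmer]] += 1
    return v

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy barcodes: classify a query to the reference species with the most similar spectrum.
refs = {"species_A": "ACGTACGTGGCCTTAAACGT", "species_B": "TTTTGGGGCCCCAAAATTGG"}
query = "ACGTACGTGGCCTTAAACGA"
scores = {name: similarity(spectrum(query), spectrum(s)) for name, s in refs.items()}
print(max(scores, key=scores.get), scores)
```

Because no alignment is computed, the cost is linear in sequence length, which is where the reported speed advantage over alignment-based barcode analysis comes from.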
A dictionary based informational genome analysis
2012-01-01
Background In the post-genomic era several methods of computational genomics are emerging to understand how the whole information is structured within genomes. The literature of the last five years accounts for several alignment-free methods, which have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces a local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these in a database, and index analysis and visualization. The validation was done by investigating genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of vivid interest in biology (for example to detect over-represented functional sequences, such as promoters), was discussed, and a method to define synthetic genetic networks was suggested. Conclusions We introduced a methodology based on dictionaries, and an efficient motif-finding software application for comparative genomics. This approach could be extended along many investigation lines, namely exported to other contexts of computational genomics, as a basis for discrimination of genomic pathologies. PMID:22985068
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
An O(Nm^2) Plane Solver for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.
1999-01-01
A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line- implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence is dependent on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous ow simulations on highly stretched meshes.
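In its simplest linear setting, the correction-scheme idea reduces to a two-grid cycle: smooth on the fine grid, restrict the residual, solve the coarse correction, prolong it back, and smooth again. The sketch below applies this to a 1-D Poisson problem with weighted-Jacobi smoothing as a minimal illustration; it is not the paper's FAS/CS Navier-Stokes solver with alternating-line-implicit smoothers.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2/3):
    # Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):                       # full weighting to the coarse grid
    rc = np.zeros(r.size // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec):                       # linear interpolation back to the fine grid
    e = np.zeros(2 * (ec.size - 1) + 1)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def two_grid(u, f, h):
    u = smooth(u, f, h)                                  # pre-smooth
    rc = restrict(residual(u, f, h))
    H, nc = 2 * h, rc.size - 1
    A = (np.diag(2 * np.ones(nc - 1)) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / (H * H)
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])              # exact coarse-grid correction
    u = u + prolong(ec)
    return smooth(u, f, h)                               # post-smooth

n = 64; h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)                       # exact solution: sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = two_grid(u, f, h)
    print(cycle, np.linalg.norm(residual(u, f, h), np.inf))   # residual drops per cycle
```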
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster when a greater number of computational cores is used. Simulation results indicate that, if the number of cores used is not equal to a multiple of the total number of cores of a cluster node, there are allocation strategies which provide more efficient calculations.
NASA Astrophysics Data System (ADS)
Salehi Fashami, Mohammad
Excessive energy dissipation in CMOS devices during switching is the primary threat to continued downscaling of computing devices in accordance with Moore's law. In the quest for alternatives to traditional transistor based electronics, nanomagnet-based computing [1, 2] is emerging as an attractive alternative since: (i) nanomagnets are intrinsically more energy-efficient than transistors due to the correlated switching of spins [3], and (ii) unlike transistors, magnets have no leakage and hence have no standby power dissipation. However, large energy dissipation in the clocking circuit appears to be a barrier to the realization of ultra low power logic devices with such nanomagnets. To alleviate this issue, we propose the use of a hybrid spintronics-straintronics or straintronic nanomagnetic logic (SML) paradigm. This uses a piezoelectric layer elastically coupled to an elliptically shaped magnetostrictive nanomagnetic layer for both logic [4-6] and memory [7-8] and other information processing [9-10] applications that could potentially be 2-3 orders of magnitude more energy efficient than current CMOS based devices. This dissertation focuses on studying the feasibility, performance and reliability of such nanomagnetic logic circuits by simulating the nanoscale magnetization dynamics of dipole coupled nanomagnets clocked by stress. Specifically, the topics addressed are: 1. Theoretical study of multiferroic nanomagnetic arrays laid out in specific geometric patterns to implement a "logic wire" for unidirectional information propagation and a universal logic gate [4-6]. 2. Monte Carlo simulations of the magnetization trajectories in a simple system of dipole coupled nanomagnets and a NAND gate described by the Landau-Lifshitz-Gilbert (LLG) equations, simulated in the presence of random thermal noise to understand the dynamic switching error [11, 12] in such devices. 3. Arriving at a lower bound for energy dissipation as a function of switching error [13] for a practical nanomagnetic logic scheme. 4. Clocking of nanomagnetic logic with surface acoustic waves (SAW) to drastically decrease the lithographic burden needed to contact each multiferroic nanomagnet while maintaining pipelined information processing. 5. Nanomagnets with four (or more) states implemented through shape engineering. Two types of magnet that encode four states, (i) diamond-shaped and (ii) concave nanomagnets, are studied for coherence of the switching process.
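The magnetization dynamics referred to throughout are governed by the LLG equation. A deterministic macrospin sketch is given below with toy parameter values, a single uniaxial anisotropy term standing in for the stress/shape anisotropy of the magnetostrictive layer, and no thermal noise; none of these choices are taken from the dissertation.

```python
import numpy as np

gamma0 = 2.211e5          # gyromagnetic ratio times mu_0 (m A^-1 s^-1)
alpha = 0.1               # Gilbert damping (toy value)
dt = 1e-13                # time step (s)
Hk = 5.0e4                # uniaxial anisotropy field along x (A/m), a stand-in for
                          # the shape/stress anisotropy of a magnetostrictive nanomagnet

def h_eff(m, h_applied):
    return h_applied + np.array([Hk * m[0], 0.0, 0.0])

def llg_step(m, h_applied):
    # Landau-Lifshitz-Gilbert: dm/dt = -gamma0/(1+alpha^2) [ m x H + alpha m x (m x H) ]
    H = h_eff(m, h_applied)
    mxH = np.cross(m, H)
    dm = -gamma0 / (1.0 + alpha ** 2) * (mxH + alpha * np.cross(m, mxH))
    m = m + dt * dm
    return m / np.linalg.norm(m)        # renormalise |m| = 1 after the Euler step

m = np.array([0.99, 0.1, 0.0]); m /= np.linalg.norm(m)   # start near the +x easy axis
h = np.array([0.0, 8.0e4, 0.0])                          # transverse field drives rotation
for _ in range(20000):                                   # ~2 ns of dynamics
    m = llg_step(m, h)
print(m)
```

A stochastic thermal field added to h_eff at each step would turn this into the kind of Monte Carlo switching-error study described in topic 2.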
Effective and efficient implementation of alternative project delivery : research summary.
DOT National Transportation Integrated Search
2017-05-01
Alternative project delivery (APD) methods such as Design Build (DB) and Construction Manager at Risk (CMAR), are used by state departments of transportation to improve the efficiency and effectiveness of project delivery. The Maryland Department of ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...
Code of Federal Regulations, 2014 CFR
2014-07-01
... of the LCL approach follows: 4.3A source conducts an initial series of at least three runs. The owner... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach follows: 4.3A source conducts an initial series of at least three runs. The owner or operator may... Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to... to Subpart KK of Part 63—Data Quality Objective and Lower Confidence Limit Approaches for Alternative...
An assessment of the potential of PFEM-2 for solving long real-time industrial applications
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; Ramajo, Damián E.; Márquez Damián, Santiago; Nigro, Norberto M.; Idelsohn, Sergio R.
2017-07-01
The latest generation of the particle finite element method (PFEM-2) is a numerical method based on the Lagrangian formulation of the equations, which presents advantages in terms of robustness and efficiency over classical Eulerian methodologies when certain kinds of flows are simulated, especially those where convection plays an important role. These situations are often encountered in real engineering problems, where very complex geometries and operating conditions require very large and long computations. The advantages that parallelism has brought to computational fluid dynamics, making computations with very fine spatial discretizations affordable, are well known. However, time cannot be parallelized in the same way, despite the effort being dedicated to space-time formulations. In this sense, PFEM-2 adds a valuable feature: its strong stability with little loss of accuracy provides an interesting way of satisfying real-life computation needs. After having already demonstrated in previous publications its ability to reproduce academic benchmark solutions with a good compromise between accuracy and efficiency, in this work the method is revisited and employed to solve several non-academic problems of technological interest which fall into that category. Simulations concerning oil-water separation, waste-water treatment, metallurgical foundries, and safety assessment are presented. These cases are selected due to their particular requirements of long simulation times and/or intensive interface treatment. Thus, large time-steps may be employed with PFEM-2 without compromising the accuracy and robustness of the simulation, as occurs with Eulerian alternatives, showing the potential of the methodology for solving not only academic tests but also real engineering problems.
Deformable registration of CT and cone-beam CT with local intensity matching.
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-07
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
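The correction-registration alternation can be prototyped with off-the-shelf tools. The sketch below uses SimpleITK, which is an assumption (the authors used GPU implementations), and substitutes a single global histogram match for the paper's local, slice-by-slice matching; file paths are placeholders.

```python
import SimpleITK as sitk

def correct_and_register(fixed_path, moving_path, outer_iters=3, demons_iters=50):
    fixed = sitk.Cast(sitk.ReadImage(fixed_path), sitk.sitkFloat32)    # planning CT
    moving = sitk.Cast(sitk.ReadImage(moving_path), sitk.sitkFloat32)  # CBCT

    displacement = None
    for _ in range(outer_iters):   # alternate intensity correction and deformable registration
        # Intensity-correction step. The paper matches local, slice-by-slice histograms;
        # a global histogram match is used here to keep the sketch short.
        matcher = sitk.HistogramMatchingImageFilter()
        matcher.SetNumberOfHistogramLevels(256)
        matcher.SetNumberOfMatchPoints(16)
        matcher.ThresholdAtMeanIntensityOn()
        corrected = matcher.Execute(moving, fixed)

        # Registration step (demons is one of the three methods evaluated in the paper).
        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(demons_iters)
        demons.SetStandardDeviations(1.5)
        if displacement is None:
            displacement = demons.Execute(fixed, corrected)
        else:
            displacement = demons.Execute(fixed, corrected, displacement)

    transform = sitk.DisplacementFieldTransform(displacement)
    warped_cbct = sitk.Resample(corrected, fixed, transform, sitk.sitkLinear, 0.0)
    return transform, warped_cbct

# Usage (placeholder file names):
# tf, warped = correct_and_register("planning_ct.nii.gz", "daily_cbct.nii.gz")
```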
HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.
NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for the factor matrices W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
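For readers unfamiliar with the alternating non-negative least squares iteration at the heart of this algorithm, here is a single-machine sketch using SciPy's `nnls` solver. It omits the distributed data layout, MPI communication, and communication-optimal blocking that HPC-NMF is about; the function name `anls_nmf` is illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, k, n_iter=50, seed=0):
    """Alternating nonnegative least squares: fix H and solve for W, then fix W
    and solve for H, one column/row of A at a time (serial sketch of the scheme
    that HPC-NMF distributes across MPI ranks)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # min ||A - W H||_F over H >= 0, solved column by column of A.
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
        # min ||A - W H||_F over W >= 0, solved row by row of A.
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
    return W, H
```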
Overview of NASA Lewis Research Center free-piston Stirling engine activities
NASA Technical Reports Server (NTRS)
Slaby, J. G.
1984-01-01
A generic free-piston Stirling technology project is being conducted to develop technologies generic to both space power and terrestrial heat pump applications in a cooperative, cost-shared effort. The generic technology effort includes extensive parametric testing of a 1 kW free-piston Stirling engine (RE-1000), development of a free-piston Stirling performance computer code, design and fabrication under contract of a hydraulic output modification for RE-1000 engine tests, and a 1000-hour endurance test, under contract, of a 3 kWe free-piston Stirling/alternator engine. A newly initiated space power technology feasibility demonstration effort addresses the capability of scaling a free-piston Stirling/alternator system to about 25 kWe; developing thermodynamic cycle efficiencies greater than or equal to 70 percent of Carnot at temperature ratios on the order of 1.5 to 2.0; achieving a power conversion unit specific weight of 6 kg/kWe; operating with noncontacting gas bearings; and dynamically balancing the system. Planned engine and component design and test efforts are described.
Birnhack, Liat; Nir, Oded; Telzhenski, Marina; Lahav, Ori
2015-01-01
Deliberate struvite (MgNH4PO4) precipitation from wastewater streams has been the topic of extensive research in the last two decades and is expected to gather worldwide momentum in the near future as a P-reuse technique. A wide range of operational alternatives has been reported for struvite precipitation, including the application of various Mg(II) sources, two pH elevation techniques and several Mg:P ratios and pH values. The choice of each operational parameter within the struvite precipitation process affects process efficiency, the overall cost and also the choice of other operational parameters. Thus, a comprehensive simulation program that takes all these parameters into account is essential for process design. This paper introduces a systematic decision-supporting tool which accepts a wide range of possible operational parameters, including unconventional Mg(II) sources (i.e. seawater and seawater nanofiltration brines). The study is supplied with a free-of-charge computerized tool (http://tx.technion.ac.il/~agrengn/agr/Struvite_Program.zip) which links two computer platforms (Python and PHREEQC) for executing thermodynamic calculations according to predefined kinetic considerations. The model can be (inter alia) used for optimizing the struvite-fluidized bed reactor process operation with respect to P removal efficiency, struvite purity and economic feasibility of the chosen alternative. The paper describes the algorithm and its underlying assumptions, and shows results (i.e. effluent water quality, cost breakdown and P removal efficiency) of several case studies consisting of typical wastewaters treated at various operational conditions.
Experimental and Computational Study of Trapped Vortex Combustor Sector Rig With Tri-Pass Diffuser
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Shouse, D. T.; Roquernore, W. M.; Burrus, D. L.; Duncan, B. S.; Ryder, R. C.; Brankovic, A.; Liu, N.-S.; Gallagher, J. R.; Hendricks, J. A.
2004-01-01
The Trapped Vortex Combustor (TVC) potentially offers numerous operational advantages over current production gas turbine engine combustors. These include lower weight, lower pollutant emissions, effective flame stabilization, high combustion efficiency, excellent high altitude relight capability, and operation in the lean burn or RQL modes of combustion. The present work describes the operational principles of the TVC, and extends diffuser velocities toward choked flow and provides system performance data. Performance data include EINOx results for various fuel-air ratios and combustor residence times, combustion efficiency as a function of combustor residence time, and combustor lean blow-out (LBO) performance. Computational fluid dynamics (CFD) simulations using liquid spray droplet evaporation and combustion modeling are performed and related to flow structures observed in photographs of the combustor. The CFD results are used to understand the aerodynamics and combustion features under different fueling conditions. Performance data acquired to date are favorable compared to conventional gas turbine combustors. Further testing over a wider range of fuel-air ratios, fuel flow splits, and pressure ratios is in progress to explore the TVC performance. In addition, alternate configurations for the upstream pressure feed, including bi-pass diffusion schemes, as well as variations on the fuel injection patterns, are currently in test and evaluation phases.
Volume sharing of reservoir water
NASA Astrophysics Data System (ADS)
Dudley, Norman J.
1988-05-01
Previous models optimize short-, intermediate-, and long-run irrigation decision making in a simplified river valley system characterized by highly variable water supplies and demands for a single decision maker controlling both reservoir releases and farm water use. A major problem in relaxing the assumption of one decision maker is communicating the stochastic nature of supplies and demands between reservoir and farm managers. In this paper, an optimizing model is used to develop release rules for reservoir management when all users share equally in releases, and computer simulation is used to generate an historical time sequence of announced releases. These announced releases become a state variable in a farm management model which optimizes farm area-to-irrigate decisions through time. Such modeling envisages the use of growing area climatic data by the reservoir authority to gauge water demand and the transfer of water supply data from reservoir to farm managers via computer data files. Alternative model forms, including allocating water on a priority basis, are discussed briefly. Results show lower mean aggregate farm income and lower variance of aggregate farm income than in the single decision-maker case. This short-run economic efficiency loss coupled with likely long-run economic efficiency losses due to the attenuated nature of property rights indicates the need for quite different ways of integrating reservoir and farm management.
NASA Astrophysics Data System (ADS)
Bui, Francis Minhthang; Hatzinakos, Dimitrios
2007-12-01
As electronic communications become more prevalent, mobile and universal, the threats of data compromises also accordingly loom larger. In the context of a body sensor network (BSN), which permits pervasive monitoring of potentially sensitive medical data, security and privacy concerns are particularly important. It is a challenge to implement traditional security infrastructures in these types of lightweight networks since they are by design limited in both computational and communication resources. A key enabling technology for secure communications in BSNs has emerged to be biometrics. In this work, we present two complementary approaches which exploit physiological signals to address security issues: (1) a resource-efficient key management system for generating and distributing cryptographic keys to constituent sensors in a BSN; (2) a novel data scrambling method, based on interpolation and random sampling, that is envisioned as a potential alternative to conventional symmetric encryption algorithms for certain types of data. The former targets the resource constraints in BSNs, while the latter addresses the fuzzy variability of biometric signals, which has largely precluded the direct application of conventional encryption. Using electrocardiogram (ECG) signals as biometrics, the resulting computer simulations demonstrate the feasibility and efficacy of these methods for delivering secure communications in BSNs.
Cryogenic setup for trapped ion quantum computing.
Brandl, M F; van Mourik, M W; Postler, L; Nolf, A; Lakhmanskiy, K; Paiva, R R; Möller, S; Daniilidis, N; Häffner, H; Kaushal, V; Ruster, T; Warschburger, C; Kaufmann, H; Poschinger, U G; Schmidt-Kaler, F; Schindler, P; Monz, T; Blatt, R
2016-11-01
We report on the design of a cryogenic setup for trapped ion quantum computing containing a segmented surface electrode trap. The heat shield of our cryostat is designed to attenuate alternating magnetic field noise, resulting in 120 dB reduction of 50 Hz noise along the magnetic field axis. We combine this efficient magnetic shielding with high optical access required for single ion addressing as well as for efficient state detection by placing two lenses each with numerical aperture 0.23 inside the inner heat shield. The cryostat design incorporates vibration isolation to avoid decoherence of optical qubits due to the motion of the cryostat. We measure vibrations of the cryostat of less than ±20 nm over 2 s. In addition to the cryogenic apparatus, we describe the setup required for operation with ⁴⁰Ca⁺ and ⁸⁸Sr⁺ ions. The instability of the laser manipulating the optical qubits in ⁴⁰Ca⁺ is characterized, yielding a minimum Allan deviation of 2.4 × 10⁻¹⁵ at 0.33 s. To evaluate the performance of the apparatus, we trapped ⁴⁰Ca⁺ ions, obtaining a heating rate of 2.14(16) phonons/s and a Gaussian decay of the Ramsey contrast with a 1/e-time of 18.2(8) ms.
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
NASA Astrophysics Data System (ADS)
Iacobucci, Joseph V.
The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain by establishing an effective means to communicate the semantics from the RAAM framework. These techniques make it possible to include diverse multi-metric models within the RAAM framework in addition to system and operational level trades. A canonical example was used to explore the uses of the methodology. The canonical example contains all of the features of a full system of systems architecture analysis study but uses fewer tasks and systems. Using RAAM with the canonical example it was possible to consider both system and operational level trades in the same analysis. Once the methodology had been tested with the canonical example, a Suppression of Enemy Air Defenses (SEAD) capability model was developed. Due to the sensitive nature of analyses on that subject, notional data was developed. The notional data has similar trends and properties to realistic Suppression of Enemy Air Defenses data. RAAM was shown to be traceable and provided a mechanism for a unified treatment of a variety of metrics. The SEAD capability model demonstrated lower computer runtimes and reduced model creation complexity as compared to methods currently in use. To determine the usefulness of the implementation of the methodology on current computing hardware, RAAM was tested with system of system architecture studies of different sizes. This was necessary since system of systems may be called upon to accomplish thousands of tasks.
It has been clearly demonstrated that RAAM is able to enumerate and evaluate the types of large, complex design spaces usually encountered in capability based design, oftentimes providing the ability to efficiently search the entire decision space. The core algorithms for generation and evaluation of alternatives scale linearly with expected problem sizes. The SEAD capability model outputs prompted the discovery of a new issue, the data storage and manipulation requirements for an analysis. Two strategies were developed to counter large data sizes, the use of portfolio views and top 'n' analysis. This proved the usefulness of the RAAM framework and methodology during Pre-Milestone A capability based analysis. (Abstract shortened by UMI.)
Sign use and cognition in automated scientific discovery: are computers only special kinds of signs?
NASA Astrophysics Data System (ADS)
Giza, Piotr
2018-04-01
James Fetzer criticizes the computational paradigm, prevailing in cognitive science by questioning, what he takes to be, its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind, that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue, that with computer systems dealing with scientific discovery, the matter is not so simple as that. The alleged superiority of humans using signs to stand for something other over computers being merely "physical symbol systems" or "automatic formal systems" is only easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to automatic generation of cognitive theories seem to show, that computer systems are capable of very efficient problem solving activity in science, which is neither purposive nor meaningful, nor algorithmic. This, I think, undermines Fetzer's argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem solving activity.
Pinthong, Watthanai; Muangruen, Panya
2016-01-01
Development of high-throughput technologies, such as Next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirement. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without a need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
Analytical Cost Metrics : Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"
Field evaluation of alternative and cost efficient bridge approach slabs.
DOT National Transportation Integrated Search
2013-11-01
A recent study on cost-efficient alternative bridge approach slab (BAS) designs (Thiagarajan et al. 2010) recommended three new BAS designs for possible implementation by MoDOT, namely a) a 20-foot cast-in-place slab with sleeper slab (CIP...
Advances in Significance Testing for Cluster Detection
NASA Astrophysics Data System (ADS)
Coleman, Deidra Andrea
Over the past two decades, much attention has been given to data driven project goals such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial as it can lead to the identification of specified sequences of DNA nucleotides that are related to important biological functions or the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free throw shots by National Basketball Association players during the 2009-2010 regular season. In Chapter 4, we give a procedure to detect outbreaks using syndromic surveillance data while controlling the Bayesian False Discovery Rate (BFDR). The procedure entails choosing an appropriate Bayesian model that captures the spatial dependency inherent in epidemiological data and considers all days of interest, selecting a test statistic based on a chosen measure that provides the magnitude of the maximal spatial cluster for each day, and identifying a cutoff value that controls the BFDR for rejecting the collective null hypothesis of no outbreak over a collection of days for a specified region. We use our procedure to analyze botulism-like syndrome data collected by the North Carolina Disease Event Tracking and Epidemiologic Collection Tool (NC DETECT).
Modeling of biological intelligence for SCM system optimization.
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open, and self-organizing, and is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.
Intermediary LEO propagation including higher order zonal harmonics
NASA Astrophysics Data System (ADS)
Hautesserres, Denis; Lara, Martin
2017-04-01
Two new intermediary orbits of the artificial satellite problem are proposed. The analytical solutions include higher order effects of the geopotential, and are obtained by means of a torsion transformation applied to the quasi-Keplerian system resulting after the elimination of the parallax simplification, for the first intermediary, and after the elimination of the parallax and perigee simplifications, for the second one. The new intermediaries perform notably well for low Earth orbit propagation, are free from special functions, and prove advantageous, in both accuracy and efficiency, when compared to the standard Cowell integration of the J_2 problem, thus providing appealing alternatives for onboard, short-term orbit propagation under limited computational resources.
Dal Palù, Alessandro; Dovier, Agostino; Pontelli, Enrico
2010-01-01
Crystal lattices are discrete models of the three-dimensional space that have been effectively employed to facilitate the task of determining proteins' natural conformation. This paper investigates alternative global constraints that can be introduced in a constraint solver over discrete crystal lattices. The objective is to enhance the efficiency of lattice solvers in dealing with the construction of approximate solutions of the protein structure determination problem. Some of them (e.g., self-avoiding-walk) have already been used, explicitly or implicitly, in previous approaches, while others (e.g., the density constraint) are new. The intrinsic complexities of all of them are studied and preliminary experimental results are discussed.
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
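As a concrete instance of the specialized Gaussian elimination mentioned above for tridiagonal systems, here is a sketch of the Thomas algorithm in Python (my own illustration, not code from the chapter): it solves a tridiagonal system in O(n) operations instead of the O(n^3) of a general dense solver.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused).
    Forward elimination followed by back substitution, O(n) total."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For a small test, building the full matrix with `np.diag` and comparing against `np.linalg.solve` confirms the two agree to machine precision.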
NASA Astrophysics Data System (ADS)
Sanders, Sören; Holthaus, Martin
2017-11-01
We explore in detail how analytic continuation of divergent perturbation series by generalized hypergeometric functions is achieved in practice. Using the example of strong-coupling perturbation series provided by the two-dimensional Bose-Hubbard model, we compare hypergeometric continuation to Shanks and Padé techniques, and demonstrate that the former yields a powerful, efficient and reliable alternative for computing the phase diagram of the Mott insulator-to-superfluid transition. In contrast to Shanks transformations and Padé approximations, hypergeometric continuation also allows us to determine the exponents which characterize the divergence of correlation functions at the transition points. Therefore, hypergeometric continuation constitutes a promising tool for the study of quantum phase transitions.
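For context on the Shanks transformation that hypergeometric continuation is compared against, a minimal sketch follows (an illustration of the standard formula, not of the paper's hypergeometric method): each output term combines three consecutive partial sums via S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n).

```python
import numpy as np

def shanks(partial_sums):
    """One pass of the Shanks transformation, which accelerates the
    convergence of a sequence of partial sums."""
    A = np.asarray(partial_sums, dtype=float)
    num = A[2:] * A[:-2] - A[1:-1] ** 2
    den = A[2:] + A[:-2] - 2.0 * A[1:-1]
    return num / den

# Example: partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...) for pi.
terms = np.array([4.0 * (-1) ** k / (2 * k + 1) for k in range(10)])
print(shanks(np.cumsum(terms)))  # already far closer to pi than the raw sums
```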
Modeling of Biological Intelligence for SCM System Optimization
Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang
2012-01-01
This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open, and self-organizing, and is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through models' marginal likelihood and prior probability. The heavy computation burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computation burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through the numerical experiment of a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods. BMA-TIE has better predictive performance than the other BMA predictions. TIE also estimates a conceptual model's marginal likelihood with high stability: the marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
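To make the simpler estimators concrete, the sketch below shows log-space versions of the arithmetic mean estimator (over prior draws) and the harmonic mean estimator (over posterior draws), plus the conversion of log marginal likelihoods into BMA weights. This is a generic illustration under the usual definitions of these estimators, not the study's adaptive sparse grid or thermodynamic integration code.

```python
import numpy as np
from scipy.special import logsumexp

def log_ml_arithmetic(loglik_prior_samples):
    """Arithmetic mean estimator (AME): average the likelihood over parameter
    draws from the prior, computed stably in log space."""
    ll = np.asarray(loglik_prior_samples)
    return logsumexp(ll) - np.log(ll.size)

def log_ml_harmonic(loglik_posterior_samples):
    """Harmonic mean estimator (HME): the marginal likelihood is the harmonic
    mean of the likelihood evaluated at posterior draws."""
    ll = np.asarray(loglik_posterior_samples)
    return np.log(ll.size) - logsumexp(-ll)

def bma_weights(log_mls, log_priors):
    """Posterior model probabilities from log marginal likelihoods and
    log prior model probabilities."""
    log_post = np.asarray(log_mls) + np.asarray(log_priors)
    return np.exp(log_post - logsumexp(log_post))
```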
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
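Since the performance measure throughout is the area under the ROC curve, the short sketch below shows the standard nonparametric (Mann-Whitney) AUC estimate from observer test statistics on signal-present and signal-absent images; it is the frequentist counterpart of, not a substitute for, the Bayesian AUC estimation proposed in the paper.

```python
import numpy as np

def auc_mann_whitney(t_signal, t_noise):
    """Nonparametric AUC for a binary task: the probability that the observer's
    test statistic is larger for a signal-present image than for a
    signal-absent one, with ties counted as 1/2."""
    t_signal = np.asarray(t_signal, dtype=float)[:, None]
    t_noise = np.asarray(t_noise, dtype=float)[None, :]
    greater = (t_signal > t_noise).mean()
    ties = (t_signal == t_noise).mean()
    return greater + 0.5 * ties
```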
10 CFR 435.305 - Alternative compliance procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Alternative compliance procedure. 435.305 Section 435.305 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY STANDARDS FOR NEW FEDERAL LOW-RISE RESIDENTIAL BUILDINGS Mandatory Energy Efficiency Standards for Federal Residential Buildings § 435.305...
Energy and economic efficiency alternatives for electric lighting in commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robbins, C L; Hunter, K C; Carlisle, N
1985-10-01
This report investigates current efficient alternatives for replacing or supplementing electric lighting systems in commercial buildings. Criteria for establishing the economic attractiveness of various lighting alternatives are defined, as is the effect of future changes in building lighting on utility capacity. The report focuses on the energy savings potential, economic efficiency, and energy demand reduction of three categories of lighting alternatives: (1) use of a renewable resource (daylighting) to replace or supplement electric lighting; (2) use of task/ambient lighting in lieu of overhead task lighting; and (3) equipment changes to improve lighting energy efficiency. The results indicate that all three categories offer opportunities to reduce lighting energy use in commercial buildings. Further, reducing lighting energy causes a reduction in cooling energy use and cooling capacity while increasing heating energy use. It does not typically increase heating capacity because the use of lighting in the building does not offset the need for peak heating at night.
The report gives results of a screening evaluation of volatile organic emissions from printed circuit board laminates and potential pollution prevention alternatives. In the evaluation, printed circuit board laminates, without circuitry, commonly found in personal computer (PC) m...
20 CFR 404.242 - Use of old-start primary insurance amount as guaranteed alternative.
Code of Federal Regulations, 2010 CFR
2010-04-01
... guaranteed alternative. 404.242 Section 404.242 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Old-Start Method of Computing Primary Insurance Amounts § 404.242 Use of old-start primary insurance amount as...
Zhang, Liu-xia; Wang, Shu-zhong; Sui, Xiao-lei; Zhang, Zhen-xian
2011-09-01
This paper studied the effects of alternative furrow irrigation and nitrogen (N) application rate (no N, optimal N, and conventional N) on the photosynthesis, growth characteristics, yield formation, and fruit quality of cucumber (Cucumis sativus) cultivar Jinyu No. 5 in a solar greenhouse during the winter-spring and autumn-winter growing seasons. Under alternative furrow irrigation, the net photosynthetic rate of upper, middle, and lower leaves was appreciably lower and the transpiration rate decreased significantly, and the transient water use efficiency of upper and middle leaves improved, as compared with those under conventional irrigation. Stomatal factor was the limiting factor of photosynthesis under alternative furrow irrigation. The photosynthesis and transient water use efficiency of functional leaves under alternative furrow irrigation increased with increasing N application rate. Compared with conventional irrigation, alternative furrow irrigation decreased leaf chlorophyll content and plant biomass, but increased root biomass, root/shoot ratio, and dry matter allocation in root and fruit. The economic output under alternative furrow irrigation was nearly the same as that under conventional irrigation, whereas the water use efficiency for economic yield increased significantly, suggesting the beneficial effects of alternative furrow irrigation on root development and fruit formation. With the increase of N application rate, the leaf chlorophyll content, chlorophyll a/b, specific leaf mass, plant biomass, economic yield, and fruit Vc and soluble sugar contents under alternative furrow irrigation increased, but no significant difference was observed between the optimal N and conventional N treatments. N application had little effect on the water use efficiency for economic yield. The economic yield and biomass production of the cucumber were significantly higher in the winter-spring growing season than in the autumn-winter growing season.
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
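For reference, the full counterpoise (CP) correction of Boys and Bernardi mentioned above evaluates every fragment in the complete dimer basis (ghost functions on the partner), so the basis set superposition error largely cancels; in the standard textbook form, with the superscript denoting the basis used:

```latex
% Counterpoise-corrected interaction energy of a dimer AB:
% all three energies are computed in the full AB basis.
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB}
```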
All-memristive neuromorphic computing with level-tuned neurons
NASA Astrophysics Data System (ADS)
Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos
2016-09-01
In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.
All-memristive neuromorphic computing with level-tuned neurons.
Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos
2016-09-02
In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.
Dimethyl ether (DME) as an alternative fuel
NASA Astrophysics Data System (ADS)
Semelsberger, Troy A.; Borup, Rodney L.; Greene, Howard L.
With ever growing concerns on environmental pollution, energy security, and future oil supplies, the global community is seeking non-petroleum based alternative fuels, along with more advanced energy technologies (e.g., fuel cells) to increase the efficiency of energy use. The most promising alternative fuel will be the fuel that has the greatest impact on society. The major impact areas include well-to-wheel greenhouse gas emissions, non-petroleum feed stocks, well-to-wheel efficiencies, fuel versatility, infrastructure, availability, economics, and safety. Compared to some of the other leading alternative fuel candidates (i.e., methane, methanol, ethanol, and Fischer-Tropsch fuels), dimethyl ether appears to have the largest potential impact on society, and should be considered as the fuel of choice for eliminating the dependency on petroleum. DME can be used as a clean, high-efficiency compression ignition fuel with reduced NOx, SOx, and particulate matter emissions; it can be efficiently reformed to hydrogen at low temperatures; and it does not have large issues with toxicity, production, infrastructure, and transportation as do various other fuels. The literature relevant to DME use is reviewed and summarized to demonstrate the viability of DME as an alternative fuel.
NASA Astrophysics Data System (ADS)
Jezzine, Karim; Imperiale, Alexandre; Demaldent, Edouard; Le Bourdais, Florian; Calmon, Pierre; Dominguez, Nicolas
2018-04-01
Models for the simulation of ultrasonic inspections of flat and curved plate-like composite structures, as well as stiffeners, are available in the CIVA-COMPOSITE module released in 2016. A first modelling approach using a ray-based model is able to predict the ultrasonic propagation in an anisotropic effective medium obtained after having homogenized the composite laminate. Fast 3D computations can be performed on configurations featuring delaminations, flat bottom holes or inclusions for example. In addition, computations on ply waviness using this model will be available in CIVA 2017. Another approach is proposed in the CIVA-COMPOSITE module. It is based on the coupling of CIVA ray-based model and a finite difference scheme in time domain (FDTD) developed by AIRBUS. The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Alternatively, a high order finite element approach is currently developed at CEA but not yet integrated in CIVA. The advantages of this approach will be discussed and first simulation results on Carbon Fiber Reinforced Polymers (CFRP) will be shown. Finally, the application of these modelling tools to the construction of metamodels is discussed.
Howell, Bryan; Lad, Shivanand P.; Grill, Warren M.
2014-01-01
Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn, provides predictions that can be tested in future clinical studies assessing the potential therapeutic benefits of intradural SCS. PMID:25536035
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
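As a rough illustration of why run lengths help for an IID source with a dominant symbol, the sketch below decomposes a sequence into (run-of-dominant-symbol, other-symbol) pairs; an ARH-style coder would then encode the two resulting streams with two alternating Huffman codes. This shows only the decomposition idea, not the paper's actual codebook construction.

```python
def runlength_pairs(sequence, dominant):
    """Decompose a sequence into (run length of the dominant symbol,
    terminating non-dominant symbol) pairs; a trailing run with no
    terminator is paired with None."""
    pairs, run = [], 0
    for s in sequence:
        if s == dominant:
            run += 1
        else:
            pairs.append((run, s))
            run = 0
    if run:
        pairs.append((run, None))
    return pairs

print(runlength_pairs("aaabaaaacaab", "a"))
# [(3, 'b'), (4, 'c'), (2, 'b')]
```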
Membraneless laminar flow cell for electrocatalytic CO₂ reduction with liquid product separation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monroe, Morgan M.; Lobaccaro, Peter; Lum, Yanwei
The production of liquid fuel products via electrochemical reduction of CO₂ is a potential path to produce sustainable fuels. However, to be practical, a separation strategy is required to isolate the fuel-containing electrolyte produced at the cathode from the anode and also prevent the oxidation products (i.e. O₂) from reaching the cathode. Ion-conducting membranes have been applied in CO₂ reduction reactors to achieve this separation, but they represent an efficiency loss and can be permeable to some product species. An alternative membraneless approach is developed here to maintain product separation through the use of a laminar flow cell. Computational modelling shows that near-unity separation efficiencies are possible at current densities achievable now with metal cathodes via optimization of the spacing between the electrodes and the electrolyte flow rate. Laminar flow reactor prototypes were fabricated with a range of channel widths by 3D printing. CO₂ reduction to formic acid on Sn electrodes was used as the liquid product forming reaction, and the separation efficiency for the dissolved product was evaluated with high performance liquid chromatography. Trends in product separation efficiency with channel width and flow rate were in qualitative agreement with the model, but the separation efficiency was lower, with a maximum value of 90% achieved.
Membraneless laminar flow cell for electrocatalytic CO₂ reduction with liquid product separation
Monroe, Morgan M.; Lobaccaro, Peter; Lum, Yanwei; ...
2017-03-16
The production of liquid fuel products via electrochemical reduction of CO 2 is a potential path to produce sustainable fuels. However, to be practical, a separation strategy is required to isolate the fuel-containing electrolyte produced at the cathode from the anode and also prevent the oxidation products (i.e. O 2) from reaching the cathode. Ion-conducting membranes have been applied in CO 2 reduction reactors to achieve this separation, but they represent an efficiency loss and can be permeable to some product species. An alternative membraneless approach is developed here to maintain product separation through the use of a laminar flowmore » cell. Computational modelling shows that near-unity separation efficiencies are possible at current densities achievable now with metal cathodes via optimization of the spacing between the electrodes and the electrolyte flow rate. Laminar flow reactor prototypes were fabricated with a range of channel widths by 3D printing. CO 2 reduction to formic acid on Sn electrodes was used as the liquid product forming reaction, and the separation efficiency for the dissolved product was evaluated with high performance liquid chromatography. Trends in product separation efficiency with channel width and flow rate were in qualitative agreement with the model, but the separation efficiency was lower, with a maximum value of 90% achieved.« less
Castagné, Raphaële; Boulangé, Claire Laurence; Karaman, Ibrahim; Campanella, Gianluca; Santos Ferreira, Diana L; Kaluarachchi, Manuja R; Lehne, Benjamin; Moayyeri, Alireza; Lewis, Matthew R; Spagou, Konstantina; Dona, Anthony C; Evangelos, Vangelis; Tracy, Russell; Greenland, Philip; Lindon, John C; Herrington, David; Ebbels, Timothy M D; Elliott, Paul; Tzoulaki, Ioanna; Chadeau-Hyam, Marc
2017-10-06
1H NMR spectroscopy of biofluids generates reproducible data allowing detection and quantification of small molecules in large population cohorts. Statistical models to analyze such data are now well established, and univariate metabolome-wide association studies (MWAS), which investigate the spectral features separately, have emerged as a computationally efficient and interpretable alternative to multivariate models. MWAS rely on accurate estimation of a metabolome-wide significance level (MWSL) to control the family-wise error rate. Subsequent interpretation requires efficient visualization and formal feature annotation, which, in turn, call for efficient prioritization of the spectral variables of interest. Using human serum 1H NMR spectroscopic profiles from 3948 participants from the Multi-Ethnic Study of Atherosclerosis (MESA), we have performed a series of MWAS for serum levels of glucose. We first propose an extension of the conventional MWSL that yields stable estimates of the MWSL across different model parameterizations and distributional features of the outcome. We then propose efficient visualization methods and a strategy based on subsampling and internal validation to prioritize the associations. Our work proposes and illustrates practical and scalable solutions to facilitate the implementation of the MWAS approach and improve interpretation in large cohort studies. PMID:28823158
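For readers unfamiliar with the MWSL concept, the Python sketch below shows the standard permutation-based estimate: permute the outcome, record the minimum p-value across all spectral variables, and take the alpha-quantile of those minima as the per-feature threshold. It is a generic illustration on simulated data, not the stabilized extension proposed in the paper, and the function name and parameters are our own.

# Generic sketch of a permutation-based metabolome-wide significance level
# (MWSL), broadly in the spirit of MWAS practice; this is NOT the specific
# extension proposed in the paper, just the standard permutation idea.
import numpy as np
from scipy import stats

def estimate_mwsl(X, y, n_perm=1000, alpha=0.05, seed=0):
    """Permute the outcome, record the minimum p-value over all spectral
    variables for each permutation, and return the alpha-quantile of those
    minima as the per-feature significance threshold (controls FWER at alpha)."""
    rng = np.random.default_rng(seed)
    min_pvals = np.empty(n_perm)
    for b in range(n_perm):
        y_perm = rng.permutation(y)
        # univariate association of each spectral variable with the permuted outcome
        pvals = np.array([stats.pearsonr(X[:, j], y_perm)[1] for j in range(X.shape[1])])
        min_pvals[b] = pvals.min()
    return np.quantile(min_pvals, alpha)

# Toy example with simulated data (purely illustrative)
rng = np.random.default_rng(1)
n, p = 200, 300                      # participants x spectral variables
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)           # stand-in outcome, here just noise
print("estimated MWSL:", estimate_mwsl(X, y, n_perm=100))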
NASA Astrophysics Data System (ADS)
Di Dato, Mariaines; de Barros, Felipe P. J.; Fiori, Aldo; Bellin, Alberto
2018-03-01
Natural attenuation and in situ oxidation are commonly considered low-cost alternatives to ex situ remediation. The efficiency of such remediation techniques is hindered by difficulties in achieving good dilution and mixing of the contaminant, in particular when the plume deformation is physically constrained by an array of wells serving as a containment system. In that case, dilution may be enhanced by inducing an engineered sequence of injections and extractions from the pumping system, which also works as a hydraulic barrier. In this way, the aquifer acts as a natural mixer, in a manner similar to engineered industrial mixers. Improving the efficiency of such hydrogeological mixers is a challenging task, owing to the need for a three-dimensional setup while keeping the computational burden manageable. Analytical solutions, though approximate, are a suitable and efficient tool for seeking the optimum among all possible flow configurations. Here we develop a novel physically based model to demonstrate how the combined spatiotemporal fluctuations of the water fluxes control solute trajectories and residence time distributions and, therefore, the effectiveness of contaminant plume dilution and mixing. Our results show how external forcing configurations are capable of inducing distinct time-varying groundwater flow patterns that yield different solute dilution rates.
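To illustrate the basic mechanism (a strongly simplified 2-D sketch of our own; the study develops 3-D analytical solutions), the Python snippet below tracks a solute particle in the superposition of a uniform background flow and two wells whose injection and extraction roles alternate in time, showing how an engineered pumping sequence produces time-varying trajectories. Well locations, rates, and the switching period are arbitrary illustrative values.

# Minimal 2-D sketch (assumptions mine; the paper uses 3-D analytical solutions):
# track a solute particle in the superposition of a uniform background flow and
# two wells whose injection/extraction roles alternate in time, illustrating
# how engineered transient pumping deforms trajectories and enhances dilution.
import math

U_BG = 1.0e-5                     # m/s, uniform background (ambient) velocity
WELLS = [(-5.0, 0.0), (5.0, 0.0)] # well locations (m)
Q0 = 2.0e-4                       # m^2/s, well strength per unit thickness/porosity
PERIOD = 5.0e4                    # s, switching period of the pumping sequence

def well_rates(t):
    """Alternate which well injects and which extracts every half period."""
    phase = int(t // (PERIOD / 2)) % 2
    return (Q0, -Q0) if phase == 0 else (-Q0, Q0)

def velocity(x, y, t):
    """Superposed velocity: uniform flow plus radial flow from each well."""
    u, v = U_BG, 0.0
    for (xw, yw), q in zip(WELLS, well_rates(t)):
        dx, dy = x - xw, y - yw
        r2 = dx * dx + dy * dy + 1e-6   # avoid the singularity at the well
        u += q / (2.0 * math.pi) * dx / r2
        v += q / (2.0 * math.pi) * dy / r2
    return u, v

def track(x0, y0, t_end, dt=100.0):
    """Explicit Euler particle tracking; adequate for a qualitative illustration."""
    x, y, t = x0, y0, 0.0
    path = [(x, y)]
    while t < t_end:
        u, v = velocity(x, y, t)
        x, y, t = x + u * dt, y + v * dt, t + dt
        path.append((x, y))
    return path

path = track(-20.0, 1.0, t_end=4 * PERIOD)
print("start:", path[0], " end:", (round(path[-1][0], 2), round(path[-1][1], 2)))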
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. Compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration into inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source code used was GSLIB, but the entire code was extensively modified to accommodate the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
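The paper parallelizes the sequential algorithms internally after rewriting GSLIB in C; as a simpler, generic illustration of why stochastic simulation lends itself to parallel execution, the Python sketch below instead distributes independent realizations of a toy 1-D sequential Gaussian simulation across processes. This is not the authors' parallelization strategy, and the covariance model and neighborhood settings are assumed.

# Illustrative sketch only: independent realizations of a toy 1-D sequential
# Gaussian simulation distributed across processes with multiprocessing.
# This is a different (and simpler) parallelization strategy than the one
# described in the paper; covariance and search settings are assumed.
import numpy as np
from multiprocessing import Pool

GRID = np.arange(0.0, 100.0, 1.0)    # 1-D grid node coordinates
SILL, RANGE_A, MAX_NEIGH = 1.0, 15.0, 8

def cov(h):
    """Exponential covariance model."""
    return SILL * np.exp(-np.abs(h) / RANGE_A)

def sgs_realization(seed):
    """One unconditional sequential Gaussian simulation on the 1-D grid."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(GRID))    # random visiting path
    values = np.full(len(GRID), np.nan)
    simulated = []                        # indices already simulated
    for idx in order:
        if simulated:
            # nearest already-simulated neighbours, then simple kriging
            neigh = sorted(simulated, key=lambda j: abs(GRID[j] - GRID[idx]))[:MAX_NEIGH]
            K = cov(GRID[neigh][:, None] - GRID[neigh][None, :])
            k = cov(GRID[neigh] - GRID[idx])
            lam = np.linalg.solve(K + 1e-10 * np.eye(len(neigh)), k)
            mean = lam @ values[neigh]
            var = max(SILL - lam @ k, 1e-12)
        else:
            mean, var = 0.0, SILL
        values[idx] = rng.normal(mean, np.sqrt(var))   # draw from the conditional
        simulated.append(idx)
    return values

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        realizations = pool.map(sgs_realization, range(8))   # 8 independent runs
    print("mean over realizations:", np.mean(realizations).round(3))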
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms that are often computationally expensive, making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the space- and time-dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure below 5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
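The final superposition step mentioned in the abstract can be sketched compactly: given the pressure response to a constant reference rate, discrete convolution over the rate changes yields the response to an arbitrary piecewise-constant rate schedule. In the Python sketch below the unit response is a synthetic placeholder rather than the perturbation-based solution, and all rates and times are illustrative.

# Sketch of rate superposition (discrete convolution in time): convert the
# pressure response to a constant reference rate into the response to an
# arbitrary piecewise-constant rate schedule. The unit response used here is a
# synthetic placeholder, not the perturbation-based solution from the paper.
import numpy as np

def unit_response(t):
    """Placeholder drawdown per unit rate (a generic log-like late-time response)."""
    return np.where(t > 0.0, np.log1p(t), 0.0)

def multirate_pressure(t, rate_times, rates, q_ref=1.0):
    """Superpose rate changes: dp(t) = sum_i (q_i - q_{i-1})/q_ref * dp_unit(t - t_i)."""
    t = np.asarray(t, dtype=float)
    dp = np.zeros_like(t)
    prev_q = 0.0
    for t_i, q_i in zip(rate_times, rates):
        dp += (q_i - prev_q) / q_ref * unit_response(t - t_i)
        prev_q = q_i
    return dp

# Example schedule: produce at 2.0 from t=0, cut to 0.5 at t=10, shut in at t=25
times = np.linspace(0.0, 40.0, 9)
dp = multirate_pressure(times, rate_times=[0.0, 10.0, 25.0], rates=[2.0, 0.5, 0.0])
for tt, p in zip(times, dp):
    print(f"t = {tt:5.1f}  dp = {p:6.3f}")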