Sample records for greater computational efficiency

  1. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies that provide more efficient calculations.

  2. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  3. Automated Measurement of Patient-Specific Tibial Slopes from MRI

    PubMed Central

    Amerinatanzi, Amirhesam; Summers, Rodney K.; Ahmadi, Kaveh; Goel, Vijay K.; Hewett, Timothy E.; Nyman, Edward

    2017-01-01

    Background: Multi-planar proximal tibial slopes may be associated with increased likelihood of osteoarthritis and anterior cruciate ligament injury, due in part to their role in checking the anterior-posterior stability of the knee. Established methods suffer from repeatability limitations and lack the computational efficiency needed for intuitive clinical adoption. The aims of this study were to develop a novel automated approach and to compare the repeatability and computational efficiency of the approach against previously established methods. Methods: Tibial slope geometries were obtained via MRI and measured using an automated Matlab-based approach. Data were compared for repeatability and evaluated for computational efficiency. Results: Mean lateral tibial slope (LTS) for females (7.2°) was greater than for males (1.66°). Mean LTS in the lateral concavity zone was greater for females (7.8° for females, 4.2° for males). Mean medial tibial slope (MTS) for females was greater (9.3° vs. 4.6°). Along the medial concavity zone, female subjects demonstrated greater MTS. Conclusion: The automated method was more repeatable and computationally efficient than previously identified methods and may aid in the clinical assessment of knee injury risk and inform surgical planning and implant design efforts. PMID:28952547

  4. Time and Learning Efficiency in Internet-Based Learning: A Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Cook, David A.; Levinson, Anthony J.; Garside, Sarah

    2010-01-01

    Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. Objectives: Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based…

  5. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

    ... biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods to ... condensed phase systems with computational efficiency orders of magnitude greater than currently possible with ab initio simulation methods, thus ...

  6. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency further, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and the computation required by the conjugate gradient algorithm.
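
    As a rough illustration of the preconditioning idea described above, the following is a minimal Python/NumPy sketch of a conjugate gradient solver preconditioned by the diagonal of the inertia matrix. It is a generic Jacobi-preconditioned CG, not the authors' algorithm; the matrix M and right-hand side b are assumed to come from the forward-dynamics computation.

      import numpy as np

      def pcg_diagonal(M, b, tol=1e-10):
          """Conjugate gradients preconditioned by diag(M) (illustrative sketch only)."""
          n = len(b)
          d = np.diag(M)                     # diagonal (Jacobi) preconditioner
          x = np.zeros(n)
          r = b.copy()                       # residual for the initial guess x = 0
          z = r / d                          # apply the preconditioner
          p = z.copy()
          rz = r @ z
          for _ in range(n):                 # exact-arithmetic CG needs at most n iterations
              Mp = M @ p
              alpha = rz / (p @ Mp)
              x += alpha * p
              r -= alpha * Mp
              if np.linalg.norm(r) < tol:
                  break
              z = r / d
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # quick check on a random symmetric positive-definite system
      rng = np.random.default_rng(0)
      A = rng.standard_normal((7, 7))
      M = A @ A.T + 7 * np.eye(7)
      b = rng.standard_normal(7)
      assert np.allclose(pcg_diagonal(M, b), np.linalg.solve(M, b))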

  7. Notebook Computers as Note-Takers for Handicapped Students.

    ERIC Educational Resources Information Center

    James, Veronica; Hammersley, Michael

    1993-01-01

    Describes a study in New South Wales (Australia) that used cable-linked notebook computers as a notetaking system for hearing-impaired students. The Phones for the Deaf Program is explained, and benefits of the notebook computer system are discussed, including greater efficiency, less intrusiveness, and more complete classroom participation. (LRW)

  8. New Class of Excimer-Pumped Atomic Lasers (XPALS)

    DTIC Science & Technology

    2017-01-27

    ... quantum efficiency greater than one, has been demonstrated. We believe this laser to represent a breakthrough in laser technology because the system ... Prepared by J. G. Eden and A. E. Mironov, Laboratory for Optical Physics and Engineering, Department of Electrical and Computer Engineering ... viability of an atomic laser having a quantum efficiency greater than one. We believe this laser to represent a breakthrough in laser technology ...

  9. New Media Resistance: Barriers to Implementation of Computer Video Games in the Classroom

    ERIC Educational Resources Information Center

    Rice, John W.

    2007-01-01

    Computer video games are an emerging instructional medium offering strong degrees of cognitive efficiencies for experiential learning, team building, and greater understanding of abstract concepts. As with other new media adopted for use by instructional technologists for pedagogical purposes, barriers to classroom implementation have manifested…

  10. Influence of computational fluid dynamics on experimental aerospace facilities: A fifteen year projection

    NASA Technical Reports Server (NTRS)

    1983-01-01

    An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and the more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period, improvements in ground test facilities will progress through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.

  11. The thermodynamic efficiency of computations made in cells across the range of life

    NASA Astrophysics Data System (ADS)

    Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan

    2017-11-01

    Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
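
    For scale, the Landauer bound referred to above is kT ln 2 of free energy per bit erased; a quick evaluation at roughly physiological temperature (an illustration only, not a figure taken from the article):

      import math

      k_B = 1.380649e-23                    # Boltzmann constant, J/K
      T = 300.0                             # approximate physiological temperature, K
      landauer = k_B * T * math.log(2)      # minimum free energy per bit erased
      print(f"Landauer bound at 300 K: {landauer:.2e} J per bit")   # about 2.9e-21 J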

  12. Multi-threading: A new dimension to massively parallel scientific computation

    NASA Astrophysics Data System (ADS)

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2000-06-01

    Multi-threading is becoming widely available for Unix-like operating systems, and the application of multi-threading opens new ways for performing parallel computations with greater efficiency. We here briefly discuss the principles of multi-threading and illustrate the application of multi-threading for a massively parallel direct four-index transformation of electron repulsion integrals. Finally, other potential applications of multi-threading in scientific computing are outlined.

  13. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
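
    The reweighting at the heart of importance sampling can be sketched as follows: draw samples from a density shifted toward the failure region, then weight each failure indicator by the ratio of the nominal density to the sampling density. This toy example uses a fixed shift and an invented limit-state function rather than the adaptive, incrementally updated sampling domain of the AIS method described above.

      import numpy as np

      rng = np.random.default_rng(0)

      def g(x):
          # hypothetical limit-state function: failure when g(x) < 0
          return 3.0 - x[:, 0] - x[:, 1]

      # Sampling density: standard normal shifted toward the failure region.
      shift = np.array([1.5, 1.5])
      n = 100_000
      x = rng.standard_normal((n, 2)) + shift

      # importance weights = nominal pdf / sampling pdf (independent standard normals)
      log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - shift)**2, axis=1)
      pf = np.mean((g(x) < 0) * np.exp(log_w))
      print(f"estimated failure probability: {pf:.3e}")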

  14. Quality indexing with computer-aided lexicography

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1992-01-01

    Indexing with computers is a far cry from indexing with the first indexing tool, the manual card sorter. With the aid of computer-aided lexicography, both indexing and indexing tools can provide standardization, consistency, and accuracy, resulting in greater quality control than ever before. A brief survey of computer activity in indexing is presented with detailed illustrations from NASA activity. Applications from techniques mentioned, such as Retrospective Indexing (RI), can be made to many indexing systems. In addition to improving the quality of indexing with computers, the improved efficiency with which certain tasks can be done is demonstrated.

  15. Study of basic computer competence among public health nurses in Taiwan.

    PubMed

    Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling

    2004-03-01

    Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83, total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer and Internet access at the workplace, job position, education level, and age) that significantly influenced computer competence and together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.

  16. Double Super-Exchange in Silicon Quantum Dots Connected by Short-Bridged Networks

    NASA Astrophysics Data System (ADS)

    Li, Huashan; Wu, Zhigang; Lusk, Mark

    2013-03-01

    Silicon quantum dots (QDs) with diameters in the range of 1-2 nm are attractive for photovoltaic applications. They absorb photons more readily, transport excitons with greater efficiency, and show greater promise in multiple-exciton generation and hot carrier collection paradigms. However, their high excitonic binding energy makes it difficult to dissociate excitons into separate charge carriers. One possible remedy is to create dot assemblies in which a second material creates a Type-II heterojunction with the dot so that exciton dissociation occurs locally. This talk will focus on such a Type-II heterojunction paradigm in which QDs are connected via covalently bonded, short-bridge molecules. For such interpenetrating networks of dots and molecules, our first principles computational investigation shows that it is possible to rapidly and efficiently separate electrons to QDs and holes to bridge units. The bridge network serves as an efficient mediator of electron superexchange between QDs while the dots themselves play the complementary role of efficient hole superexchange mediators. Dissociation, photoluminescence and carrier transport rates will be presented for bridge networks of silicon QDs that exhibit such double superexchange. This material is based upon work supported by the Renewable Energy Materials Research Science and Engineering Center (REMRSEC) under Grant No. DMR-0820518 and Golden Energy Computing Organization (GECO).

  17. Office Automation in Student Affairs.

    ERIC Educational Resources Information Center

    Johnson, Sharon L.; Hamrick, Florence A.

    1987-01-01

    Offers recommendations to assist in introducing or expanding computer assistance in student affairs. Describes need for automation and considers areas of choosing hardware and software, funding and competitive bidding, installation and training, and system management. Cites greater efficiency in handling tasks and data and increased levels of…

  18. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  19. Improving Computational Efficiency of Prediction in Model-based Prognostics Using the Unscented Transform

    DTIC Science & Technology

    2010-10-01

    ... bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient ... points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a ... On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197–208. Hutchings, I. M. (1992). Tribology: friction ...

  20. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    NASA Technical Reports Server (NTRS)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
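
    The partitioning idea itself can be sketched numerically: split every matrix and vector element into low-radix digits, multiply the digit planes as a low-accuracy (analog) processor would, and recombine the partial products digitally with positional weights. The radix, digit count and integer data below are illustrative assumptions, not parameters from the paper.

      import numpy as np

      def to_digits(a, base, n_digits):
          """Split non-negative integers into base-`base` digit planes, least significant first."""
          digits = []
          for _ in range(n_digits):
              digits.append(a % base)
              a = a // base
          return digits

      def partitioned_matvec(M, v, base=4, n_digits=4):
          """Matrix-vector product assembled from low-precision partial products."""
          M_digits = to_digits(M, base, n_digits)
          v_digits = to_digits(v, base, n_digits)
          result = np.zeros(M.shape[0], dtype=np.int64)
          for i, Md in enumerate(M_digits):
              for j, vd in enumerate(v_digits):
                  partial = Md @ vd                      # low-precision product ("analog" step)
                  result += partial * base ** (i + j)    # positional recombination (digital step)
          return result

      M = np.array([[3, 14], [7, 9]], dtype=np.int64)
      v = np.array([12, 5], dtype=np.int64)
      assert np.array_equal(partitioned_matvec(M, v), M @ v)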

  1. Time and learning efficiency in Internet-based learning: a systematic review and meta-analysis.

    PubMed

    Cook, David A; Levinson, Anthony J; Garside, Sarah

    2010-12-01

    Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. OBJECTIVE: Determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based instruction are associated with improved learning efficiency. DATA SOURCES: We searched databases including MEDLINE, CINAHL, EMBASE, and ERIC from 1990 through November 2008. STUDY SELECTION AND DATA ABSTRACTION: We included all studies quantifying learning time for Internet-based instruction for health professionals, compared with other instruction. Reviewers worked independently, in duplicate, to abstract information on interventions, outcomes, and study design. RESULTS: We identified 20 eligible studies. Random effects meta-analysis of 8 studies comparing Internet-based with non-Internet instruction (positive numbers indicating that the Internet-based format took longer) revealed a pooled effect size (ES) for time of -0.10 (p = 0.63). Among comparisons of two Internet-based interventions, providing feedback adds time (ES 0.67, p = 0.003, two studies), and greater interactivity generally takes longer (ES 0.25, p = 0.089, five studies). One study demonstrated that adapting to learner prior knowledge saves time without significantly affecting knowledge scores. Other studies revealed that audio narration, video clips, interactive models, and animations increase learning time but also facilitate higher knowledge and/or satisfaction. Across all studies, time correlated positively with knowledge outcomes (r = 0.53, p = 0.021). CONCLUSIONS: On average, Internet-based instruction and non-computer instruction require similar time. Instructional strategies to enhance feedback and interactivity typically prolong learning time, but in many cases also enhance learning outcomes. Isolated examples suggest potential for improving efficiency in Internet-based instruction.

  2. Technology advances and market forces: Their impact on high performance architectures

    NASA Technical Reports Server (NTRS)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.

  3. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
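
    A minimal sketch of how the l(sub 1) penalty produces exact zeros, and hence a selected model structure, using scikit-learn's Lasso on synthetic data; the candidate-term matrix, true coefficients and penalty weight are invented for illustration and are unrelated to the F/A-18 flight data.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)

      # Hypothetical candidate-term regression: 20 candidate terms, only 3 truly active.
      n_samples, n_terms = 200, 20
      X = rng.standard_normal((n_samples, n_terms))
      true_theta = np.zeros(n_terms)
      true_theta[[2, 7, 11]] = [1.5, -2.0, 0.8]
      y = X @ true_theta + 0.05 * rng.standard_normal(n_samples)

      # The l1 penalty drives unneeded parameters exactly to zero,
      # which is what makes the LASSO usable for structure detection.
      model = Lasso(alpha=0.05).fit(X, y)
      print("selected candidate terms:", np.flatnonzero(model.coef_))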

  4. A Computational Framework for Efficient Low Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems involving LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed with numerical benchmarks testing the accuracy and efficiency of the method for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  5. Turbocharged Diesels

    NASA Technical Reports Server (NTRS)

    1984-01-01

    In a number of feasibility studies of turbine rotor designs, engineers of Cummins Engine Company, Inc.'s turbocharger group have utilized a computer program from COSMIC. Part of Cummins research effort is aimed toward introduction of advanced turbocharged engines that deliver extra power with greater fuel efficiency. Company claims use of COSMIC program substantially reduced software development costs.

  6. A Computational Study of a New Dual Throat Fluidic Thrust Vectoring Nozzle Concept

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.

    2005-01-01

    A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat shifting method of fluidic thrust vectoring. Several design cycles with the structured-grid, computational fluid dynamics code PAB3D and with experiments in the NASA Langley Research Center Jet Exit Test Facility have been completed to guide the nozzle design and analyze performance. This paper presents computational results on potential design improvements for the best experimental configuration tested to date. Nozzle design variables included cavity divergence angle, cavity convergence angle and upstream throat height. Pulsed fluidic injection was also investigated for its ability to decrease mass flow requirements. Internal nozzle performance (wind-off conditions) and thrust vector angles were computed for several configurations over a range of nozzle pressure ratios from 2 to 7, with the fluidic injection flow rate equal to 3 percent of the primary flow rate. Computational results indicate that increasing cavity divergence angle beyond 10° is detrimental to thrust vectoring efficiency, while increasing cavity convergence angle from 20° to 30° improves thrust vectoring efficiency at nozzle pressure ratios greater than 2, albeit at the expense of discharge coefficient. Pulsed injection was no more efficient than steady injection for the Dual Throat Nozzle concept.

  7. Increasing thermal efficiency of solar flat plate collectors

    NASA Astrophysics Data System (ADS)

    Pona, J.

    A study of methods to increase the efficiency of heat transfer in flat plate solar collectors is presented. In order to increase the heat transfer from the absorber plate to the working fluid inside the tubes, turbulent flow was induced by installing baffles within the tubes. The installation of the baffles resulted in a 7 to 12% increase in collector efficiency. Experiments were run on both 1 sq ft and 2 sq ft collectors each fitted with either slotted baffles or tubular baffles. A computer program was run comparing the baffled collector to the standard collector. The results obtained from the computer show that the baffled collectors have a 2.7% increase in life cycle cost (LCC) savings and a 3.6% increase in net cash flow for use in domestic hot water systems, and even greater increases when used in solar heating systems.

  8. Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.

    PubMed

    Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul

    2018-03-01

    Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, are limited to simulations or small-scale problems primarily due to materials and device challenges that limit the size of the memristor crossbar arrays that can be reliably programmed to stable and analog values, which is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance compared to a digital approach is assessed. The memristor computing system used here reaches a VMM accuracy equivalent to 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per Watt is possible. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
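
    Conceptually, the crossbar computes the VMM in the analog domain: weights are stored as quantized conductances and the output currents sum automatically. A software caricature of that mapping, using the roughly 6-bit effective precision reported above and otherwise arbitrary sizes and scalings (real arrays use differential conductance pairs to encode signed weights), might look like this:

      import numpy as np

      rng = np.random.default_rng(0)
      rows, cols, levels = 128, 64, 2 ** 6        # 128 x 64 array, ~6-bit analog precision
      W = rng.standard_normal((rows, cols))

      # Map weights into [0, 1] and quantize to the available conductance levels.
      W01 = (W - W.min()) / (W.max() - W.min())
      G = np.round(W01 * (levels - 1)) / (levels - 1)

      v = rng.standard_normal(rows)               # input voltages
      i_out = v @ G                               # Ohm's law + current summation performs the VMM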

  9. Study of a rare-gas transverse fast discharge

    NASA Technical Reports Server (NTRS)

    Chubb, D. L.; Michels, C. J.

    1979-01-01

    An experimental and analytical study of a Blumlein-type transverse fast discharge operating with He and Xe is presented. An electro-optical voltage probe was used to measure the discharge voltage, and the measured voltages were in agreement with the computed voltages. The analytical model was used to predict the dependence of the discharge efficiency for producing metastables and ions on the important plasma and external circuit parameters. In He the ion efficiency is greater than the metastable efficiency, while in Xe it is the opposite; the He ion efficiencies are much larger than in Xe, while Xe metastable efficiencies are much larger than in He. These differences between Xe and He are accounted for by the large dissociative recombination rate of Xe compared with He.

  10. Air-gas exchange reevaluated: clinically important results of a computer simulation.

    PubMed

    Shunmugam, Manoharan; Shunmugam, Sudhakaran; Williamson, Tom H; Laidlaw, D Alistair

    2011-10-21

    The primary aim of this study was to evaluate the efficiency of air-gas exchange techniques and the factors that influence the final concentration of an intraocular gas tamponade. Parameters were varied to find the optimum method of performing an air-gas exchange in ideal circumstances. A computer model of the eye was designed using 3D software with fluid flow analysis capabilities. Factors such as angular distance between ports, gas infusion gauge, and exhaust vent gauge and depth were varied in the model. Flow rate and axial length were also modulated to simulate faster injections and more myopic eyes, respectively. The flush volumes of gas required to achieve a 97% intraocular gas concentration were compared. Modulating individual factors did not reveal any clinically significant effect of the angular distance between ports, exhaust vent size and depth, or rate of gas injection. In combination, however, there was a 28% increase in air-gas exchange efficiency comparing the most efficient with the least efficient studied parameters in this model. The gas flush volume required to achieve a 97% gas fill also increased proportionately, at a ratio of 5.5 to 6.2 times the volume of the eye. A 35-mL flush is adequate for eyes up to 25 mm in axial length; however, eyes longer than this would require a much greater flush volume, and surgeons should consider using two separate 50-mL gas syringes to ensure optimal gas concentration for eyes greater than 25 mm in axial length.

  11. Energy Awareness and Scheduling in Mobile Devices and High End Computing

    ERIC Educational Resources Information Center

    Pawaskar, Sachin S.

    2013-01-01

    In the context of the big picture as energy demands rise due to growing economies and growing populations, there will be greater emphasis on sustainable supply, conservation, and efficient usage of this vital resource. Even at a smaller level, the need for minimizing energy consumption continues to be compelling in embedded, mobile, and server…

  12. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l(Sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.

  13. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
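
    The computational point, that a high-dimensional integral can be pushed into the cumulative distribution function of a multivariate normal for which good numerical routines exist, can be illustrated with SciPy; the dimension, covariance and evaluation point below are arbitrary placeholders, not the psoriatic arthritis model.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(0)

      dim = 15                                     # stands in for the model's high-dimensional integral
      A = rng.standard_normal((dim, dim))
      cov = A @ A.T + dim * np.eye(dim)            # a well-conditioned covariance matrix
      upper = rng.standard_normal(dim)

      mvn = multivariate_normal(mean=np.zeros(dim), cov=cov)
      p = mvn.cdf(upper)                           # evaluated numerically (quasi-Monte Carlo) by SciPy
      print(f"P(X <= upper) = {p:.4f}")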

  14. Air System Information Management

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2004-01-01

    I flew to Washington last week, a trip rich in distributed information management. Buying tickets, at the gate, in flight, landing and at the baggage claim, myriad messages about my reservation, the weather, our flight plans, gates, bags and so forth flew among a variety of travel agency, airline and Federal Aviation Administration (FAA) computers and personnel. By and large, each kind of information ran on a particular application, often specialized to its own data formats and communications network. I went to Washington to attend an FAA meeting on System-Wide Information Management (SWIM) for the National Airspace System (NAS) (http://www.nasarchitecture.faa.gov/Tutorials/NAS101.cfm). NAS (and its information infrastructure, SWIM) is an attempt to bring greater regularity, efficiency and uniformity to the collection of stovepipe applications now used to manage air traffic. Current systems hold information about flight plans, flight trajectories, air turbulence, current and forecast weather, radar summaries, hazardous condition warnings, airport and airspace capacity constraints, temporary flight restrictions, and so forth. Information moving among these stovepipe systems is usually mediated by people (for example, air traffic controllers) or single-purpose applications. People, whose intelligence is critical for difficult tasks and unusual circumstances, are not as efficient as computers for tasks that can be automated. Better information sharing can lead to higher system capacity, more efficient utilization and safer operations. Better information sharing through greater automation is possible though not necessarily easy.

  15. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?"

  16. Experimental and numerical investigations of heat transfer and thermal efficiency of an infrared gas stove

    NASA Astrophysics Data System (ADS)

    Charoenlerdchanya, A.; Rattanadecho, P.; Keangin, P.

    2018-01-01

    An infrared gas stove is a type of low-pressure gas stove and has higher thermal efficiency than other domestic cooking stoves. This study computationally determines the water and air temperature distributions, the water and air velocity distributions, and the thermal efficiency of an infrared gas stove. The goal of this work is to investigate the effect of various pot diameters, i.e. 220 mm, 240 mm and 260 mm, on these quantities. The time-dependent heat transfer equation involving diffusion and convection, coupled with the time-dependent fluid dynamics equation, is implemented and solved using the finite element method (FEM). The computer simulation study is validated against an experimental study that uses the standard LPG test for low-pressure gas stoves in households (TIS No. 2312-2549). The findings revealed that the water and air temperature distributions increase with greater heating time and vary with the three pot diameters (220 mm, 240 mm and 260 mm). Similarly, the water and air velocity distributions increase with greater heating time and vary with pot diameter (220, 240 and 260 mm). The maximum water temperature for the 220 mm pot diameter is higher than for the 240 mm and 260 mm pot diameters, respectively, while the maximum air temperature for the 260 mm pot diameter is higher than for the 240 mm and 220 mm pot diameters, respectively. The obtained results may provide a basis for improving the energy efficiency of infrared gas stoves and other equipment, helping to reduce energy consumption.

  17. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
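
    For contrast, the conventional explicit zero-padding that implicit dealiasing improves upon can be sketched in a few lines: pad the length-n inputs to 2n so that the wrap-around terms of the circular convolution land on zeros. This is a generic NumPy sketch of the baseline technique, not code from the library described above, which performs the equivalent computation without allocating the padded buffers.

      import numpy as np

      def zero_padded_convolution(f, g):
          """Dealiased linear convolution of two length-n sequences via explicit 2n zero-padding."""
          n = len(f)
          F = np.fft.fft(f, 2 * n)
          G = np.fft.fft(g, 2 * n)
          return np.fft.ifft(F * G)[:n]

      f = np.arange(4.0)
      g = np.array([1.0, -1.0, 2.0, 0.5])
      assert np.allclose(zero_padded_convolution(f, g), np.convolve(f, g)[:4])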

  18. A Model Driven Question-Answering System for a CAI Environment. Final Report (July 1970 to May 1972).

    ERIC Educational Resources Information Center

    Brown, John S.; And Others

    A question answering system which permits a computer-assisted instruction (CAI) student greater initiative in the variety of questions he can ask is described. A method is presented to represent the dynamic processes of a subject matter area by augmented finite state automata, which permits efficient inferencing about dynamic processes and…

  19. Phase Holograms In PMMA

    NASA Technical Reports Server (NTRS)

    Maker, Paul D.; Muller, Richard E.

    1994-01-01

    Complex, computer-generated phase holograms written in thin films of poly(methyl methacrylate) (PMMA) by process of electron-beam exposure followed by chemical development. Spatial variations of phase delay in holograms quasi-continuous, as distinguished from stepwise as in binary phase holograms made by integrated-circuit fabrication. Holograms more precise than binary holograms. Greater continuity and precision result in decreased scattering loss and increased imaging efficiency.

  20. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithms are guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n sq) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log sub 2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves the computational time of O(log sub 2 n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).

  1. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.

    PubMed

    Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammon, J Andrew

    2013-09-10

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.

  2. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package

    PubMed Central

    Pierce, Levi T.; Walker, Ross C.; McCammon, J. Andrew

    2013-01-01

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license. PMID:24185531

  3. Quadratic Programming for Allocating Control Effort

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2005-01-01

    A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
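
    The flavor of the optimization can be sketched with a small, invented allocation problem: a quadratic cost trading off control effort against control residuals, with actuator limits as inequality constraints. A general-purpose solver stands in for the single-stage iterative algorithm the program implements; the effectiveness matrix, commands, weights and limits are all placeholders.

      import numpy as np
      from scipy.optimize import minimize

      B = np.array([[1.0, 0.0, 1.0, 0.5, 0.0],     # control effectiveness: 3 axes, 5 redundant actuators
                    [0.0, 1.0, 0.5, 0.0, 1.0],
                    [0.3, 0.3, 0.0, 1.0, 1.0]])
      m_cmd = np.array([0.4, -0.2, 0.6])           # commanded moments
      w_effort, w_resid = 1.0, 100.0               # weighting of effort versus residual

      def cost(u):
          resid = B @ u - m_cmd
          return w_effort * (u @ u) + w_resid * (resid @ resid)

      bounds = [(-1.0, 1.0)] * B.shape[1]          # actuator limits (inequality constraints)
      res = minimize(cost, np.zeros(B.shape[1]), method="SLSQP", bounds=bounds)
      print("allocated actuator commands:", np.round(res.x, 3))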

  4. Experimental and Computational Sonic Boom Assessment of Lockheed-Martin N+2 Low Boom Models

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Durston, Donald A.; Elmiligui, Alaa A.; Walker, Eric L.; Carter, Melissa B.

    2015-01-01

    Flight at speeds greater than the speed of sound is not permitted over land, primarily because of the noise and structural damage caused by sonic boom pressure waves of supersonic aircraft. Mitigation of sonic boom is a key focus area of the High Speed Project under NASA's Fundamental Aeronautics Program. The project is focusing on technologies to enable future civilian aircraft to fly efficiently with reduced sonic boom, engine and aircraft noise, and emissions. A major objective of the project is to improve both computational and experimental capabilities for design of low-boom, high-efficiency aircraft. NASA and industry partners are developing improved wind tunnel testing techniques and new pressure instrumentation to measure the weak sonic boom pressure signatures of modern vehicle concepts. In parallel, computational methods are being developed to provide rapid design and analysis of supersonic aircraft with improved meshing techniques that provide efficient, robust, and accurate on- and off-body pressures at several body lengths from vehicles with very low sonic boom overpressures. The maturity of these critical parallel efforts is necessary before low-boom flight can be demonstrated and commercial supersonic flight can be realized.

  5. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
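
    The compression step itself is easy to illustrate on a linear least-squares analogue of the inverse problem: multiply both the tall sensitivity matrix and the data by a short random "sketching" matrix and solve the much smaller system. The sizes, the Gaussian sketch and the synthetic data below are assumptions for illustration only (the abstract notes the actual RGA code is written in Julia); RGA embeds this idea inside the PCGA machinery.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical linearized inverse problem d = J m + noise, with many more observations than parameters.
      n_obs, n_param = 20_000, 50
      J = rng.standard_normal((n_obs, n_param))
      m_true = rng.standard_normal(n_param)
      d = J @ m_true + 0.01 * rng.standard_normal(n_obs)

      # Gaussian sketching matrix: compress the observations to k << n_obs rows
      # while approximately preserving the information needed by the inversion.
      k = 400
      S = rng.standard_normal((k, n_obs)) / np.sqrt(k)
      m_sketch, *_ = np.linalg.lstsq(S @ J, S @ d, rcond=None)
      m_full, *_ = np.linalg.lstsq(J, d, rcond=None)
      print("max parameter difference:", np.max(np.abs(m_sketch - m_full)))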

  6. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  7. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation algorithms such as multiplicative algebraic reconstruction techniques (MART) have proven to be highly accurate, they are computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
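
    A minimal frequency-domain deconvolution of the kind the paper exploits can be sketched in a few lines: one forward FFT, a regularized pointwise division by the transfer function of the PSF, and one inverse FFT. The Gaussian PSF, image size and regularization constant are illustrative assumptions; the paper's algorithm additionally handles focal stacks and plenoptic-specific PSFs.

      import numpy as np

      def wiener_deconvolve(blurred, psf, eps=1e-3):
          """Regularized (Wiener-style) inverse filtering in the Fourier domain."""
          B = np.fft.fft2(blurred)
          H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
          W = np.conj(H) / (np.abs(H) ** 2 + eps)      # regularized inverse filter
          return np.real(np.fft.ifft2(W * B))

      # Synthetic example: a sparse particle field blurred by a Gaussian PSF.
      rng = np.random.default_rng(0)
      img = np.zeros((128, 128))
      img[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = 1.0
      y, x = np.mgrid[-64:64, -64:64]
      psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
      psf /= psf.sum()
      blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
      recovered = wiener_deconvolve(blurred, psf)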

  8. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.
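
    The subdomain/multi-time-step idea can be illustrated with a generic explicit subcycling loop, sketched below under the assumption of simple displacement/velocity state vectors and user-supplied force callbacks; this is only the coarse-step/fine-substep pattern the abstract describes, not the authors' peridynamic discretization.

        import numpy as np

        def subcycle_step(u_c, v_c, u_f, v_f, dt, m, force_c, force_f):
            """One coupled step: the coarse subdomain advances by dt while the
            fine subdomain takes m explicit substeps of dt/m.  force_c/force_f
            return accelerations and see the neighbouring subdomain's
            displacements, so the overlap region stays coupled (the coarse
            interface data are linearly interpolated in time)."""
            u_c, v_c = np.asarray(u_c, float), np.asarray(v_c, float)
            u_f, v_f = np.asarray(u_f, float), np.asarray(v_f, float)
            dt_f = dt / m
            u_c_old = u_c.copy()
            a_c = force_c(u_c, u_f)                  # coarse subdomain: one large step
            v_c = v_c + dt * a_c
            u_c = u_c + dt * v_c
            for k in range(1, m + 1):                # fine subdomain: m small steps
                u_c_interp = u_c_old + (k / m) * (u_c - u_c_old)
                v_f = v_f + dt_f * force_f(u_f, u_c_interp)
                u_f = u_f + dt_f * v_f
            return u_c, v_c, u_f, v_f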

  9. Qualitatively similar processing for own- and other-race faces: Evidence from efficiency and equivalent input noise.

    PubMed

    Shafai, Fakhri; Oruc, Ipek

    2018-02-01

    The other-race effect is the finding of diminished performance in recognition of other-race faces compared to those of own-race. It has been suggested that the other-race effect stems from specialized expert processes being tuned exclusively to own-race faces. In the present study, we measured recognition contrast thresholds for own- and other-race faces as well as houses for Caucasian observers. We have factored face recognition performance into two invariant aspects of visual function: efficiency, which is related to neural computations and processing demanded by the task, and equivalent input noise, related to signal degradation within the visual system. We hypothesized that if expert processes are available only to own-race faces, this should translate into substantially greater recognition efficiencies for own-race compared to other-race faces. Instead, we found similar recognition efficiencies for both own- and other-race faces. The other-race effect manifested as increased equivalent input noise. These results argue against qualitatively distinct perceptual processes. Instead they suggest that for Caucasian observers, similar neural computations underlie recognition of own- and other-race faces. Copyright © 2018 Elsevier Ltd. All rights reserved.
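
    For readers unfamiliar with the noise-masking framework, one common formulation (the linear-amplifier model, stated here in its textbook form rather than quoted from this paper) writes threshold contrast energy as a linear function of the external noise spectral density N, with the equivalent input noise N_eq as the intercept parameter and efficiency as the ratio of ideal to human threshold energy at the same noise level:

        E(N) = k\,\left(N + N_{\mathrm{eq}}\right), \qquad
        \eta = \frac{E_{\mathrm{ideal}}(N)}{E(N)}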

  10. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
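
    A compact illustration of the CRN idea, assuming a generic stoichiometry matrix and propensity function (hypothetical signatures, not the authors' code): the nominal and perturbed trajectories of each Monte Carlo path reuse the same random number stream, which correlates the two runs and shrinks the variance of the finite-difference estimator.

        import numpy as np

        def ssa(x0, stoich, propensities, theta, t_end, seed):
            """Gillespie SSA; returns the state at time t_end.
            stoich is (n_species, n_reactions); propensities(x, theta) -> array."""
            rng = np.random.default_rng(seed)
            x, t = np.array(x0, dtype=float), 0.0
            while t < t_end:
                a = propensities(x, theta)
                a0 = a.sum()
                if a0 <= 0.0:
                    break
                t += rng.exponential(1.0 / a0)          # time to next reaction
                if t >= t_end:
                    break
                j = rng.choice(len(a), p=a / a0)        # which reaction fires
                x += stoich[:, j]
            return x

        def crn_sensitivity(x0, stoich, propensities, theta, i, h, t_end, n_paths=1000):
            """Finite-difference estimate of d E[X(t_end)] / d theta_i using
            common random numbers: nominal and perturbed runs share a seed."""
            diffs = []
            for seed in range(n_paths):
                th_p = np.array(theta, dtype=float)
                th_p[i] += h
                x_nom = ssa(x0, stoich, propensities, theta, t_end, seed)
                x_per = ssa(x0, stoich, propensities, th_p, t_end, seed)
                diffs.append((x_per - x_nom) / h)
            return np.mean(diffs, axis=0)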

  11. Optical Computers and Space Technology

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  12. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
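
    Whatever the optimizer, the quantity being minimized is the flight time accumulated along a candidate route through the wind field. A minimal sketch of that objective (standard wind-triangle kinematics with hypothetical waypoint and wind inputs; not the NOWR algorithm itself, which adjusts heading via neighboring-optimal-control gains) is:

        import numpy as np

        def segment_time(p0, p1, wind, tas):
            """Time to fly straight from p0 to p1 (2-D positions, km) at true
            airspeed `tas` (km/h) in a constant wind vector `wind` (km/h):
            the crab angle cancels the crosswind, and the remaining
            along-track ground speed sets the segment time (assumes tas
            exceeds the crosswind component)."""
            d = np.asarray(p1, float) - np.asarray(p0, float)
            dist = np.linalg.norm(d)
            track = d / dist
            w_along = np.dot(wind, track)                      # tail/headwind component
            w_cross = track[0] * wind[1] - track[1] * wind[0]  # crosswind component
            gs = np.sqrt(tas**2 - w_cross**2) + w_along        # ground speed
            return dist / gs

        def route_time(waypoints, winds, tas):
            """Total time along a candidate route, one wind vector per leg."""
            return sum(segment_time(p0, p1, w, tas)
                       for p0, p1, w in zip(waypoints[:-1], waypoints[1:], winds))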

  13. Computer-automated tinnitus assessment: noise-band matching, maskability, and residual inhibition.

    PubMed

    Henry, James A; Roberts, Larry E; Ellingson, Roger M; Thielman, Emily J

    2013-06-01

    Psychoacoustic measures of tinnitus typically include loudness and pitch match, minimum masking level (MML), and residual inhibition (RI). We previously developed and documented a computer-automated tinnitus evaluation system (TES) capable of subject-guided loudness and pitch matching. The TES was further developed to conduct computer-aided, subject-guided testing for noise-band matching (NBM), MML, and RI. The purpose of the present study was to document the capability of the upgraded TES to obtain measures of NBM, MML, and RI, and to determine the test-retest reliability of the responses obtained. Three subject-guided, computer-automated testing protocols were developed to conduct NBM. For MML and RI testing, a 2-12 kHz band of noise was used. All testing was repeated during a second session. Subjects meeting study criteria were selected from those who had previously been tested for loudness and pitch matching in our laboratory. A total of 21 subjects completed testing, including seven females and 14 males. The upgraded TES was found to be fairly time efficient. Subjects were generally reliable, both within and between sessions, with respect to the type of stimulus they chose as the best match to their tinnitus. Matching to bandwidth was more variable between measurements, with greater consistency seen for subjects reporting tonal tinnitus or wide-band noisy tinnitus than intermediate types. Between-session repeated MMLs were within 10 dB of each other for all but three of the subjects. Subjects who experienced RI during Session 1 tended to be those who experienced it during Session 2. This study may represent the first time that NBM, MML, and RI audiometric testing results have been obtained entirely through a self-contained, computer-automated system designed specifically for use in the clinic. Future plans include refinements to achieve greater testing efficiency. American Academy of Audiology.

  14. Protecting genomic data analytics in the cloud: state of the art and opportunities.

    PubMed

    Tang, Haixu; Jiang, Xiaoqian; Wang, Xiaofeng; Wang, Shuang; Sofia, Heidi; Fox, Dov; Lauter, Kristin; Malin, Bradley; Telenti, Amalio; Xiong, Li; Ohno-Machado, Lucila

    2016-10-13

    The outsourcing of genomic data into public cloud computing settings raises concerns over privacy and security. Significant advancements in secure computation methods have emerged over the past several years, but such techniques need to be rigorously evaluated for their ability to support the analysis of human genomic data in an efficient and cost-effective manner. With respect to public cloud environments, there are concerns about the inadvertent exposure of human genomic data to unauthorized users. In analyses involving multiple institutions, there is additional concern about data being used beyond the agreed research scope and being processed in untrusted computational environments, which may not satisfy institutional policies. To systematically investigate these issues, the NIH-funded National Center for Biomedical Computing iDASH (integrating Data for Analysis, 'anonymization' and SHaring) hosted the second Critical Assessment of Data Privacy and Protection competition to assess the capacity of cryptographic technologies for protecting computation over human genomes in the cloud and promoting cross-institutional collaboration. Data scientists were challenged to design and engineer practical algorithms for secure outsourcing of genome computation tasks in working software, whereby analyses are performed only on encrypted data. They were also challenged to develop approaches to enable secure collaboration on data from genomic studies generated by multiple organizations (e.g., medical centers) to jointly compute aggregate statistics without sharing individual-level records. The results of the competition indicated that secure computation techniques can enable comparative analysis of human genomes, but greater efficiency (in terms of compute time and memory utilization) is needed before they are sufficiently practical for real world environments.
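
    As a toy illustration of the "aggregate statistics without sharing individual-level records" goal, and emphatically not a reconstruction of the competition entries' cryptographic protocols, additive secret sharing lets each institution split a local count into random shares so that only the total is ever revealed; all names below are hypothetical.

        import secrets

        PRIME = 2**61 - 1   # modulus for the additive shares

        def share(value, n_parties):
            """Split an integer count into n additive shares modulo PRIME."""
            shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % PRIME)
            return shares

        def aggregate_counts(local_counts):
            """Each institution shares its local count; each party sums the
            shares it holds, and summing those partial sums reveals only the
            total, never any single institution's value."""
            n = len(local_counts)
            all_shares = [share(c, n) for c in local_counts]    # one row per institution
            partial = [sum(all_shares[i][p] for i in range(n)) % PRIME
                       for p in range(n)]
            return sum(partial) % PRIME

        # e.g. three medical centers with local allele counts 12, 7 and 20
        assert aggregate_counts([12, 7, 20]) == 39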

  15. Evaluation of Intradural Stimulation Efficiency and Selectivity in a Computational Model of Spinal Cord Stimulation

    PubMed Central

    Howell, Bryan; Lad, Shivanand P.; Grill, Warren M.

    2014-01-01

    Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn, provides predictions that can be tested in future clinical studies assessing the potential therapeutic benefits of intradural SCS. PMID:25536035

  16. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model.

    PubMed

    Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot

    2017-10-01

    Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.
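
    In reaction-eikonal models the activation time field t_a is typically obtained from an anisotropic eikonal equation of the standard form shown below (quoted from the general cardiac-electrophysiology literature rather than from this paper), with V a tensor of squared conduction velocities; the local reaction model is then triggered at t_a:

        \sqrt{\nabla t_a^{\top}\,\mathbf{V}(\mathbf{x})\,\nabla t_a} = 1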

  17. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model

    NASA Astrophysics Data System (ADS)

    Neic, Aurel; Campos, Fernando O.; Prassl, Anton J.; Niederer, Steven A.; Bishop, Martin J.; Vigmond, Edward J.; Plank, Gernot

    2017-10-01

    Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  18. Designing for multiple global user populations: increasing resource allocation efficiency for greater sustainability.

    PubMed

    Nadadur, G; Parkinson, M B

    2012-01-01

    This paper proposes a method to identify opportunities for increasing the efficiency of raw material allocation decisions for products that are simultaneously targeted at multiple user populations around the world. The values of 24 body measures at certain key percentiles were used to estimate the best-fitting anthropometric distributions for female and male adults in nine national populations, which were selected to represent the diverse target markets multinational companies must design for. These distributions were then used to synthesize body measure data for combined populations with a 1:1 female:male ratio. An anthropometric range metric (ARM) was proposed for assessing the variation of these body measures across the populations. At any percentile, ARM values were calculated as the percentage difference between the highest and lowest anthropometric values across the considered user populations. Based on their magnitudes, plots of ARM values computed between the 1st and 99th percentiles for each body measure were grouped into low, medium, and high categories. This classification of body measures was proposed as a means of selecting the most suitable strategies for designing raw material-efficient products. The findings in this study and the contributions of subsequent work along these lines are expected to help achieve greater efficiencies in resource allocation in global product development.
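
    On one plausible reading of the definition given above (the percentage difference taken relative to the smallest value; the paper may normalize differently), the ARM for a body measure x at percentile p across populations i is:

        \mathrm{ARM}_p = \frac{\max_i x_{i,p} - \min_i x_{i,p}}{\min_i x_{i,p}} \times 100\%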

  19. Computer-assisted categorizing of head computed tomography reports for clinical decision rule research.

    PubMed

    Wall, Stephen P; Mayorga, Oliver; Banfield, Christine E; Wall, Mark E; Aisic, Ilan; Auerbach, Carl; Gennis, Paul

    2006-11-01

    To develop software that categorizes electronic head computed tomography (CT) reports into groups useful for clinical decision rule research. Data were obtained from the Second National Emergency X-Radiography Utilization Study, a cohort of head injury patients having received head CT. CT reports were reviewed manually for presence or absence of clinically important subdural or epidural hematoma, defined as greater than 1.0 cm in width or causing mass effect. Manual categorization was done by 2 independent researchers blinded to each other's results. A third researcher adjudicated discrepancies. A random sample of 300 reports with radiologic abnormalities was selected for software development. After excluding reports categorized manually or by software as indeterminate (neither positive nor negative), we calculated sensitivity and specificity by using manual categorization as the standard. System efficiency was defined as the percentage of reports categorized as positive or negative, regardless of accuracy. Software was refined until analysis of the training data yielded sensitivity and specificity approximating 95% and efficiency exceeding 75%. To test the system, we calculated sensitivity, specificity, and efficiency, using the remaining 1,911 reports. Of the 1,911 reports, 160 had clinically important subdural or epidural hematoma. The software exhibited good agreement with manual categorization of all reports, including indeterminate ones (weighted kappa 0.62; 95% confidence interval [CI] 0.58 to 0.65). Sensitivity, specificity, and efficiency of the computerized system for identifying manual positives and negatives were 96% (95% CI 91% to 98%), 98% (95% CI 98% to 99%), and 79% (95% CI 77% to 80%), respectively. Categorizing head CT reports by computer for clinical decision rule research is feasible.
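
    The three reported metrics follow directly from the definitions in the abstract; a small sketch (hypothetical label encoding, not the study's code) that excludes indeterminate reports from sensitivity and specificity but counts them against efficiency:

        def report_metrics(auto, manual):
            """auto/manual are parallel lists of labels: 'pos', 'neg' or 'indet'.
            Efficiency counts every report the software commits to; sensitivity
            and specificity use only reports that both methods resolved."""
            efficiency = sum(a != 'indet' for a in auto) / len(auto)
            pairs = [(a, m) for a, m in zip(auto, manual)
                     if a != 'indet' and m != 'indet']
            tp = sum(a == 'pos' and m == 'pos' for a, m in pairs)
            fn = sum(a == 'neg' and m == 'pos' for a, m in pairs)
            tn = sum(a == 'neg' and m == 'neg' for a, m in pairs)
            fp = sum(a == 'pos' and m == 'neg' for a, m in pairs)
            return tp / (tp + fn), tn / (tn + fp), efficiency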

  20. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.

  1. Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle

    NASA Technical Reports Server (NTRS)

    Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.

    2003-01-01

    A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat shifting method of fluidic thrust vectoring. The structured-grid, computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.

  2. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.
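
    A minimal sketch of the architecture described above, assuming PyTorch and hypothetical input shapes: each video frame contributes only its detection-line pixels, which form one time step of a recurrent network that scores whether an object is crossing the line.

        import torch
        import torch.nn as nn

        class DetectionLineRNN(nn.Module):
            """Per-frame crossing detector that sees only the pixels of a
            virtual detection line: input is (batch, frames, line_pixels * channels)."""
            def __init__(self, line_width=128, channels=3, hidden=64):
                super().__init__()
                self.gru = nn.GRU(line_width * channels, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, line_pixels):
                h, _ = self.gru(line_pixels)          # (batch, frames, hidden)
                return torch.sigmoid(self.head(h))    # crossing probability per frame

        # e.g. a clip of 200 frames, one 128-pixel RGB detection line per frame
        model = DetectionLineRNN()
        scores = model(torch.rand(1, 200, 128 * 3))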

  3. Computer Training for Entrepreneurial Meteorologists.

    NASA Astrophysics Data System (ADS)

    Koval, Joseph P.; Young, George S.

    2001-05-01

    Computer applications of increasing diversity form a growing part of the undergraduate education of meteorologists in the early twenty-first century. The advent of the Internet economy, as well as a waning demand for traditional forecasters brought about by better numerical models and statistical forecasting techniques has greatly increased the need for operational and commercial meteorologists to acquire computer skills beyond the traditional techniques of numerical analysis and applied statistics. Specifically, students with the skills to develop data distribution products are in high demand in the private sector job market. Meeting these demands requires greater breadth, depth, and efficiency in computer instruction. The authors suggest that computer instruction for undergraduate meteorologists should include three key elements: a data distribution focus, emphasis on the techniques required to learn computer programming on an as-needed basis, and a project orientation to promote management skills and support student morale. In an exploration of this approach, the authors have reinvented the Applications of Computers to Meteorology course in the Department of Meteorology at The Pennsylvania State University to teach computer programming within the framework of an Internet product development cycle. Because the computer skills required for data distribution programming change rapidly, specific languages are valuable for only a limited time. A key goal of this course was therefore to help students learn how to retrain efficiently as technologies evolve. The crux of the course was a semester-long project during which students developed an Internet data distribution product. As project management skills are also important in the job market, the course teamed students in groups of four for this product development project. The success, failures, and lessons learned from this experiment are discussed and conclusions drawn concerning undergraduate instructional methods for computer applications in meteorology.

  4. Harnessing the genetics of the modern dairy cow to continue improvements in feed efficiency.

    PubMed

    VandeHaar, M J; Armentano, L E; Weigel, K; Spurlock, D M; Tempelman, R J; Veerkamp, R

    2016-06-01

    Feed efficiency, as defined by the fraction of feed energy or dry matter captured in products, has more than doubled for the US dairy industry in the past 100 yr. This increased feed efficiency was the result of increased milk production per cow achieved through genetic selection, nutrition, and management with the desired goal being greater profitability. With increased milk production per cow, more feed is consumed per cow, but a greater portion of the feed is partitioned toward milk instead of maintenance and body growth. This dilution of maintenance has been the overwhelming driver of enhanced feed efficiency in the past, but its effect diminishes with each successive increment in production relative to body size and therefore will be less important in the future. Instead, we must also focus on new ways to enhance digestive and metabolic efficiency. One way to examine variation in efficiency among animals is residual feed intake (RFI), a measure of efficiency that is independent of the dilution of maintenance. Cows that convert feed gross energy to net energy more efficiently or have lower maintenance requirements than expected based on body weight use less feed than expected and thus have negative RFI. Cows with low RFI likely digest and metabolize nutrients more efficiently and should have overall greater efficiency and profitability if they are also healthy, fertile, and produce at a high multiple of maintenance. Genomic technologies will help to identify these animals for selection programs. Nutrition and management also will continue to play a major role in farm-level feed efficiency. Management practices such as grouping and total mixed ration feeding have improved rumen function and therefore efficiency, but they have also decreased our attention on individual cow needs. Nutritional grouping is key to helping each cow reach its genetic potential. Perhaps new computer-driven technologies, combined with genomics, will enable us to optimize management for each individual cow within a herd, or to optimize animal selection to match management environments. In the future, availability of feed resources may shift as competition for land increases. New approaches combining genetic, nutrition, and other management practices will help optimize feed efficiency, profitability, and environmental sustainability. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
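
    Residual feed intake is conventionally computed as the residual from a regression of observed intake on the main energy sinks; a numpy sketch with hypothetical inputs follows (the exact model terms vary by study).

        import numpy as np

        def residual_feed_intake(dmi, milk_energy, body_weight, bw_change):
            """RFI = observed dry matter intake minus the intake predicted from
            a linear regression on milk energy output, metabolic body weight
            (BW^0.75) and body-weight change (1-D arrays, one entry per cow);
            negative RFI indicates a more efficient cow."""
            X = np.column_stack([np.ones_like(dmi), milk_energy,
                                 body_weight ** 0.75, bw_change])
            beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
            return dmi - X @ beta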

  5. Genotype Imputation with Millions of Reference Samples

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2016-01-01

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle’s throughput was more than 100× greater than Impute2’s throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. PMID:26748515

  6. Optical design of an in vivo laparoscopic lighting system

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolong; Abdolmalaki, Reza Yazdanpanah; Mancini, Gregory J.; Tan, Jindong

    2017-12-01

    This paper proposes an in vivo laparoscopic lighting system design to address the illumination issues, namely poor lighting uniformity and low optical efficiency, existing in the state-of-the-art in vivo laparoscopic cameras. The transformable design of the laparoscopic lighting system is capable of carrying purposefully designed freeform optical lenses for achieving lighting performance with high illuminance uniformity and high optical efficiency in a desired target region. To design freeform optical lenses for extended light sources such as LEDs with Lambertian light intensity distributions, we present an effective and complete freeform optical design method. The procedures include (1) ray map computation by numerically solving a standard Monge-Ampere equation; (2) initial freeform optical surface construction by using Snell's law and a lens volume restriction; (3) correction of surface normal vectors due to accumulated errors from the initially constructed surfaces; and (4) feedback modification of the solution to deal with degraded illuminance uniformity caused by the extended sizes of the LEDs. We employed an optical design software package to evaluate the performance of our laparoscopic lighting system design. The simulation results show that our design achieves greater than 95% illuminance uniformity and greater than 89% optical efficiency (considering Fresnel losses) for illuminating the target surgical region.
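
    Step (2) of the procedure relies on Snell's law to orient each freeform surface element; the standard vector form (general optics, not the authors' code) can be sketched as:

        import numpy as np

        def refract(i, n, n1, n2):
            """Vector Snell's law: unit incident direction i, unit surface
            normal n pointing toward the incident side, refractive indices
            n1 -> n2.  Returns the unit refracted direction, or None at total
            internal reflection."""
            eta = n1 / n2
            c1 = -np.dot(n, i)
            k = 1.0 - eta**2 * (1.0 - c1**2)
            if k < 0.0:
                return None                      # total internal reflection
            return eta * i + (eta * c1 - np.sqrt(k)) * n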

  7. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
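
    The displacement-method variant minimizes the total potential energy 1/2 u'Ku - f'u, whose minimizer solves Ku = f. A generic conjugate-gradient sketch (with the stiffness matrix supplied only as a matrix-vector product, so no factorization and hence no fill-in ever occurs) illustrates why such iterations suit concurrent processing; it is not the specific formulation studied in the paper.

        import numpy as np

        def minimize_strain_energy(K_matvec, f, tol=1e-8, max_iter=1000):
            """Minimize 0.5*u'Ku - f'u (equivalently solve Ku = f) by conjugate
            gradients; only the product K @ v is needed, which parallelizes
            naturally across processors."""
            u = np.zeros_like(f)
            r = f - K_matvec(u)
            p = r.copy()
            rr = r @ r
            for _ in range(max_iter):
                Kp = K_matvec(p)
                alpha = rr / (p @ Kp)
                u += alpha * p
                r -= alpha * Kp
                rr_new = r @ r
                if np.sqrt(rr_new) < tol:
                    break
                p = r + (rr_new / rr) * p
                rr = rr_new
            return u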

  8. Beam efflux measurements

    NASA Technical Reports Server (NTRS)

    Komatsu, G. K.; Stellen, J. M., Jr.

    1976-01-01

    Measurements have been made of the high energy thrust ions, (Group I), high angle/high energy ions (Group II), and high angle/low energy ions (Group IV) of a mercury electron bombardment thruster in the angular divergence range from 0 deg to greater than 90 deg. The measurements have been made as a function of thrust ion current, propellant utilization efficiency, bombardment discharge voltage, screen and accelerator grid potential (accel-decel ratio) and neutralizer keeper potential. The shape of the Group IV (charge exchange) ion plume has remained essentially fixed within the range of variation of the engine operation parameters. The magnitude of the charge exchange ion flux scales with thrust ion current, for good propellant utilization conditions. For fixed thrust ion current, charge exchange ion flux increases for diminishing propellant utilization efficiency. Facility effects influence experimental accuracies within the range of propellant utilization efficiency used in the experiments. The flux of high angle/high energy Group II ions is significantly diminished by the use of minimum decel voltages on the accelerator grid. A computer model of charge exchange ion production and motion has been developed. The program allows computation of charge exchange ion volume production rate, total production rate, and charge exchange ion trajectories for "genuine" and "facilities effects" particles. In the computed flux deposition patterns, the Group I and Group IV ion plumes exhibit a counter motion.

  9. KITTEN Lightweight Kernel 0.1 Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedretti, Kevin; Levenhagen, Michael; Kelly, Suzanne

    2007-12-12

    The Kitten Lightweight Kernel is a simplified OS (operating system) kernel that is intended to manage a compute node's hardware resources. It provides a set of mechanisms to user-level applications for utilizing hardware resources (e.g., allocating memory, creating processes, accessing the network). Kitten is much simpler than general-purpose OS kernels, such as Linux or Windows, but includes all of the essential functionality needed to support HPC (high-performance computing) MPI, PGAS and OpenMP applications. Kitten provides unique capabilities such as physically contiguous application memory, transparent large page support, and noise-free tick-less operation, which enable HPC applications to obtain greater efficiency and scalability than with general purpose OS kernels.

  10. Probing RNA Native Conformational Ensembles with Structural Constraints.

    PubMed

    Fonseca, Rasmus; van den Bedem, Henry; Bernauer, Julie

    2016-05-01

    Noncoding ribonucleic acids (RNA) play a critical role in a wide variety of cellular processes, ranging from regulating gene expression to post-translational modification and protein synthesis. Their activity is modulated by highly dynamic exchanges between three-dimensional conformational substates, which are difficult to characterize experimentally and computationally. Here, we present an innovative, entirely kinematic computational procedure to efficiently explore the native ensemble of RNA molecules. Our procedure projects degrees of freedom onto a subspace of conformation space defined by distance constraints in the tertiary structure. The dimensionality reduction enables efficient exploration of conformational space. We show that the conformational distributions obtained with our method broadly sample the conformational landscape observed in NMR experiments. Compared to normal mode analysis-based exploration, our procedure diffuses faster through the experimental ensemble while also accessing conformational substates to greater precision. Our results suggest that conformational sampling with a highly reduced but fully atomistic representation of noncoding RNA expresses key features of their dynamic nature.
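
    The core projection step can be sketched generically: given a (hypothetical, precomputed) Jacobian J of the distance constraints with respect to the torsional degrees of freedom, random moves are restricted to the null space of J so the constraints are preserved to first order. This is only the linear-algebra skeleton of such kinematic sampling, not the authors' implementation.

        import numpy as np

        def constrained_move(J, step_size, rng):
            """Project a random torsion perturbation onto the null space of the
            constraint Jacobian J (m constraints x n torsions); assumes the
            constraints leave some residual degrees of freedom."""
            _, s, Vt = np.linalg.svd(J)
            rank = int(np.sum(s > 1e-10 * s.max()))
            null_basis = Vt[rank:].T                  # n x (n - rank)
            dq = rng.standard_normal(J.shape[1])
            dq_proj = null_basis @ (null_basis.T @ dq)
            return step_size * dq_proj / np.linalg.norm(dq_proj)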

  11. Factors influencing use of an e-health website in a community sample of older adults.

    PubMed

    Czaja, Sara J; Sharit, Joseph; Lee, Chin Chin; Nair, Sankaran N; Hernández, Mario A; Arana, Neysarí; Fu, Shih Hua

    2013-01-01

    The use of the internet as a source of health information and link to healthcare services has raised concerns about the ability of consumers, especially vulnerable populations such as older adults, to access these applications. This study examined the influence of training on the ability of adults (aged 45+ years) to use the Medicare.gov website to solve problems related to health management. The influence of computer experience and cognitive abilities on performance was also examined. Seventy-one participants, aged 47-92, were randomized into a Multimedia training, Unimodal training, or Cold Start condition and completed three healthcare management problems. MEASUREMENT AND ANALYSES: Computer/internet experience was measured via questionnaire, and cognitive abilities were assessed using standard neuropsychological tests. Performance metrics included measures of navigation, accuracy and efficiency. Data were analyzed using analysis of variance, χ(2) and regression techniques. The data indicate that there was no difference among the three conditions on measures of accuracy, efficiency, or navigation. However, results of the regression analyses showed that, overall, people who received training performed better on the tasks, as evidenced by greater accuracy and efficiency. Performance was also significantly influenced by prior computer experience and cognitive abilities. Participants with more computer experience and higher cognitive abilities performed better. The findings indicate that training, experience, and abilities are important when using complex health websites. However, training alone is not sufficient. The complexity of web content needs to be considered to ensure successful use of these websites by those with lower abilities.

  12. Factors influencing use of an e-health website in a community sample of older adults

    PubMed Central

    Sharit, Joseph; Lee, Chin Chin; Nair, Sankaran N; Hernández, Mario A; Arana, Neysarí; Fu, Shih Hua

    2013-01-01

    Objective The use of the internet as a source of health information and link to healthcare services has raised concerns about the ability of consumers, especially vulnerable populations such as older adults, to access these applications. This study examined the influence of training on the ability of adults (aged 45+ years) to use the Medicare.gov website to solve problems related to health management. The influence of computer experience and cognitive abilities on performance was also examined. Design Seventy-one participants, aged 47–92, were randomized into a Multimedia training, Unimodal training, or Cold Start condition and completed three healthcare management problems. Measurement and analyses Computer/internet experience was measured via questionnaire, and cognitive abilities were assessed using standard neuropsychological tests. Performance metrics included measures of navigation, accuracy and efficiency. Data were analyzed using analysis of variance, χ2 and regression techniques. Results The data indicate that there was no difference among the three conditions on measures of accuracy, efficiency, or navigation. However, results of the regression analyses showed that, overall, people who received training performed better on the tasks, as evidenced by greater accuracy and efficiency. Performance was also significantly influenced by prior computer experience and cognitive abilities. Participants with more computer experience and higher cognitive abilities performed better. Conclusions The findings indicate that training, experience, and abilities are important when using complex health websites. However, training alone is not sufficient. The complexity of web content needs to be considered to ensure successful use of these websites by those with lower abilities. PMID:22802269

  13. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
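
    The figures of merit listed above combine in the usual way (standard photovoltaic definitions, independent of the TJC model), with P_in the incident AM0 power:

        \eta = \frac{V_{oc}\, I_{sc}\, FF}{P_{in}}, \qquad
        FF = \frac{V_{mp}\, I_{mp}}{V_{oc}\, I_{sc}}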

  14. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel scheme of Hierarchical GA was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search over the few parameters that are most sensitive to system performance. Low-level GAs then search in more detail, employing a greater number of parameters for further optimization. As a result, the complexity of the search is reduced and computing resources are used more efficiently.
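
    A minimal two-level sketch of the scheme, with an entirely generic real-coded GA and hypothetical parameter counts; the nesting (an outer GA over the sensitive parameters, a short inner GA refining the rest for each outer candidate) is the point, not the particular operators, which are illustrative assumptions.

        import numpy as np

        def ga(fitness, n_params, pop=40, gens=60, sigma=0.1, seed=0):
            """Tiny real-coded GA on [0, 1]^n: keep the best half, recombine
            random parent pairs by averaging, apply Gaussian mutation."""
            rng = np.random.default_rng(seed)
            P = rng.random((pop, n_params))
            for _ in range(gens):
                f = np.array([fitness(x) for x in P])
                elite = P[np.argsort(f)[-pop // 2:]]                 # best half
                pairs = rng.integers(0, len(elite), (pop, 2))
                P = 0.5 * (elite[pairs[:, 0]] + elite[pairs[:, 1]])  # crossover
                P += rng.normal(0.0, sigma, P.shape)                 # mutation
                P = np.clip(P, 0.0, 1.0)
                P[0] = elite[-1]                                     # elitism
            f = np.array([fitness(x) for x in P])
            return P[np.argmax(f)]

        def hierarchical_ga(fitness, n_coarse, n_fine):
            """Outer GA tunes the few sensitive parameters; for each candidate,
            a short inner GA refines the remaining parameters (expensive, but
            illustrates the two-level structure)."""
            def coarse_fitness(xc):
                xf = ga(lambda x: fitness(np.concatenate([xc, x])),
                        n_fine, pop=20, gens=15)
                return fitness(np.concatenate([xc, xf]))
            best_coarse = ga(coarse_fitness, n_coarse, pop=20, gens=20)
            best_fine = ga(lambda x: fitness(np.concatenate([best_coarse, x])), n_fine)
            return np.concatenate([best_coarse, best_fine])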

  15. Progress in aeronautical research and technology applicable to civil air transports

    NASA Technical Reports Server (NTRS)

    Bower, R. E.

    1981-01-01

    Recent progress in the aeronautical research and technology program being conducted by the United States National Aeronautics and Space Administration is discussed. Emphasis is on computational capability, new testing facilities, drag reduction, turbofan and turboprop propulsion, noise, composite materials, active controls, integrated avionics, cockpit displays, flight management, and operating problems. It is shown that this technology is significantly impacting the efficiency of the new civil air transports. The excitement of emerging research promises even greater benefits to future aircraft developments.

  16. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for image processing. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices that capture images of objects hidden under opaque clothes. For images with high noise, we develop an approach that suppresses the noise during computer processing and yields a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution for the security problem.

  17. Field emission electric propulsion thruster modeling and simulation

    NASA Astrophysics Data System (ADS)

    Vanderwyst, Anton Sivaram

    Electric propulsion allows space rockets a much greater range of capabilities with mass efficiencies that are 1.3 to 30 times greater than chemical propulsion. Field emission electric propulsion (FEEP) thrusters provide a specific design that possesses extremely high efficiency and small impulse bits. Depending on mass flow rate, these thrusters can emit both ions and droplets. To date, fundamental experimental work has been limited in FEEP. In particular, detailed individual droplet mechanics have yet to be understood. In this thesis, theoretical and computational investigations are conducted to examine the physical characteristics associated with droplet dynamics relevant to FEEP applications. Both asymptotic analysis and numerical simulations, based on a new approach combining level set and boundary element methods, were used to simulate 2D-planar and 2D-axisymmetric probability density functions of the droplets produced for a given geometry and electrode potential. The combined algorithm allows the simulation of electrostatically-driven liquids up to and after detachment. Second order accuracy in space is achieved using a volume of fluid correction. The simulations indicate that in general, (i) lowering surface tension, viscosity, and potential, or (ii) enlarging electrode rings, and needle tips reduce operational mass efficiency. Among these factors, surface tension and electrostatic potential have the largest impact. A probability density function for the mass to charge ratio (MTCR) of detached droplets is computed, with a peak around 4,000 atoms per electron. High impedance surfaces, strong electric fields, and large liquid surface tension result in a lower MTCR ratio, which governs FEEP droplet evolution via the charge on detached droplets and their corresponding acceleration. Due to the slow mass flow along a FEEP needle, viscosity is of less importance in altering the droplet velocities. The width of the needle, the composition of the propellant, the current and the mass efficiency are interrelated. The numerical simulations indicate that more electric power per Newton of thrust on a narrow needle with a thin, high surface tension fluid layer gives better performance.

  18. Comparison of manual versus semiautomatic milk recording systems in dairy goats.

    PubMed

    Ait-Saidi, A; Caja, G; Carné, S; Salama, A A K; Ghirardi, J J

    2008-04-01

    A total of 24 Murciano-Granadina dairy goats in early-midlactation were used to compare the labor time and data collection efficiency of using manual (M) vs. semiautomated (SA) systems for milk recording. Goats were milked once daily in a 2 x 12 parallel platform, with 6 milking units on each side. The M system used visual identification (ID) by large plastic ear tags, on-paper data recording, and data manually uploaded to a computer. The SA system used electronic ID, automatic ID, manual data recording on reader keyboard, and automatic data uploading to computer by Bluetooth connection. Data were collected for groups of 2 x 12 goats for 15 test days of each system during a period of 70 d. Time data were converted to a decimal scale. No difference in milk recording time between M and SA (1.32 +/- 0.03 and 1.34 +/- 0.03 min/goat, respectively) was observed. Time needed for transferring data to the computer was greater for M when compared with SA (0.20 +/- 0.01 and 0.05 +/- 0.01 min/goat). Overall milk recording time was greater in M than in SA (1.52 +/- 0.04 vs. 1.39 +/- 0.04 min/goat), the latter decreasing with operator training. Time for transferring milk recording data to the computer was 4.81 +/- 0.34 and 1.09 +/- 0.10 min for M and SA groups of 24 goats, respectively, but only increased by 0.19 min in SA for each additional 24 goats. No difference in errors of data acquisition was detected between M and SA systems during milk recording (0.6%), but an additional 1.1% error was found in the M system during data uploading. Predicted differences between M and SA increased with the number of goats processed on the test-day. Reduction in labor time cost ranged from €0.5 to €12.9 (US$0.7 to 17.4) per milk recording, according to number of goats from 24 to 480 goats and accounted for 40% of the electronic ID costs. In conclusion, electronic ID was more efficient for labor costs and resulted in fewer data errors, the benefit being greater with trained operators and larger goat herds.

  19. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  20. Genotype Imputation with Millions of Reference Samples.

    PubMed

    Browning, Brian L; Browning, Sharon R

    2016-01-07

    We present a genotype imputation method that scales to millions of reference samples. The imputation method, based on the Li and Stephens model and implemented in Beagle v.4.1, is parallelized and memory efficient, making it well suited to multi-core computer processors. It achieves fast, accurate, and memory-efficient genotype imputation by restricting the probability model to markers that are genotyped in the target samples and by performing linear interpolation to impute ungenotyped variants. We compare Beagle v.4.1 with Impute2 and Minimac3 by using 1000 Genomes Project data, UK10K Project data, and simulated data. All three methods have similar accuracy but different memory requirements and different computation times. When imputing 10 Mb of sequence data from 50,000 reference samples, Beagle's throughput was more than 100× greater than Impute2's throughput on our computer servers. When imputing 10 Mb of sequence data from 200,000 reference samples in VCF format, Minimac3 consumed 26× more memory per computational thread and 15× more CPU time than Beagle. We demonstrate that Beagle v.4.1 scales to much larger reference panels by performing imputation from a simulated reference panel having 5 million samples and a mean marker density of one marker per four base pairs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  1. Automated land-use mapping from spacecraft data. [Oakland County, Michigan

    NASA Technical Reports Server (NTRS)

    Chase, P. E. (Principal Investigator); Rogers, R. H.; Reed, L. E.

    1974-01-01

    The author has identified the following significant results. In response to the need for a faster, more economical means of producing land use maps, this study evaluated the suitability of using ERTS-1 computer compatible tape (CCT) data as a basis for automatic mapping. Significant findings are: (1) automatic classification accuracy greater than 90% is achieved on categories of deep and shallow water, tended grass, rangeland, extractive (bare earth), urban, forest land, and nonforested wet lands; (2) computer-generated printouts by target class provide a quantitative measure of land use; and (3) the generation of map overlays showing land use from ERTS-1 CCTs offers a significant breakthrough in the rate at which land use maps are generated. Rather than uncorrected classified imagery or computer line printer outputs, the processing results in geometrically-corrected computer-driven pen drawing of land categories, drawn on a transparent material at a scale specified by the operator. These map overlays are economically produced and provide an efficient means of rapidly updating maps showing land use.

  2. Study on the spectral efficiency of SFH-GMSK in land mobile telephone communication by direct simulation

    NASA Technical Reports Server (NTRS)

    Bao, H.; Kwatra, S. C.; Kim, Junghwan; Stevens, G. H.

    1990-01-01

    A spread spectrum system, slow frequency hopping with GMSK (Gaussian minimum shift keying) modulation (SFH-GMSK), is proposed for mobile telephone communications. The system performance is evaluated using computer simulation and is compared with an unspread system. Results show that under multipath fading conditions, when the signal-to-noise ratio (SNR) is greater than 15 dB, slow frequency hopping gives some bit error rate improvement over the unspread system. Theoretical predictions indicate that a system efficiency of 20-65 users per cell can be achieved in the cellular configuration. Joint use of SFH-GMSK and FM is also investigated. It is shown that FM interference can cause serious degradation to the SFH-GMSK performance.

  3. Radial Inflow Turboexpander Redesign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William G. Price

    2001-09-24

    Steamboat Envirosystems, LLC (SELC) was awarded a grant in accordance with the DOE Enhanced Geothermal Systems Project Development. Atlas-Copco Rotoflow (ACR), a radial expansion turbine manufacturer, was responsible for the manufacturing of the turbine and the creation of the new computer program. SB Geo, Inc. (SBG), the facility operator, monitored and assisted ACR's activities as well as provided installation and startup assistance. The primary scope of the project is the redesign of an axial flow turbine to a radial inflow turboexpander to provide increased efficiency and reliability at an existing facility. In addition to the increased efficiency and reliability, the redesign includes an improved reduction gear design, an improved shaft seal design, an upgraded control system, and greater flexibility of application.

  4. Mobility, Emotion, and Universality in Future Collaboration

    NASA Astrophysics Data System (ADS)

    Chignell, Mark; Hosono, Naotsune; Fels, Deborah; Lottridge, Danielle; Waterworth, John

    The graphical user interface has traditionally supported personal productivity, efficiency, and usability. With computer supported cooperative work, the focus has been on typical people doing typical work in a highly rational model of interaction. Recent trends towards mobility, and emotional and universal design, are extending the user interface paradigm beyond the routine. As computing moves into the hand and away from the desktop, there is a greater need for dealing with emotions and distractions. Busy and distracted people represent a new kind of disability, but one that will be increasingly prevalent. In this panel we examine the current state of the art and prospects for future collaboration in non-normative computing requirements. This panel draws together researchers who are studying the problems of mobility, emotion and universality. The goal of the panel is to discuss how progress in these areas will change the nature of future collaboration.

  5. Nonlinear power flow feedback control for improved stability and performance of airfoil sections

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2013-09-03

    A computer-implemented method of determining the pitch stability of an airfoil system, comprising using a computer to numerically integrate a differential equation of motion that includes terms describing PID controller action. In one model, the differential equation characterizes the time-dependent response of the airfoil's pitch angle, α. The computer model calculates limit-cycles of the model, which represent the stability boundaries of the airfoil system. Once the stability boundary is known, feedback control can be implemented, by using, for example, a PID controller to control a feedback actuator. The method allows the PID controller gain constants, K_I, K_p, and K_d, to be optimized. This permits operation closer to the stability boundaries, while preventing the physical apparatus from unintentionally crossing the stability boundaries. Operating closer to the stability boundaries permits greater power efficiencies to be extracted from the airfoil system.
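
    A rough sketch of the kind of computation the patent describes, numerically integrating a pitch equation with PID terms, is given below. The second-order dynamics, coefficients, and gains are illustrative assumptions, not the patented model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Hypothetical pitch dynamics with PID feedback on the pitch angle alpha:
      #   I*a'' + c*a' + k*a + k3*a^3 = -(K_p*a + K_i*integral(a) + K_d*a')
      # State y = [alpha, alpha_dot, integral_of_alpha].
      I, c, k, k3 = 1.0, 0.05, -0.2, 1.0     # illustrative structural coefficients
      K_p, K_i, K_d = 0.5, 0.05, 0.2         # illustrative PID gains

      def rhs(t, y):
          a, a_dot, a_int = y
          u = -(K_p * a + K_i * a_int + K_d * a_dot)   # PID control moment
          return [a_dot, (u - c * a_dot - k * a - k3 * a**3) / I, a]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.05)
      # With these gains the response decays; gains outside the stability boundary
      # would instead give growing oscillations or a limit cycle set by the cubic term.
      print("final |alpha| =", abs(sol.y[0, -1]))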

  6. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    PubMed

    Xin Yang; Kwang-Ting Cheng

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
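
    The core binary test is easy to sketch. The toy version below binarizes intensity differences over all grid-cell pairs and matches with Hamming distance; LLDB additionally uses gradient cues and an AdaBoost-selected subset of cell pairs, which this sketch omits.

      import numpy as np

      def grid_binary_descriptor(patch, grid=4):
          """Toy local-difference binary descriptor: split the patch into grid x grid
          cells, compute mean intensity per cell, and binarize all pairwise
          cell-mean differences."""
          h, w = patch.shape
          cells = patch[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
          means = cells.mean(axis=(1, 3)).ravel()        # grid*grid cell means
          i, j = np.triu_indices(len(means), k=1)        # all cell pairs
          return (means[i] > means[j]).astype(np.uint8)  # binary string

      def hamming(a, b):
          return int(np.count_nonzero(a != b))

      rng = np.random.default_rng(1)
      p = rng.random((32, 32))
      q = p + 0.01 * rng.random((32, 32))   # slightly perturbed copy of the same patch
      r = rng.random((32, 32))              # unrelated patch
      print(hamming(grid_binary_descriptor(p), grid_binary_descriptor(q)),
            hamming(grid_binary_descriptor(p), grid_binary_descriptor(r)))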

  7. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
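
    As a small illustration of the Broyden idea discussed in the report, the dense 'good Broyden' iteration below maintains an approximate Jacobian and updates it from the secant condition. The report's solver is a limited-memory, large-scale variant; the toy system and starting point here are illustrative.

      import numpy as np

      def broyden(F, x0, tol=1e-10, max_iter=50):
          """Dense 'good Broyden' iteration: keep an approximate Jacobian B and
          update it from the secant condition B_new @ dx = dF."""
          x = np.asarray(x0, dtype=float)
          B = np.eye(len(x))                   # start from the identity approximation
          f = F(x)
          for _ in range(max_iter):
              dx = np.linalg.solve(B, -f)      # quasi-Newton step
              x_new = x + dx
              f_new = F(x_new)
              df = f_new - f
              # Rank-one secant update of the Jacobian approximation.
              B += np.outer(df - B @ dx, dx) / (dx @ dx)
              x, f = x_new, f_new
              if np.linalg.norm(f) < tol:
                  break
          return x

      # Toy system: x^2 + y^2 = 4, x*y = 1.
      F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
      print(broyden(F, [2.0, 0.5]))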

  8. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
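
    The qualitative behavior the model predicts can be reproduced with a toy calculation: vector efficiency is the fraction of lanes doing useful work as the particle bank drains. The geometric event-count distribution, single event type, and parameters below are illustrative assumptions, not the authors' model.

      import numpy as np

      def vector_efficiency(bank_size, vector_width, mean_events=10.0, seed=0):
          """Toy estimate of event-based vector efficiency: each particle undergoes a
          geometrically distributed number of events; at each event-iteration the
          surviving particles are processed in fixed-width vector chunks (ignoring the
          partition into event types), so the last, partially full chunk wastes lanes."""
          rng = np.random.default_rng(seed)
          events_left = rng.geometric(1.0 / mean_events, size=bank_size)
          useful = wasted = 0
          while (alive := int((events_left > 0).sum())) > 0:
              chunks = -(-alive // vector_width)        # ceiling division
              useful += alive
              wasted += chunks * vector_width - alive   # idle vector lanes
              events_left[events_left > 0] -= 1
          return useful / (useful + wasted)

      for n in (4, 20, 100):   # bank size as a multiple of the vector width
          print(n, round(vector_efficiency(n * 8, 8), 3))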

  9. Optical design of an in vivo laparoscopic lighting system.

    PubMed

    Liu, Xiaolong; Abdolmalaki, Reza Yazdanpanah; Mancini, Gregory J; Tan, Jindong

    2017-12-01

    This paper proposes an in vivo laparoscopic lighting system design to address the illumination issues, namely poor lighting uniformity and low optical efficiency, that exist in state-of-the-art in vivo laparoscopic cameras. The transformable design of the laparoscopic lighting system is capable of carrying purposefully designed freeform optical lenses for achieving lighting performance with high illuminance uniformity and high optical efficiency in a desired target region. To design freeform optical lenses for extended light sources such as LEDs with Lambertian light intensity distributions, we present an effective and complete freeform optical design method. The procedures include (1) ray map computation by numerically solving a standard Monge-Ampere equation; (2) initial freeform optical surface construction by using Snell's law and a lens volume restriction; (3) correction of surface normal vectors due to accumulated errors from the initially constructed surfaces; and (4) feedback modification of the solution to deal with degraded illuminance uniformity caused by the extended sizes of the LEDs. We employed an optical design software package to evaluate the performance of our laparoscopic lighting system design. The simulation results show that our design achieves greater than 95% illuminance uniformity and greater than 89% optical efficiency (considering Fresnel losses) for illuminating the target surgical region. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
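
    For reference, the ray-map step amounts to solving a Monge-Ampere type equation for a potential whose gradient gives the mapping; one common formulation (an assumption here, since the abstract does not reproduce the paper's exact normalization) reads

      \det\!\big(\nabla^2 u(x)\big) \;=\; \frac{I(x)}{E\big(\nabla u(x)\big)}, \qquad T(x) = \nabla u(x),

    where I is the source intensity distribution, E the prescribed target illuminance, and T the ray map.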

  10. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  11. SURPHEX (tm): New dry photopolymers for replication of surface relief diffractive optics

    NASA Technical Reports Server (NTRS)

    Shvartsman, Felix P.

    1993-01-01

    High efficiency, deep groove, surface relief Diffractive Optical Elements (DOE) with various optical functions can be recorded in a photoresist using conventional interferometric holographic and computer generated photolithographic recording techniques. While photoresist recording media are satisfactory for recording individual surface relief DOE, a reliable and precise method is needed to replicate these diffractive microstructures to maintain the high aspect ratio in each replicated DOE. The term 'high aspect ratio' means that the depth of a groove is substantially greater, i.e. 2, 3, or more times greater, than the width of the groove. A new family of dry photopolymers SURPHEX was developed recently at Du Pont to replicate such highly efficient, deep groove DOE's. SURPHEX photopolymers are being utilized in Du Pont's proprietary Dry Photopolymer Embossing (DPE) technology to replicate with a very high degree of precision almost any type of surface relief DOE. Surface relief microstructures with a width/depth aspect ratio of 1:20 (0.1 micron/2.0 micron) were faithfully replicated by DPE technology. Several types of plastic and glass/quartz optical substrates can be used for economical replication of DOE.

  12. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants

    PubMed Central

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-01-01

    Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test—a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117

  13. Greater power and computational efficiency for kernel-based association testing of sets of genetic variants.

    PubMed

    Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer

    2014-11-15

    Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test, a score test, with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13 500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. heckerma@microsoft.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
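
    As an illustration of the score-type statistic at the heart of such set-based tests, the sketch below computes Q = r'Kr for a weighted linear kernel over a variant set under an intercept-only null model. It is illustrative only; the paper's score and LR tests additionally model background structure and derive p-values from the null distribution of the statistic.

      import numpy as np

      def kernel_score_statistic(y, G, weights=None):
          """Score-type set-association statistic Q = r' K r, where r are residuals of
          the phenotype under a simple null model (intercept only) and K = G W G' is a
          weighted linear kernel over the variant set."""
          y = np.asarray(y, float)
          G = np.asarray(G, float)                  # n_samples x n_variants genotypes
          w = np.ones(G.shape[1]) if weights is None else np.asarray(weights, float)
          r = y - y.mean()                          # null-model residuals
          return float(((r @ G) ** 2 * w).sum())    # r' G W G' r without forming K

      # Toy data: 200 individuals, 10 variants, variant 0 weakly associated.
      rng = np.random.default_rng(2)
      G = rng.binomial(2, 0.3, size=(200, 10))
      y = 0.4 * G[:, 0] + rng.normal(size=200)
      print(kernel_score_statistic(y, G))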

  14. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  15. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  16. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

    In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion. No landmarks need to be specified for correspondence. In our work, we demonstrate significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolution framework. The algorithm was applied to 2D short axis cardiac MRI images and brain MRI images for testing and comparison.
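
    The curl-removal step can be illustrated compactly on a periodic domain, where the Laplacian solve reduces to a division in Fourier space; the sketch below uses that FFT solve purely as a stand-in for the multigrid inversion described in the paper, and is not the authors' algorithm.

      import numpy as np

      def remove_curl_periodic(u, v):
          """Project a periodic 2D vector field (u, v) onto its curl-free part by
          solving a Poisson equation for the stream function in Fourier space."""
          n, m = u.shape
          kx = 2j * np.pi * np.fft.fftfreq(m)[None, :]
          ky = 2j * np.pi * np.fft.fftfreq(n)[:, None]
          # Vorticity (curl) of the field, then the stream function psi with lap(psi) = curl.
          curl_hat = kx * np.fft.fft2(v) - ky * np.fft.fft2(u)
          lap = kx**2 + ky**2
          lap[0, 0] = 1.0                      # avoid division by zero for the mean mode
          psi_hat = curl_hat / lap
          psi_hat[0, 0] = 0.0
          # Subtract the rotational component grad_perp(psi) = (-d psi/dy, d psi/dx).
          u_cf = u - np.real(np.fft.ifft2(-ky * psi_hat))
          v_cf = v - np.real(np.fft.ifft2(kx * psi_hat))
          return u_cf, v_cf

      # Toy check on a random periodic field.
      rng = np.random.default_rng(3)
      u_cf, v_cf = remove_curl_periodic(rng.normal(size=(64, 64)), rng.normal(size=(64, 64)))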

  17. More efficient optimization of long-term water supply portfolios

    NASA Astrophysics Data System (ADS)

    Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.

    2009-03-01

    The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
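
    The control variate idea itself is compact: adjust the Monte Carlo output by a correlated input whose true mean is known. The sketch below is illustrative only; the inflow/cost relationship and all numbers are hypothetical, not the paper's portfolio model.

      import numpy as np

      def control_variate_mean(y, x, x_mean):
          """Control-variate estimate of E[y] using a correlated input x whose true
          mean x_mean is known (e.g. mean reservoir inflow or demand)."""
          c = np.cov(y, x)[0, 1] / np.var(x, ddof=1)   # estimated optimal coefficient
          return np.mean(y - c * (x - x_mean))

      # Toy 'simulation': cost is a noisy function of a stochastic inflow.
      rng = np.random.default_rng(4)
      inflow = rng.normal(100.0, 15.0, size=2_000)
      cost = 5_000.0 - 30.0 * inflow + rng.normal(0.0, 100.0, size=2_000)
      print(np.mean(cost), control_variate_mean(cost, inflow, 100.0))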

  18. Performance comparison analysis library communication cluster system using merge sort

    NASA Astrophysics Data System (ADS)

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to increase computing speed, multi-processor systems were introduced. This second paradigm is known as parallel computing, for example on a cluster. A cluster must have a communication protocol for processing; one such protocol is the Message Passing Interface (MPI), which has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library suit the characteristics of the problem, so this study aims to analyze and compare the performance of these libraries in handling a parallel computing process. The case studies in this research are MPICH2 and OpenMPI, exercised on a sorting problem to assess the performance of the cluster system. The sorting problem uses the merge sort method. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze the performance of the system under different test scenarios using three parameters: execution time, speedup, and efficiency. The results of this study showed that, as data size increases, OpenMPI and MPICH2 both show average speedup and efficiency that tend to increase, but these decrease at large data sizes; an increased data size does not necessarily increase speedup and efficiency, only execution time, for example at a data size of 100,000. Execution times also differ between the libraries; for example, at a data size of 1,000 the average execution time was 0.009721 with MPICH2 and 0.003895 with OpenMPI. OpenMPI can be customized to communication needs.
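
    The three performance parameters are standard; a minimal sketch of how they are computed from measured times follows, with hypothetical timings.

      def speedup_and_efficiency(t_serial, t_parallel, n_procs):
          """Standard definitions used in such studies: speedup S = T_serial / T_parallel
          and parallel efficiency E = S / number_of_processes."""
          s = t_serial / t_parallel
          return s, s / n_procs

      # Hypothetical timings (seconds) for a merge sort on a 5-node virtual cluster.
      print(speedup_and_efficiency(t_serial=0.0450, t_parallel=0.0120, n_procs=5))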

  19. Impact of Middle vs. Inferior Total Turbinectomy on Nasal Aerodynamics

    PubMed Central

    Dayal, Anupriya; Rhee, John S.; Garcia, Guilherme J. M.

    2016-01-01

    Objectives This computational study aims to: (1) Use virtual surgery to theoretically investigate the maximum possible change in nasal aerodynamics after turbinate surgery; (2) Quantify the relative contributions of the middle and inferior turbinates to nasal resistance and air conditioning; (3) Quantify to what extent total turbinectomy impairs the nasal air conditioning capacity. Study Design Virtual surgery and computational fluid dynamics (CFD). Setting Academic tertiary medical center. Subjects and Methods Ten patients with inferior turbinate hypertrophy were studied. Three-dimensional models of their nasal anatomies were built based on pre-surgery computed tomography scans. Virtual surgery was applied to create models representing either total inferior turbinectomy (TIT) or total middle turbinectomy (TMT). Airflow, heat transfer, and humidity transport were simulated at a 15 L/min steady-state inhalation rate. The surface area stimulated by mucosal cooling was defined as the area where heat fluxes exceed 50 W/m². Results In both virtual total turbinectomy models, nasal resistance decreased and airflow increased. However, the surface area where heat fluxes exceed 50 W/m² either decreased (TIT) or did not change significantly (TMT), suggesting that total turbinectomy may reduce the stimulation of cold receptors by inspired air. Nasal heating and humidification efficiencies decreased significantly after both TIT and TMT. All changes were greater in the TIT models than in the TMT models. Conclusion TIT yields greater increases in nasal airflow, but also impairs the nasal air conditioning capacity to a greater extent than TMT. Radical resection of the turbinates may decrease the surface area stimulated by mucosal cooling. PMID:27165673

  20. Impact of Middle versus Inferior Total Turbinectomy on Nasal Aerodynamics.

    PubMed

    Dayal, Anupriya; Rhee, John S; Garcia, Guilherme J M

    2016-09-01

    This computational study aims to (1) use virtual surgery to theoretically investigate the maximum possible change in nasal aerodynamics after turbinate surgery, (2) quantify the relative contributions of the middle and inferior turbinates to nasal resistance and air conditioning, and (3) quantify to what extent total turbinectomy impairs the nasal air-conditioning capacity. Virtual surgery and computational fluid dynamics. Academic tertiary medical center. Ten patients with inferior turbinate hypertrophy were studied. Three-dimensional models of their nasal anatomies were built according to presurgery computed tomography scans. Virtual surgery was applied to create models representing either total inferior turbinectomy (TIT) or total middle turbinectomy (TMT). Airflow, heat transfer, and humidity transport were simulated at a steady-state inhalation rate of 15 L/min. The surface area stimulated by mucosal cooling was defined as the area where heat fluxes exceed 50 W/m². In both virtual total turbinectomy models, nasal resistance decreased and airflow increased. However, the surface area where heat fluxes exceed 50 W/m² either decreased (TIT) or did not change significantly (TMT), suggesting that total turbinectomy may reduce the stimulation of cold receptors by inspired air. Nasal heating and humidification efficiencies decreased significantly after both TIT and TMT. All changes were greater in the TIT models than in the TMT models. TIT yields greater increases in nasal airflow but also impairs the nasal air-conditioning capacity to a greater extent than TMT. Radical resection of the turbinates may decrease the surface area stimulated by mucosal cooling. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  1. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potential greater accuracy of the META method and the associated greater reliability through use of the matrix condition number have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
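
    A minimal sketch of the META idea on a toy semi-discrete flow problem is given below, together with a rough analogue of the accuracy indicator; the three-node system and diffusivity are illustrative assumptions, not the report's algorithms.

      import numpy as np
      from scipy.linalg import expm

      def meta_advance(A, h0, t):
          """Advance the semi-discrete linear system dh/dt = A h from h(0) = h0
          directly to time t via the matrix exponential, with no intermediate steps."""
          return expm(A * t) @ h0

      # Toy 1D confined-flow problem: three interior nodes, zero-head boundaries;
      # A is the standard second-difference (Laplacian) matrix times a diffusivity.
      A = 0.5 * np.array([[-2.0, 1.0, 0.0],
                          [1.0, -2.0, 1.0],
                          [0.0, 1.0, -2.0]])
      h0 = np.array([1.0, 2.0, 1.0])
      print(meta_advance(A, h0, t=4.0))

      # Rough analogue of the paper's accuracy indicator: the condition number of A
      # bounds how many significant figures may be lost in the computation.
      print("condition number:", np.linalg.cond(A))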

  2. Computational Study of an Axisymmetric Dual Throat Fluidic Thrust Vectoring Nozzle for a Supersonic Aircraft Application

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Flamm, Jeffrey D.; Berrier, Bobby L.; Johnson, Stuart K.

    2007-01-01

    A computational investigation of an axisymmetric Dual Throat Nozzle concept has been conducted. This fluidic thrust-vectoring nozzle was designed with a recessed cavity to enhance the throat shifting technique for improved thrust vectoring. The structured-grid, unsteady Reynolds-Averaged Navier-Stokes flow solver PAB3D was used to guide the nozzle design and analyze performance. Nozzle design variables included extent of circumferential injection, cavity divergence angle, cavity length, and cavity convergence angle. Internal nozzle performance (wind-off conditions) and thrust vector angles were computed for several configurations over a range of nozzle pressure ratios from 1.89 to 10, with the fluidic injection flow rate equal to zero and up to 4 percent of the primary flow rate. The effect of a variable expansion ratio on nozzle performance over a range of freestream Mach numbers up to 2 was investigated. Results indicated that a 60° circumferential injection was a good compromise between large thrust vector angles and efficient internal nozzle performance. A cavity divergence angle greater than 10° was detrimental to thrust vector angle. Shortening the cavity length improved internal nozzle performance with a small penalty to thrust vector angle. Contrary to expectations, a variable expansion ratio did not improve thrust efficiency at the flight conditions investigated.

  3. Highway-runoff quality, and treatment efficiencies of a hydrodynamic-settling device and a stormwater-filtration device in Milwaukee, Wisconsin

    USGS Publications Warehouse

    Horwatich, Judy A.; Bannerman, Roger T.; Pearson, Robert

    2011-01-01

    The treatment efficiencies of two prefabricated stormwater-treatment devices were tested at a freeway site in a high-density urban part of Milwaukee, Wisconsin. One treatment device is categorized as a hydrodynamic-settling device (HSD), which removes pollutants by sedimentation and flotation. The other treatment device is categorized as a stormwater-filtration device (SFD), which removes pollutants by filtration and sedimentation. During runoff events, flow measurements were recorded and water-quality samples were collected at the inlet and outlet of each device. Efficiency-ratio and summation-of-load (SOL) calculations were used to estimate the treatment efficiency of each device. Event-mean concentrations and loads that were decreased by passing through the HSD include total suspended solids (TSS), suspended sediment (SS), total phosphorus (TP), total copper (TCu), and total zinc (TZn). The efficiency ratios for these constituents were 42, 57, 17, 33, and 23 percent, respectively. The SOL removal rates for these constituents were 25, 49, 10, 27, and 16 percent, respectively. Event-mean concentrations and loads that increased by passing through the HSD include chloride (Cl), total dissolved solids (TDS), and dissolved zinc (DZn). The efficiency ratios for these constituents were -347, -177, and 20 percent, respectively. Four constituents—dissolved phosphorus (DP), chemical oxygen demand (COD), total polycyclic aromatic hydrocarbon (PAH), and dissolved copper (DCu)—are not included in the list of computed efficiency ratios and SOLs because the sampled inlet and outlet pairs were not significantly different. Event-mean concentrations and loads that decreased by passing through the SFD include TSS, SS, TP, DCu, TCu, DZn, TZn, and COD. The efficiency ratios for these constituents were 59, 90, 40, 21, 66, 23, 66, and 18 percent, respectively. The SOLs for these constituents were 50, 89, 37, 19, 60, 20, 65, and 21 percent, respectively. Two constituents—DP and PAH—are not included in the lists of computed efficiency ratios and SOLs because the sampled inlet and outlet pairs were not significantly different. Similar to the HSD, the average efficiency ratios and SOLs for TDS and Cl were negative. Flow rates, high concentrations of SS, and particle-size distributions (PSD) can affect the treatment efficacies of the two devices. Flow rates equal to or greater than the design flow rate of the HSD had minimal or negative removal efficiencies for TSS and SS loads. Similar TSS removal efficiencies were observed at the SFD, but SS was consistently removed throughout the flow regime. Removal efficiencies were high for both devices when concentrations of SS and TSS approached 200 mg/L. A small number of runoff events were analyzed for PSD; the average sand content at the HSD was 33 percent and at the SFD was 71 percent. The 71-percent sand content may reflect the 90-percent removal efficiency of SS at the SFD. Particles retained at the bottom of both devices were largely sand-size or greater.
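
    A short sketch of the two treatment-efficiency measures, as commonly defined (the report's exact formulas may differ slightly), with hypothetical event data:

      import numpy as np

      def efficiency_ratio(emc_in, emc_out):
          """Efficiency ratio (percent) based on event-mean concentrations (EMCs):
          100 * (1 - mean outlet EMC / mean inlet EMC)."""
          return 100.0 * (1.0 - np.mean(emc_out) / np.mean(emc_in))

      def summation_of_loads(load_in, load_out):
          """Summation-of-loads (SOL) removal (percent):
          100 * (1 - total outlet load / total inlet load)."""
          return 100.0 * (1.0 - np.sum(load_out) / np.sum(load_in))

      # Hypothetical TSS data for four monitored runoff events (mg/L and kg).
      emc_in, emc_out = [210.0, 95.0, 150.0, 60.0], [90.0, 70.0, 80.0, 45.0]
      load_in, load_out = [12.0, 3.5, 8.0, 2.0], [8.5, 2.8, 6.0, 1.7]
      print(round(efficiency_ratio(emc_in, emc_out), 1),
            round(summation_of_loads(load_in, load_out), 1))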

  4. Ultralow power artificial synapses using nanotextured magnetic Josephson junctions.

    PubMed

    Schneider, Michael L; Donnelly, Christine A; Russek, Stephen E; Baek, Burm; Pufall, Matthew R; Hopkins, Peter F; Dresselhaus, Paul D; Benz, Samuel P; Rippard, William H

    2018-01-01

    Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies.

  5. Ultralow power artificial synapses using nanotextured magnetic Josephson junctions

    PubMed Central

    Schneider, Michael L.; Donnelly, Christine A.; Russek, Stephen E.; Baek, Burm; Pufall, Matthew R.; Hopkins, Peter F.; Dresselhaus, Paul D.; Benz, Samuel P.; Rippard, William H.

    2018-01-01

    Neuromorphic computing promises to markedly improve the efficiency of certain computational tasks, such as perception and decision-making. Although software and specialized hardware implementations of neural networks have made tremendous accomplishments, both implementations are still many orders of magnitude less energy efficient than the human brain. We demonstrate a new form of artificial synapse based on dynamically reconfigurable superconducting Josephson junctions with magnetic nanoclusters in the barrier. The spiking energy per pulse varies with the magnetic configuration, but in our demonstration devices, the spiking energy is always less than 1 aJ. This compares very favorably with the roughly 10 fJ per synaptic event in the human brain. Each artificial synapse is composed of a Si barrier containing Mn nanoclusters with superconducting Nb electrodes. The critical current of each synapse junction, which is analogous to the synaptic weight, can be tuned using input voltage spikes that change the spin alignment of Mn nanoclusters. We demonstrate synaptic weight training with electrical pulses as small as 3 aJ. Further, the Josephson plasma frequencies of the devices, which determine the dynamical time scales, all exceed 100 GHz. These new artificial synapses provide a significant step toward a neuromorphic platform that is faster, more energy-efficient, and thus can attain far greater complexity than has been demonstrated with other technologies. PMID:29387787

  6. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    PubMed Central

    Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.

    2014-01-01

    Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
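
    A toy Python rendering of the interface idea is shown below: the simulator only iterates connection pairs produced by a generator object and applies its own low-level connect routine. The class and function names are hypothetical; the real connection generator interface (as used with NEST, PyNN and the connection-set algebra libraries) is a richer C++/Python API.

      from typing import Iterator, Tuple

      class AllToAllGenerator:
          """Toy connection generator yielding (source, target) index pairs for an
          all-to-all rule; stands in for a connectivity-generating library."""
          def __init__(self, n_sources: int, n_targets: int, allow_self: bool = True):
              self.n_sources, self.n_targets, self.allow_self = n_sources, n_targets, allow_self

          def __iter__(self) -> Iterator[Tuple[int, int]]:
              for s in range(self.n_sources):
                  for t in range(self.n_targets):
                      if self.allow_self or s != t:
                          yield (s, t)

      def build_network(generator, connect):
          """A 'simulator' only needs to iterate the generator and call its own
          low-level connect() routine for each pair."""
          for s, t in generator:
              connect(s, t)

      build_network(AllToAllGenerator(3, 3, allow_self=False), lambda s, t: print(s, "->", t))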

  7. Reaching out to clinicians: implementation of a computerized alert system.

    PubMed

    Degnan, Dan; Merryfield, Dave; Hultgren, Steve

    2004-01-01

    Several published articles have identified that providing automated, computer-generated clinical alerts about potentially critical clinical situations should result in better quality of care. In 1999, the pharmacy department at a community hospital network implemented and refined a commercially available, computerized clinical alert system. This case report discusses the implementation process, gives examples of how the system is used, and describes results following implementation. The use of the clinical alert system in this hospital network resulted in improved patient safety as well as in greater efficiency and decreased costs.

  8. Tablet computers in assessing performance in a high stakes exam: opinion matters.

    PubMed

    Currie, G P; Sinha, S; Thomson, F; Cleland, J; Denison, A R

    2017-06-01

    Background Tablet computers have emerged as a tool to capture, process and store data in examinations, yet evidence relating to their acceptability and usefulness in assessment is limited. Methods We performed an observational study to explore opinions and attitudes relating to tablet computer use in recording performance in a final year objective structured clinical examination at a single UK medical school. Examiners completed a short questionnaire encompassing background, forced-choice and open questions. Forced choice questions were analysed using descriptive statistics and open questions by framework analysis. Results Ninety-two (97% response rate) examiners completed the questionnaire of whom 85% had previous use of tablet computers. Ninety per cent felt checklist mark allocation was 'very/quite easy', while approximately half considered recording 'free-type' comments was 'easy/very easy'. Greater overall efficiency of marking and resource savings were considered the main advantages of tablet computers, while concerns relating to technological failure and ability to record free type comments were raised. Discussion In a context where examiners were familiar with tablet computers, they were preferred to paper checklists, although concerns were raised. This study adds to the limited literature underpinning the use of electronic devices as acceptable tools in objective structured clinical examinations.

  9. Computer vision syndrome: a review of ocular causes and potential treatments.

    PubMed

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  10. A survey of the computer literacy of undergraduate dental students at a University Dental School in Ireland during the academic year 1997-98.

    PubMed

    Ray, N J; Hannigan, A

    1999-05-01

    As dental practice management becomes more computer-based, the efficient functioning of the dentist will become dependent on adequate computer literacy. A survey has been carried out into the computer literacy of a cohort of 140 undergraduate dental students at a University Dental School in Ireland (years 1-5), in the academic year 1997-98. Aspects investigated by anonymous questionnaire were: (1) keyboard skills; (2) computer skills; (3) access to computer facilities; (4) software competencies and (5) use of medical library computer facilities. The students are relatively unfamiliar with basic computer hardware and software: 51.1% considered their expertise with computers as "poor"; 34.3% had taken a formal typewriting or computer keyboarding course; 7.9% had taken a formal computer course at university level and 67.2% were without access to computer facilities at their term-time residences. A majority of students had never used word-processing, spreadsheet, or graphics programs. Programs relating to "informatics" were more popular, such as literature searching, accessing the Internet and the use of e-mail, which represent the major uses of the computers in the medical library. The lack of experience with computers may be addressed by including suitable computing courses at the secondary level (age 13-18 years) and/or tertiary level (FE/HE) education programmes. Such training may promote greater use of generic software, particularly in the library, with a more electronic-based approach to data handling.

  11. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sampson, Andrew; Le Yi; Williamson, Jeffrey F.

    2012-02-15

    Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 sec were required to achieve an average statistical uncertainty of 2% within the prostate (1 × 1 × 1 mm³) and breast (0.67 × 0.67 × 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional average 38-60 fold improvement in average efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range generally. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
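
    A toy illustration of why correlated sampling reduces the variance of the dose-difference tally: the same random histories (common random numbers) are scored in both geometries, so their difference fluctuates far less than the difference of two independent runs. The absorption probabilities and scoring below are purely illustrative, not the PTRAN implementation.

      import numpy as np

      def dose_difference_correlated(n_hist, p_homo=0.10, p_hetero=0.12, seed=5):
          """Toy correlated-sampling estimate of the dose difference between a
          heterogeneous and a homogeneous slab, using shared random numbers."""
          rng = np.random.default_rng(seed)
          xi = rng.random(n_hist)                        # shared random numbers per history
          d_homo = (xi < p_homo).astype(float)           # deposit in homogeneous medium
          d_hetero = (xi < p_hetero).astype(float)       # same histories, perturbed geometry
          delta = d_hetero - d_homo
          return delta.mean(), delta.std(ddof=1) / np.sqrt(n_hist)

      # Compare with two *independent* runs of the same size.
      corr_mean, corr_err = dose_difference_correlated(100_000)
      rng = np.random.default_rng(6)
      indep = (rng.random(100_000) < 0.12).astype(float) - (rng.random(100_000) < 0.10).astype(float)
      print(corr_mean, corr_err, indep.mean(), indep.std(ddof=1) / np.sqrt(100_000))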

  12. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009), doi:10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  13. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.
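
    In formulas, the properties stated qualitatively in both abstracts are usually written as follows (a standard summary of the well-tempered ensemble, with γ = (T + ΔT)/T the bias factor; the exact relations are not restated in the abstracts themselves):

      P_{\mathrm{WTE}}(E) \propto \left[P(E)\right]^{1/\gamma}, \qquad
      \langle E \rangle_{\mathrm{WTE}} \approx \langle E \rangle, \qquad
      \langle (\delta E)^2 \rangle_{\mathrm{WTE}} \approx \gamma\, \langle (\delta E)^2 \rangle .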

  14. Efficient estimation of diffusion during dendritic solidification

    NASA Technical Reports Server (NTRS)

    Yeum, K. S.; Poirier, D. R.; Laxmanan, V.

    1989-01-01

    A very efficient finite difference method has been developed to estimate the solute redistribution during solidification with diffusion in the solid. This method is validated by comparing the computed results with the results of an analytical solution derived by Kobayashi (1988) for the assumptions of a constant diffusion coefficient, a constant equilibrium partition ratio, and a parabolic rate of the advancement of the solid/liquid interface. The flexibility of the method is demonstrated by applying it to the dendritic solidification of a Pb-15 wt pct Sn alloy, for which the equilibrium partition ratio and diffusion coefficient vary substantially during solidification. The fraction of eutectic at the end of solidification is also obtained by estimating, at greater resolution, the fraction solid at which the concentration of solute in the interdendritic liquid reaches the eutectic composition of the alloy.

  15. Solid-state Isotopic Power Source for Computer Memory Chips

    NASA Technical Reports Server (NTRS)

    Brown, Paul M.

    1993-01-01

    Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10 year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radio isotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.

  16. Reducing Bottlenecks to Improve the Efficiency of the Lung Cancer Care Delivery Process: A Process Engineering Modeling Approach to Patient-Centered Care.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U

    2017-12-01

    The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery, without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process to determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (reducing them had more than 3 times the impact of reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance and decreasing the waiting time between CT scans and diagnostic biopsies were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering, for patients who receive surgery for lung cancer.
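
    As a toy illustration of the 'what-if' idea (not the study's model), the sketch below compares mean waiting time for a single care step, such as detection to biopsy, under two service capacities; the arrival and service rates are invented placeholders.

    ```python
    import random

    # Single-server queue: mean wait for one care step under a given capacity.
    def mean_wait(arrival_rate, service_rate, n_patients=20000, seed=0):
        rng = random.Random(seed)
        t = server_free = total_wait = 0.0
        for _ in range(n_patients):
            t += rng.expovariate(arrival_rate)      # next referral arrives
            start = max(t, server_free)             # waits if the slot is busy
            total_wait += start - t
            server_free = start + rng.expovariate(service_rate)
        return total_wait / n_patients

    # e.g. compare mean_wait(0.5, 0.55) with mean_wait(0.5, 0.8) to gauge the
    # payoff of adding biopsy capacity at a bottleneck step.
    ```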

  17. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps and the new Data Vortex network topology employing this technology can connect tens of thousands of ports providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  18. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.

  19. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  20. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.

  1. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10⁻² Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps-typically used in fine grained water diffusion experiments-we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
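
    A sketch of the linear interframe prediction with uniform quantization described above; the frame layout, bit depth and coordinate range are illustrative, and the entropy-coding stage applied to the integer residuals is omitted.

    ```python
    import numpy as np

    # Linear interframe predictor: extrapolate each frame from the two previous
    # reconstructed frames and quantize the residual. 'frames' is a list of
    # (n_atoms, 3) coordinate arrays.
    def encode(frames, bits=12, coord_range=100.0):
        scale = (2**bits - 1) / coord_range
        prev1 = prev2 = np.zeros_like(frames[0])
        residuals = []
        for frame in frames:
            pred = 2.0 * prev1 - prev2
            q = np.round((frame - pred) * scale).astype(np.int32)
            residuals.append(q)
            prev2, prev1 = prev1, pred + q / scale   # decoder-visible reconstruction
        return residuals, scale
    ```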

  2. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    PubMed

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
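
    The role of the block compressed row storage structure can be pictured with SciPy's BSR format (a stand-in for the paper's implementation): grouping the 3x3 nodal blocks of a finite-element matrix means that removing a vaporized element's contribution touches a few dense blocks rather than scattered scalars.

    ```python
    import numpy as np
    from scipy.sparse import bsr_matrix

    # Illustrative BCRS/BSR layout for a tiny 3-node system (9x9 matrix built
    # from 3x3 blocks); the block values are dummies, not a real stiffness matrix.
    blocks = np.ones((4, 3, 3))          # four nonzero 3x3 blocks
    indices = np.array([0, 2, 1, 2])     # block-column index of each block
    indptr = np.array([0, 2, 3, 4])      # block-row pointers (3 block rows)
    K = bsr_matrix((blocks, indices, indptr), shape=(9, 9))
    ```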

  3. Are participants in markets for water rights more efficient in the use of water than non-participants? A case study for Limarí Valley (Chile).

    PubMed

    Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon

    2016-06-01

    The need to increase water productivity in agriculture has been stressed as one of the most important factors to achieve greater agricultural productivity and sustainability. The main aim of this paper is to investigate whether there are differences in water use efficiency (WUE) between farmers who participate in water markets and farmers who do not participate in them. Moreover, the use of a non-radial data envelopment analysis model allows one to compute global efficiency (GE) and WUE, as well as the efficiency in the use of other inputs such as fertilizers, pesticides, energy, and labor. In a second stage, external factors that may affect GE and WUE are explored. The empirical application focuses on a sample of farmers located in Limarí Valley (Chile), where regulated permanent water rights (WR) markets for surface water have a long tradition. Results illustrate that WR sellers are the most efficient in the use of water, while non-traders are the farmers that present the lowest WUE. From a policy perspective, significant conclusions are drawn from the assessment of agricultural water productivity in the framework of water markets.

  4. Actions for productivity improvement in crew training

    NASA Technical Reports Server (NTRS)

    Miller, G. E.

    1985-01-01

    Improvement of the productivity of astronaut crew instructors in the Space Shuttle program and beyond is proposed. It is suggested that instructor certification plans should be established to shorten the time required for trainers to develop their skills and improve their ability to convey those skills. Members of the training cadre should be thoroughly cross trained in their tasks. This provides better understanding of the overall task and greater flexibility in instructor utilization. Improved facility access will give instructors the benefit of practical application experience. Former crews should be integrated into the training of upcoming crews to bridge some of the gap between simulated conditions and the real world. The information contained in lengthy and complex training manuals can be presented more clearly and efficiently as computer lessons. The illustration, animation and interactive capabilities of the computer combine to provide an effective means of explanation.

  5. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  6. Tractable Goal Selection with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2009-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
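
    A minimal sketch of one way such an online selection can work, using a greedy value-density heuristic over a single oversubscribed resource; this is illustrative only and not the flight algorithm itself.

    ```python
    # Greedy selection of goals (id, reward, cost) against one resource budget.
    # Re-running it whenever requests change keeps selection "just in time".
    def select_goals(goals, budget):
        chosen, used = [], 0.0
        for gid, reward, cost in sorted(goals, key=lambda g: g[1] / g[2], reverse=True):
            if used + cost <= budget:
                chosen.append(gid)
                used += cost
        return chosen

    # select_goals([("img1", 10, 4), ("img2", 6, 3), ("dump", 3, 5)], budget=7)
    # -> ["img1", "img2"]
    ```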

  7. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    DTIC Science & Technology

    2017-10-04

    Report: Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures. The project develops algorithms for scientific and geometric computing by exploiting the power and performance efficiency of heterogeneous shared memory architectures.

  8. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processor Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm by the CPU and GPU was benchmarked and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
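
    For reference, the FT-based translation estimate (phase correlation) that the study accelerates can be sketched in a few lines of NumPy; this CPU version returns whole-pixel shifts, whereas enlarging the images as described above yields sub-pixel resolution.

    ```python
    import numpy as np

    # Phase correlation: estimate the (row, col) shift of image b relative to a,
    # assuming b is a translated copy of a.
    def register_translation(a, b):
        F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
        corr = np.abs(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return [p - n if p > n // 2 else p for p, n in zip(peak, a.shape)]
    ```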

  9. Multi-mounted X-ray cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Wang, Jingzheng; Guo, Wei; Peng, Peng

    2018-04-01

    As a powerful nondestructive inspection technique, X-ray computed tomography (X-CT) has been widely applied to clinical diagnosis, industrial production and cutting-edge research. Imaging efficiency is currently one of the major obstacles for the applications of X-CT. In this paper, a multi-mounted three dimensional cone-beam X-CT (MM-CBCT) method is reported. It consists of a novel multi-mounted cone-beam scanning geometry and the corresponding three dimensional statistical iterative reconstruction algorithm. The scanning geometry is the most iconic part of the design and is significantly different from current CBCT systems. Permitting the cone-beam scanning of multiple objects simultaneously, the proposed approach has the potential to achieve an imaging efficiency orders of magnitude greater than that of conventional methods. Although multiple objects can also be bundled together and scanned simultaneously by conventional CBCT methods, this leads to increased penetration thickness and signal crosstalk. In contrast, MM-CBCT substantially avoids these problems. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed MM-CBCT prototype system. This technique will provide a possible solution for large-scale CT inspection.

  10. Sediment characteristics and sedimentation rates in Lake Michie, Durham County, North Carolina, 1990-92

    USGS Publications Warehouse

    Weaver, J.C.

    1994-01-01

    A reservoir sedimentation study was conducted at 508-acre Lake Michie, a municipal water-supply reservoir in northeastern Durham County, North Carolina, during 1990-92. The effects of sedimentation in Lake Michie were investigated, and current and historical rates of sedimentation were evaluated. Particle-size distributions of lake-bottom sediment indicate that, overall, Lake Michie is rich in silt and clay. Nearly all sand is deposited in the upstream region of the lake, and its percentage in the sediment decreases to less than 2 percent in the lower half of the lake. The average specific weight of lake-bottom sediment in Lake Michie is 73.6 pounds per cubic foot. The dry-weight percentage of total organic carbon in lake-bottom sediment ranges from 1.1 to 3.8 percent. Corresponding carbon-nitrogen ratios range from 8.6 to 17.6. Correlation of the total organic carbon percentages with carbon-nitrogen ratios indicates that plant and leaf debris are the primary sources of organic material in Lake Michie. Sedimentation rates were computed using comparisons of bathymetric volumes. Comparing the current and previous bathymetric volumes, the net amount of sediment deposited (trapped) in Lake Michie during 1926-92 is estimated to be about 2,541 acre-feet or slightly more than 20 percent of the original storage volume computed in 1935. Currently (1992), the average sedimentation rate is 38 acre-feet per year, down from 45.1 acre-feet per year in 1935. To confirm the evidence that sedimentation rates have decreased at Lake Michie since its construction in 1926, sediment accretion rates were computed using radionuclide profiles of lake-bottom sediment. Sediment accretion rates estimated from radiochemical analyses of cesium-137 and lead-210 radionuclides in the lake-bottom sediment indicate that rates were higher in the lake's early years prior to 1962. Estimated suspended-sediment yields for inflow and outflow sites during 1983-91 indicate a suspended-sediment trap efficiency of 89 percent. An overall trap efficiency for the period of 1983-91 was computed using the capacity-inflow ratio. The use of this ratio indicates that the trap efficiency for Lake Michie is 85 percent. However, the suspended-sediment trap efficiency indicates that the actual overall trap efficiency for Lake Michie was probably greater than 89 percent during this period.
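
    The suspended-sediment trap efficiency quoted above follows from a simple mass balance, sketched below with placeholder loads rather than the study's measured values.

    ```python
    # Trap efficiency = fraction of incoming suspended sediment retained by the lake.
    def trap_efficiency(inflow_load, outflow_load):
        return 1.0 - outflow_load / inflow_load

    # trap_efficiency(100.0, 11.0) -> 0.89, i.e. the 89 percent figure cited above
    ```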

  11. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, B. M.

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiencies and reliability, these new models may introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of these controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  12. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photo-multiplier tubes noise has enabled the use of the GLS method for reconstructing experimental data and showed a promise for better quantification of target in 3D optical imaging. Use of these new alternative forms becomes effective when the ratio of the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
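
    The computational saving rests on the matrix identity referenced above; the sketch below shows the two equivalent update forms for an under-determined step (m measurements, n parameters, m << n), using a generic Levenberg-Marquardt-style regularization in place of the full GLS weight matrices.

    ```python
    import numpy as np

    # Equivalent forms of a regularized least-squares update:
    #   parameter space: dx = (J^T J + lam*I_n)^-1 J^T r   (inverts n x n)
    #   data space:      dx = J^T (J J^T + lam*I_m)^-1 r   (inverts m x m, m << n)
    def update_data_space(J, r, lam):
        m = J.shape[0]
        return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

    def update_param_space(J, r, lam):
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

    # np.allclose(update_data_space(J, r, 0.1), update_param_space(J, r, 0.1)) -> True
    ```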

  13. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  14. Energy-efficient STDP-based learning circuits with memristor synapses

    NASA Astrophysics Data System (ADS)

    Wu, Xinyu; Saxena, Vishal; Campbell, Kristy A.

    2014-05-01

    It is now accepted that the traditional von Neumann architecture, with processor and memory separation, is ill suited to process parallel data streams which a mammalian brain can efficiently handle. Moreover, researchers now envision computing architectures which enable cognitive processing of massive amounts of data by identifying spatio-temporal relationships in real-time and solving complex pattern recognition problems. Memristor cross-point arrays, integrated with standard CMOS technology, are expected to result in massively parallel and low-power Neuromorphic computing architectures. Recently, significant progress has been made in spiking neural networks (SNN) which emulate data processing in the cortical brain. These architectures comprise a dense network of neurons and the synapses formed between the axons and dendrites. Further, unsupervised or supervised competitive learning schemes are being investigated for global training of the network. In contrast to a software implementation, hardware realization of these networks requires massive circuit overhead for addressing and individually updating network weights. Instead, we employ bio-inspired learning rules such as the spike-timing-dependent plasticity (STDP) to efficiently update the network weights locally. To realize SNNs on a chip, we propose to densely integrate mixed-signal integrate-and-fire neurons (IFNs) and cross-point arrays of memristors in the back-end-of-the-line (BEOL) of CMOS chips. Novel IFN circuits have been designed to drive memristive synapses in parallel while maintaining overall power efficiency (<1 pJ/spike/synapse), even at spike rates greater than 10 MHz. We present circuit design details and simulation results of the IFN with memristor synapses, its response to incoming spike trains and STDP learning characterization.
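
    For orientation, a generic pair-based STDP weight update with exponential timing windows is sketched below; the amplitudes and time constant are illustrative placeholders, not the memristor device model used in the paper.

    ```python
    import math

    # Pair-based STDP: weight change as a function of spike-time difference
    # dt = t_post - t_pre (seconds). Pre-before-post potentiates, the reverse
    # order depresses.
    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20e-3):
        if dt >= 0.0:
            return a_plus * math.exp(-dt / tau)
        return -a_minus * math.exp(dt / tau)
    ```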

  15. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. With the greater business benefits, increased availability and other online business advantages come major challenges of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods are less robust and imperceptible, and their computational complexity does not suit low-power devices. In this paper, we have proposed a new method to address the problem of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility, and it is more computationally efficient in practice than previous approaches. Hence our method can easily be applied on low-power devices.

  16. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
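
    A compact sketch of the basic Newton iteration for the unitary polar factor of a square, full-rank matrix; the scaling, the preliminary complete orthogonal decomposition for rectangular or rank-deficient input, and the hybrid switch to the multiplication-rich iteration described above are all omitted.

    ```python
    import numpy as np

    # Newton iteration X_{k+1} = (X_k + X_k^{-*}) / 2 converging to the unitary
    # polar factor U of A; then A = U H with H Hermitian positive semi-definite.
    def polar_newton(A, tol=1e-12, max_iter=100):
        X = np.array(A, dtype=complex)
        for _ in range(max_iter):
            X_next = 0.5 * (X + np.linalg.inv(X).conj().T)
            if np.linalg.norm(X_next - X, 'fro') <= tol * np.linalg.norm(X_next, 'fro'):
                X = X_next
                break
            X = X_next
        H = X.conj().T @ A
        return X, 0.5 * (H + H.conj().T)   # symmetrize H against round-off
    ```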

  17. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84- 86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.

  18. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    PubMed

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldcamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scans were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and Reduce function to aggregate those partial backprojection into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10(-7). Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT∕CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.

  19. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldcamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scans were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and Reduce function to aggregate those partial backprojection into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10−7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842
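
    The Map/Reduce decomposition described in these two records can be pictured with ordinary Python map and reduce: each map task backprojects a subset of projections into a partial volume, and the reduce step sums the partial volumes. The filtered backprojection itself is only stubbed here; Hadoop supplies the distribution and fault tolerance in the actual system.

    ```python
    from functools import reduce
    import numpy as np

    def backproject(projection_subset, shape=(64, 64, 64)):
        vol = np.zeros(shape)
        # stand-in: filter each projection and accumulate it into vol (omitted)
        return vol

    def reconstruct(projection_subsets):
        partials = map(backproject, projection_subsets)   # "Map": one task per subset
        return reduce(np.add, partials)                   # "Reduce": sum partial volumes
    ```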

  20. Novel computer vision analysis of nasal shape in children with unilateral cleft lip.

    PubMed

    Mercan, Ezgi; Morrison, Clinton S; Stuhaug, Erik; Shapiro, Linda G; Tse, Raymond W

    2018-01-01

    Optimization of treatment of the unilateral cleft lip nasal deformity (uCLND) is hampered by lack of objective means to assess initial severity and changes produced by treatment and growth. The purpose of this study was to develop automated 3D image analysis specific to the uCLND; assess the correlation of these measures to esthetic appraisal; measure changes that occur with treatment and differences amongst cleft types. Dorsum Deviation, Tip-Alar Volume Ratio, Alar-Cheek Definition, and Columellar Angle were assessed using computer-vision techniques. Subjects included infants before and after primary cleft lip repair (N = 50) and children aged 8-10 years with previous cleft lip (N = 50). Two expert surgeons ranked subjects according to esthetic nose appearance. Computer-based measurements strongly correlated with rankings of infants pre-repair (r = 0.8, 0.75, 0.41 and 0.54 for Dorsum Deviation, Tip-Alar Volume Ratio, Alar-Cheek Definition, and Columellar Angle, p < 0.01) while all measurements except Alar-Cheek Definition correlated moderately with rankings of older children post-repair (r ∼ 0.35, p < 0.01). Measurements were worse with greater severity of cleft type but improved following initial repair. Abnormal Dorsum Deviation and Columellar Angle persisted after surgery and were more severe with greater cleft type. Four fully-automated measures were developed that are clinically relevant, agree with expert evaluations and can be followed through initial surgery and in older children. Computer vision analysis techniques can quantify the nasal deformity at different stages, offering efficient and standardized tools for large studies and data-driven conclusions. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    PubMed Central

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356

  2. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    PubMed

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  3. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ˜15% for an estimated reduction in computational runtime of ˜95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
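
    A rough sketch of the surrogate construction using scikit-learn, with placeholder data; in the study the inputs would be delamination parameters and the outputs the chirplet coefficients extracted from simulated B-scans.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)
    X = rng.random((40, 2))   # e.g. delamination depth and in-plane position (dummy)
    Y = rng.random((40, 6))   # chirplet parameters of each simulated B-scan (dummy)
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, Y)
    y_pred, y_std = gp.predict([[0.3, 0.7]], return_std=True)
    ```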

  4. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10⁻⁶, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
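
    A toy Monte Carlo in the weakest-link spirit described above: sample initial crack sizes, take the adjacent pair with the largest combined length as a proxy for the first link-up, and grow only that pair against the cycle budget. The size distribution and growth law below are placeholders, not the aluminum data or fatigue model of the study.

    ```python
    import numpy as np

    # Probability that the first adjacent link-up occurs within a cycle budget.
    def prob_failure(n_cracks, pitch, cycle_budget, n_samples=100000, seed=1):
        rng = np.random.default_rng(seed)
        failures = 0
        for _ in range(n_samples):
            a = rng.lognormal(mean=-3.0, sigma=0.5, size=n_cracks)  # initial sizes
            worst_pair = np.max(a[:-1] + a[1:])                     # weakest-link proxy
            cycles_to_link = 1.0e4 * (pitch / worst_pair) ** 3      # placeholder law
            failures += cycles_to_link <= cycle_budget
        return failures / n_samples
    ```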

  5. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance so that execution of the commands can be stopped before damage is done. The method involves utilization of both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient (with respect to both computation time and computer memory), relative to the representations used in most prior methods. The present method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. In this method, objects are represented and collisions detected by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs). As the name of this technique indicates, an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended in this method to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected for incorporation into the present method because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus computational speed) on the other hand.
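
    For reference, the standard separating-axis overlap test for two oriented bounding boxes that the method builds on is sketched below (centers c, half-extents e, rotation matrices R whose columns are the box axes); the oriented-bounding-prism extension for cylindrical links simply wraps several such boxes, and, consistent with the claim above, no divisions are needed.

    ```python
    import numpy as np

    # Separating-axis test for two OBBs; returns True if the boxes overlap.
    def obb_overlap(c1, e1, R1, c2, e2, R2, eps=1e-12):
        axes = [R1[:, i] for i in range(3)] + [R2[:, j] for j in range(3)]
        axes += [np.cross(R1[:, i], R2[:, j]) for i in range(3) for j in range(3)]
        d = c2 - c1
        for L in axes:
            if L @ L < eps:                   # near-parallel edges give a null axis
                continue
            r1 = np.sum(e1 * np.abs(L @ R1))  # projection radius of box 1 onto L
            r2 = np.sum(e2 * np.abs(L @ R2))
            if abs(L @ d) > r1 + r2:
                return False                  # separating axis found
        return True
    ```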

  6. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, the new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"

  7. Computer simulations show that Neanderthal facial morphology represents adaptation to cold and high energy demands, but not heavy biting.

    PubMed

    Wroe, Stephen; Parr, William C H; Ledogar, Justin A; Bourke, Jason; Evans, Samuel P; Fiorenza, Luca; Benazzi, Stefano; Hublin, Jean-Jacques; Stringer, Chris; Kullmer, Ottmar; Curry, Michael; Rae, Todd C; Yokley, Todd R

    2018-04-11

    Three adaptive hypotheses have been forwarded to explain the distinctive Neanderthal face: (i) an improved ability to accommodate high anterior bite forces, (ii) more effective conditioning of cold and/or dry air, and (iii) adaptation to facilitate greater ventilatory demands. We test these hypotheses using three-dimensional models of Neanderthals, modern humans, and a close outgroup (Homo heidelbergensis), applying finite-element analysis (FEA) and computational fluid dynamics (CFD). This is the most comprehensive application of either approach to date and the first to include both. FEA reveals few differences between H. heidelbergensis, modern humans, and Neanderthals in their capacities to sustain high anterior tooth loadings. CFD shows that the nasal cavities of Neanderthals and especially modern humans condition air more efficiently than does that of H. heidelbergensis, suggesting that both evolved to better withstand cold and/or dry climates than less derived Homo. We further find that Neanderthals could move considerably more air through the nasal pathway than could H. heidelbergensis or modern humans, consistent with the propositions that, relative to our outgroup Homo, Neanderthal facial morphology evolved to reflect improved capacities to better condition cold, dry air and to move greater air volumes in response to higher energetic requirements. © 2018 The Author(s).

  8. An efficient, self-orienting, vertical-array, sand trap

    NASA Astrophysics Data System (ADS)

    Hilton, Michael; Nickling, Bill; Wakes, Sarah; Sherman, Douglas; Konlechner, Teresa; Jermy, Mark; Geoghegan, Patrick

    2017-04-01

    There remains a need for an efficient, low-cost, portable, passive sand trap, which can provide estimates of vertical sand flux over topography and within vegetation and which self-orients into the wind. We present a design for a stacked vertical trap that has been modelled (computational fluid dynamics, CFD) and evaluated in the field and in the wind tunnel. The 'swinging' trap orients to within 10° of the flow in the wind tunnel at 8 m s-1, and more rapidly in the field, where natural variability in wind direction accelerates orientation. The CFD analysis indicates flow is steered into the trap during incident wind flow. The trap has a low profile and there is only a small decrease in mass flow rate for multiple traps, poles and rows of poles. The efficiency of the trap was evaluated against an isokinetic sampler and found to be greater than 95%. The centre pole is a key element of the design, minimally decreasing trap efficiency. Finally, field comparisons with the trap of Sherman et al. (2014) yielded comparable estimates of vertical sand flux. The trap described in this paper provides accurate estimates of sand transport in a wide range of field conditions.

  9. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  10. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  11. Testing the Wildlink activity-detection system on wolves and white-tailed deer

    USGS Publications Warehouse

    Kunkel, K.E.; Chapman, R.C.; Mech, L.D.; Gese, E.M.

    1991-01-01

    We tested the reliability and predictive capabilities of the activity meter in the new Wildlink Data Acquisition and Recapture System by comparing activity counts with concurrent observations of captive wolf (Canis lupus) and free-ranging white-tailed deer (Odocoileus virginianus) activity. The Wildlink system stores activity data in a computer within a radio collar with which a biologist can communicate. Three levels of activity could be detected. The Wildlink system provided greater activity discrimination and was more reliable, adaptable, and efficient and was easier to use than conventional telemetry activity systems. The Wildlink system could be highly useful for determining wildlife energy budgets.

  12. Goal Selection for Embedded Systems with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2010-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.

  13. Body frame close coupling wave packet approach to gas phase atom-rigid rotor inelastic collisions

    NASA Technical Reports Server (NTRS)

    Sun, Y.; Judson, R. S.; Kouri, D. J.

    1989-01-01

    The close coupling wave packet (CCWP) method is formulated in a body-fixed representation for atom-rigid rotor inelastic scattering. For J greater than j-max (where J is the total angular momentum and j is the rotational quantum number), the computational cost of propagating the coupled channel wave packets in the body frame is shown to scale approximately as N^(3/2), where N is the total number of channels. For large numbers of channels, this will be much more efficient than the space frame CCWP method previously developed which scales approximately as N^2 under the same conditions.
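
    As a rough illustration of what these scalings imply (the channel counts below are arbitrary examples, not taken from the study), the ratio N^2 / N^(3/2) = sqrt(N) grows quickly with the number of channels:

```python
# Illustrative only: compare the quoted cost scalings of the body-frame CCWP
# method (~ N^(3/2)) and the space-frame method (~ N^2); their ratio is sqrt(N).
for n_channels in (100, 1_000, 10_000):
    body_frame = n_channels ** 1.5
    space_frame = n_channels ** 2
    print(f"N = {n_channels:6d}:  N^(3/2) = {body_frame:10.3e}  "
          f"N^2 = {space_frame:10.3e}  ratio = {space_frame / body_frame:6.1f}x")
```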

  14. Pilot Task Profiles, Human Factors, And Image Realism

    NASA Astrophysics Data System (ADS)

    McCormick, Dennis

    1982-06-01

    Computer Image Generation (CIG) visual systems provide real time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.

  15. Onboard Run-Time Goal Selection for Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2010-01-01

    We describe an efficient, online goal selection algorithm for use onboard spacecraft and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.

  16. Unobtrusive measurement of daily computer use to detect mild cognitive impairment

    PubMed Central

    Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael

    2013-01-01

    Background Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison to cognitively intact volunteers. Methods Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer usage (number of days with use, mean daily usage and coefficient of variation of use) measured by remotely monitoring computer session start and end times. Results Over 230,000 computer sessions from 113 computer users (mean age, 85; 38 with MCI) were acquired during a mean of 36 months. In mixed effects models there was no difference in computer usage at baseline between MCI and intact participants controlling for age, sex, education, race and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (p=0.01), mean daily usage (~1% greater decrease/month; p=0.009) and an increase in day-to-day use variability (p=0.002). Conclusions Computer use change can be unobtrusively monitored and indicate individuals with MCI. With 79% of those 55–64 years old now online, this may be an ecologically valid and efficient approach to track subtle clinically meaningful change with aging. PMID:23688576
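
    The longitudinal comparison described above lends itself to a mixed-effects model with a group-by-time interaction. The sketch below is a hypothetical illustration of that general analysis on simulated data; all variable names, effect sizes, and the simulated decline are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: a mixed-effects model of daily computer use with a
# group-by-time interaction, fitted to simulated (not real) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_months = 40, 36
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_months),
    "month": np.tile(np.arange(n_months), n_subj),
    "mci": np.repeat(rng.integers(0, 2, n_subj), n_months),
    "age": np.repeat(rng.normal(85, 5, n_subj), n_months),
})
# Simulate a slightly steeper decline in daily use for the synthetic MCI group.
subj_effect = np.repeat(rng.normal(0, 10, n_subj), n_months)
df["daily_use_min"] = (60 + subj_effect
                       - 0.2 * df["month"]
                       - 0.4 * df["month"] * df["mci"]
                       + rng.normal(0, 5, len(df)))

# Random intercept per subject; fixed effects for month, group and their interaction.
model = smf.mixedlm("daily_use_min ~ month * mci + age", df, groups=df["subject"])
print(model.fit().summary())
```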

  17. Hybrid Grid Techniques for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.

    1996-01-01

    During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high quality mesh for such problems has often been reported as a pacing item. Hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured based procedures. Structured grids tend to be computationally efficient, and have high aspect ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient and it is difficult to resolve viscous layers. Recently hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated, and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.

  18. Large-scale molecular dynamics simulation of DNA: implementation and validation of the AMBER98 force field in LAMMPS.

    PubMed

    Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles

    2004-07-15

    Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.

  19. Micro-computed tomography in murine models of cerebral cavernous malformations as a paradigm for brain disease.

    PubMed

    Girard, Romuald; Zeineddine, Hussein A; Orsbon, Courtney; Tan, Huan; Moore, Thomas; Hobson, Nick; Shenkar, Robert; Lightle, Rhonda; Shi, Changbin; Fam, Maged D; Cao, Ying; Shen, Le; Neander, April I; Rorrer, Autumn; Gallione, Carol; Tang, Alan T; Kahn, Mark L; Marchuk, Douglas A; Luo, Zhe-Xi; Awad, Issam A

    2016-09-15

    Cerebral cavernous malformations (CCMs) are hemorrhagic brain lesions, where murine models allow major mechanistic discoveries, ushering genetic manipulations and preclinical assessment of therapies. Histology for lesion counting and morphometry is essential yet tedious and time consuming. We herein describe the application and validations of X-ray micro-computed tomography (micro-CT), a non-destructive technique allowing three-dimensional CCM lesion count and volumetric measurements, in transgenic murine brains. We hereby describe a new contrast soaking technique not previously applied to murine models of CCM disease. Volumetric segmentation and image processing paradigm allowed for histologic correlations and quantitative validations not previously reported with the micro-CT technique in brain vascular disease. Twenty-two hyper-dense areas on micro-CT images, identified as CCM lesions, were matched by histology. The inter-rater reliability analysis showed strong consistency in the CCM lesion identification and staging (K=0.89, p<0.0001) between the two techniques. Micro-CT revealed a 29% greater CCM lesion detection efficiency, and 80% improved time efficiency. Serial integrated lesional area by histology showed a strong positive correlation with micro-CT estimated volume (r²=0.84, p<0.0001). Micro-CT allows high throughput assessment of lesion count and volume in pre-clinical murine models of CCM. This approach complements histology with improved accuracy and efficiency, and can be applied for lesion burden assessment in other brain diseases. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  1. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Towards a Coupled Vortex Particle and Acoustic Boundary Element Solver to Predict the Noise Production of Bio-Inspired Propulsion

    NASA Astrophysics Data System (ADS)

    Wagenhoffer, Nathan; Moored, Keith; Jaworski, Justin

    2016-11-01

    The design of quiet and efficient bio-inspired propulsive concepts requires a rapid, unified computational framework that integrates the coupled fluid dynamics with the noise generation. Such a framework is developed where the fluid motion is modeled with a two-dimensional unsteady boundary element method that includes a vortex-particle wake. The unsteady surface forces from the potential flow solver are then passed to an acoustic boundary element solver to predict the radiated sound in low-Mach-number flows. The use of the boundary element method for both the hydrodynamic and acoustic solvers permits dramatic computational acceleration by application of the fast multipole method. The reduced order of calculations due to the fast multipole method allows for greater spatial resolution of the vortical wake per unit of computational time. The coupled flow-acoustic solver is validated against canonical vortex-sound problems. The capability of the coupled solver is demonstrated by analyzing the performance and noise production of an isolated bio-inspired swimmer and of tandem swimmers.

  3. Ceramic matrix composite behavior -- Computational simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamis, C.C.; Murthy, P.L.N.; Mital, S.K.

    Development of analytical modeling and computational capabilities for the prediction of high temperature ceramic matrix composite behavior has been an ongoing research activity at NASA-Lewis Research Center. These research activities have resulted in the development of micromechanics based methodologies to evaluate different aspects of ceramic matrix composite behavior. The basis of the approach is micromechanics together with a unique fiber substructuring concept. In this new concept the conventional unit cell (the smallest representative volume element of the composite) of micromechanics approach has been modified by substructuring the unit cell into several slices and developing the micromechanics based equations at the slice level. Main advantage of this technique is that it can provide a much greater detail in the response of composite behavior as compared to a conventional micromechanics based analysis and still maintains a very high computational efficiency. This methodology has recently been extended to model plain weave ceramic composites. The objective of the present paper is to describe the important features of the modeling and simulation and illustrate with select examples of laminated as well as woven composites.

  4. A computable expression of closure to efficient causation.

    PubMed

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-07

    In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.

  5. Home Media and Children’s Achievement and Behavior

    PubMed Central

    Hofferth, Sandra L.

    2010-01-01

    This study provides a national picture of the time American 6–12 year olds spent playing video games, using the computer, and watching television at home in 1997 and 2003 and the association of early use with their achievement and behavior as adolescents. Girls benefited from computers more than boys and Black children’s achievement benefited more from greater computer use than did that of White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls and Black boys, but not White boys. Greater computer play was also associated with a lower risk of becoming socially isolated among girls. Computer use does not crowd out positive learning-related activities, whereas video game playing does. Consequently, increased video game play had both positive and negative associations with the achievement of girls but not boys. For boys, increased video game play was linked to increased aggressive behavior problems. PMID:20840243

  6. Evaluating biomechanics of user-selected sitting and standing computer workstation.

    PubMed

    Lin, Michael Y; Barbir, Ana; Dennerlein, Jack T

    2017-11-01

    A standing computer workstation has now become a popular modern workplace intervention to reduce sedentary behavior at work. However, users' interaction with a standing computer workstation, and how it differs from a sitting workstation, needs to be understood to assist in developing recommendations for use and setup. The study compared the differences in upper extremity posture and muscle activity between user-selected sitting and standing workstation setups. Twenty participants (10 females, 10 males) volunteered for the study. 3-D posture, surface electromyography, and user-reported discomfort were measured while completing simulated tasks with each participant's self-selected workstation setups. The sitting computer workstation was associated with more non-neutral shoulder postures and greater shoulder muscle activity, while the standing computer workstation induced greater wrist adduction angle and greater extensor carpi radialis muscle activity. The sitting computer workstation was also associated with greater shoulder abduction postural variation (90th-10th percentile), while the standing computer workstation was associated with greater variation for shoulder rotation and wrist extension. Users reported similar overall discomfort levels within the first 10 min of work but had more than twice as much discomfort while standing than sitting after 45 min, with most discomfort reported in the low back for standing and the shoulder for sitting. These measures provide insight into users' different interactions with sitting and standing workstations; alternating between the two configurations in short bouts may be a way of changing the loading pattern on the upper extremity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Statutory and Regulatory Barriers to Greater Efficiencies in the Arizona University System.

    ERIC Educational Resources Information Center

    Johnson, Edward

    One of the working papers in the final report of the Arizona Board of Regents' Task Force on Excellence, Efficiency and Competitiveness, this document organizes the responses of Arizona's universities to questions on statutory and regulatory barriers to greater efficiency. Each statute, regulation, or policy is noted with commentary and…

  8. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  9. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.

  10. Evaluation of the discrete vortex wake cross flow model using vector computers. Part 1: Theory and application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The current program had the objective to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high fineness ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms when vector software can not efficiently be used. An efficient program was written and substantial savings achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.

  11. Final Technical Report: "Achieving Regional Energy Efficiency Potential in the Southeast”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahoney, Mandy

    The overall objective of this award was to facilitate sharing of DOE resources and best practices as well as provide technical assistance to key stakeholders to support greater compliance with energy efficiency standards and increased energy savings. The outcomes of this award include greater awareness among key stakeholders on energy efficiency topics, increased deployment and utilization of DOE resources, and effective policies and programs to support energy efficiency in the Southeast.

  12. Achieving Energy Efficiency Through Real-Time Feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesse, Ronald J.

    2011-09-01

    Through the careful implementation of simple behavior change measures, opportunities exist to achieve strategic gains, including greater operational efficiencies, energy cost savings, greater tenant health and ensuing productivity and an improved brand value through sustainability messaging and achievement.

  13. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
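
    A minimal sketch of this scan-then-refine strategy is shown below. It is not the CCOMP code itself: the 2x2 matrix M(z), the scan resolution, and the thresholds are toy choices used only to illustrate the idea on a determinant (z^2 + 1) whose roots are known.

```python
# Toy sketch (not CCOMP): scan a rectangular complex domain for candidate points
# where the minimum-modulus eigenvalue of M(z) is small, then refine each
# candidate with a bound-constrained 2-D minimization.
import numpy as np
from scipy.optimize import minimize

def M(z):
    # Hypothetical determinantal system; det M(z) = z**2 + 1 vanishes at z = +/- 1j.
    return np.array([[z, 1.0], [-1.0, z]], dtype=complex)

def objective(xy):
    lam = np.linalg.eigvals(M(xy[0] + 1j * xy[1]))
    return float(np.min(np.abs(lam)) ** 2)          # squared -> smooth near a root

# Coarse scan of the domain [-2, 2] x [-2, 2] for candidate starting points.
xs, ys = np.linspace(-2, 2, 41), np.linspace(-2, 2, 41)
candidates = [(x, y) for y in ys for x in xs if objective((x, y)) < 0.04]

# Bound-constrained refinement of each candidate; keep converged, distinct roots.
roots = set()
for x0, y0 in candidates:
    res = minimize(objective, (x0, y0), method="L-BFGS-B",
                   bounds=[(-2, 2), (-2, 2)], options={"ftol": 1e-15})
    if res.fun < 1e-10:
        roots.add(complex(round(res.x[0], 6), round(res.x[1], 6)))
print(sorted(roots, key=lambda z: (z.real, z.imag)))  # two refined roots near +/- 1j
```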

  14. Unobtrusive measurement of daily computer use to detect mild cognitive impairment.

    PubMed

    Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael

    2014-01-01

    Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison with cognitively intact volunteers. Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer use (number of days with use, mean daily use, and coefficient of variation of use) measured by remotely monitoring computer session start and end times. More than 230,000 computer sessions from 113 computer users (mean age, 85 years; 38 with MCI) were acquired during a mean of 36 months. In mixed-effects models, there was no difference in computer use at baseline between MCI and intact participants controlling for age, sex, education, race, and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (P = .01), mean daily use (∼1% greater decrease/month; P = .009), and an increase in day-to-day use variability (P = .002). Computer use change can be monitored unobtrusively and indicates individuals with MCI. With 79% of those 55 to 64 years old now online, this may be an ecologically valid and efficient approach to track subtle, clinically meaningful change with aging. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  15. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
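
    The frequency-space preconditioning idea can be sketched compactly. The toy below is not the authors' solver: the spatial operator, problem sizes, and the diagonal shift are invented for illustration. It preconditions a GMRES solve of a time-spectral-like system by inverting only the temporal coupling, which becomes diagonal after an FFT across the time instances.

```python
# Toy sketch: GMRES on a time-spectral-like system, preconditioned by inverting
# the temporal derivative (plus a scalar shift) in frequency space, where that
# factor is diagonal.  Sizes, the spatial matrix and the shift are arbitrary.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n_time, n_space, omega = 9, 50, 2.0             # time instances, spatial DOFs
k = np.fft.fftfreq(n_time, d=1.0 / n_time)      # integer harmonics 0, 1, ..., -1
S = diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n_space, n_space)).toarray()

def apply_A(u_flat):
    """Time-spectral derivative (via FFT in time) plus a toy spatial operator."""
    u = u_flat.reshape(n_time, n_space)
    dudt = np.fft.ifft(1j * k[:, None] * omega * np.fft.fft(u, axis=0), axis=0)
    return (dudt + u @ S.T).ravel()

def apply_M(v_flat):
    """Preconditioner: invert only the temporal factor, diagonal in frequency space."""
    v = v_flat.reshape(n_time, n_space)
    c = 3.0                                      # stand-in for the spatial diagonal
    w = np.fft.fft(v, axis=0) / (1j * k[:, None] * omega + c)
    return np.fft.ifft(w, axis=0).ravel()

N = n_time * n_space
A = LinearOperator((N, N), matvec=apply_A, dtype=complex)
M = LinearOperator((N, N), matvec=apply_M, dtype=complex)

b = np.random.default_rng(0).standard_normal(N).astype(complex)
x, info = gmres(A, b, M=M, restart=50, maxiter=200)
rel_res = np.linalg.norm(apply_A(x) - b) / np.linalg.norm(b)
print("gmres info:", info, " relative residual:", rel_res)
```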

  16. Developing the next generation of diverse computer scientists: the need for enhanced, intersectional computing identity theory

    NASA Astrophysics Data System (ADS)

    Rodriguez, Sarah L.; Lehman, Kathleen

    2017-10-01

    This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.

  17. Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level

    DOE PAGES

    Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.; ...

    2017-11-23

    Neuromorphic computing is non-von Neumann computer architecture for the post Moore's law era of computing. Since a main focus of the post Moore's law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively in this research. Here in this paper, we present a memristive neuromorphic system for improved power and area efficiency. Our particular mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy-efficient. The proposed system additionally includes synchronous digital long term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead.

  18. Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.

    Neuromorphic computing is non-von Neumann computer architecture for the post Moore's law era of computing. Since a main focus of the post Moore's law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively in this research. Here in this paper, we present a memristive neuromorphic system for improved power and area efficiency. Our particular mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy-efficient. The proposed system additionally includes synchronous digital long term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency in learning considering the power consumption and area overhead.

  19. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    NASA Astrophysics Data System (ADS)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can have important near-field and far-field optical responses. Undoubtedly, these optical properties can have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging field of nanoscience, known as nanophotonics, is highly interdisciplinary, requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, fast advancements in computing capabilities, such as parallel computing, have become a critical element for investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, as the design of these devices requires extensive memory and extremely long core hours. Thus, distributed computing platforms associated with parallel computing are required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses computing machines to analyze and solve otherwise intractable scientific challenges. In particular, parallel computing is a form of computation operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. In this dissertation, we report a series of new nanophotonic developments using advanced parallel computing techniques. The applications include structure optimization at the nanoscale, both to control the electromagnetic response of materials and to manipulate nanoscale structures for enhanced field concentration, enabling breakthroughs in imaging and sensing systems (chapters 3 and 4) and improving the spatial-temporal resolution of spectroscopies (chapter 5). We also report investigations of the confinement of optical-matter interactions in the quantum mechanical regime, where novel size-dependent properties enhance a wide range of technologies, from tunable and efficient light sources and detectors to other nanophotonic elements with enhanced functionality (chapters 6 and 7).

  20. Implications of recent new concepts on the future of mainstream laser processing

    NASA Astrophysics Data System (ADS)

    La Rocca, Aldo V.

    2000-07-01

    According to one of today's most widely accepted visions of the first viable realizations of the Computer Integrated Manufacturing Plant (CIMP), the manufacturing systems discussed here tend to be multiprocessing and to incorporate lasers in order to exploit the unique capacities of the laser as a processing tool. The present laser sources, while having long been more than sufficient, also inevitably tend toward new generations. These visions rest on the belief that the first realizations of the CIMP will most likely use flexible multiprocessing machines which, for flexibility requirements, grow into multi-station cells, then into aggregations of cells in islands, and finally into complete manufacturing centers. To constitute the CIMP, all participating elements must be readily amenable to Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM). Another basic requirement is that all elements constituting the CIMP possess the highest System Efficiency and Energy Efficiency, at the level of the single element and of its aggregations through every combination and operating level, up to that of the CIMP itself. Mastering CIMP design constitutes a new discipline that presents formidable but necessary tasks, of which the first examples were those related to the early flexible manufacturing system design programs. As concerns laser processing machines and their integration into manufacturing systems, care must be taken not to repeat the events that hindered their diffusion in the production field and kept it at a level far below expectations and their true potential. Those events stemmed from the confusion between System Efficiency and Energy Efficiency, which persisted for too long and is still common. This confusion arose at the level of introducing a single element into the combination of several elements constituting a linear arrangement such as a transfer production line, and it became greater, with graver consequences, in the case of arrangements possessing more than one degree of product routing, the arrangements which, as previously mentioned, evolved into Flexible Manufacturing Centers.

  1. Computer modeling of a two-junction, monolithic cascade solar cell

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.; Abbott, D.

    1979-01-01

    The theory and design criteria for monolithic, two-junction cascade solar cells are described. The departure from the conventional solar cell analytical method and the reasons for using the integral form of the continuity equations are briefly discussed. The results of design optimization are presented. The energy conversion efficiency that is predicted for the optimized structure is greater than 30% at 300 K, AMO and one sun. The analytical method predicts device performance characteristics as a function of temperature. The range is restricted to 300 to 600 K. While the analysis is capable of determining most of the physical processes occurring in each of the individual layers, only the more significant device performance characteristics are presented.

  2. Fourier transform for fermionic systems and the spectral tensor network.

    PubMed

    Ferris, Andrew J

    2014-07-04

    Leveraging the decomposability of the fast Fourier transform, I propose a new class of tensor network that is efficiently contractible and able to represent many-body systems with local entanglement that is greater than the area law. Translationally invariant systems of free fermions in arbitrary dimensions as well as 1D systems solved by the Jordan-Wigner transformation are shown to be exactly represented in this class. Further, it is proposed that these tensor networks be used as generic structures to variationally describe more complicated systems, such as interacting fermions. This class shares some similarities with the Evenbly-Vidal branching multiscale entanglement renormalization ansatz, but with some important differences and greatly reduced computational demands.

  3. STS-78 Flight Day 8

    NASA Technical Reports Server (NTRS)

    1996-01-01

    On this eighth day of the STS-78 mission, the flight crew, Cmdr. Terence T. Henricks, Pilot Kevin R. Kregel, Payload Cmdr. Susan J. Helms, Mission Specialists Richard M. Linnehan, Charles E. Brady, Jr., and Payload Specialists Jean-Jacques Favier, Ph.D. and Robert B. Thirsk, M.D., continue to conduct experiments primarily focusing on the effects of weightlessness on human physiology. Results from the studies of muscle activity, task performance, and sleep will help future mission planners organize crew schedules for greater efficiency and productivity. For a second consecutive day, Henricks, Kregel, Thirsk, and Favier continue to enter responses to a battery of problem-solving tasks on the Performance Assessment Work Station, a laptop computer.

  4. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
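
    For readers unfamiliar with the model structure being described, the toy simulation below generates data of the same general form: categorical (allele-like) observations at spatial locations, driven by a latent spatial random effect expressed in a reduced-rank basis. The kernel basis, knot count, and field values are invented for illustration; this is not the authors' model, prior, or sage-grouse data.

```python
# Toy simulation of a multinomial (allele) observation model with a latent
# spatial random effect in a reduced-rank (knot-based) basis.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_alleles, n_basis = 200, 4, 15

coords = rng.uniform(0, 100, size=(n_sites, 2))            # sampling locations
knots = rng.uniform(0, 100, size=(n_basis, 2))             # reduced-rank knots

# Gaussian kernel basis: a simple stand-in for a dimension-reduction basis.
d2 = ((coords[:, None, :] - knots[None, :, :]) ** 2).sum(-1)
B = np.exp(-d2 / (2 * 20.0 ** 2))

# One latent spatial field per allele (last allele used as the reference category).
alpha = rng.normal(0, 1.0, size=(n_basis, n_alleles - 1))
eta = np.column_stack([B @ alpha, np.zeros(n_sites)])       # linear predictors
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)    # softmax probabilities

# Multinomial draw: two allele copies per individual at each site.
alleles = np.array([rng.multinomial(2, p[i]) for i in range(n_sites)])
print(alleles[:5])
```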

  5. Computed Intranasal Spray Penetration: Comparisons Before and After Nasal Surgery

    PubMed Central

    Frank, Dennis O.; Kimbell, Julia S.; Cannon, Daniel; Rhee, John S.

    2012-01-01

    Background Quantitative methods for comparing intranasal drug delivery efficiencies pre- and postoperatively have not been fully utilized. The objective of this study is to use computational fluid dynamics techniques to evaluate aqueous nasal spray penetration efficiencies before and after surgical correction of intranasal anatomic deformities. Methods Ten three-dimensional models of the nasal cavities were created from pre- and postoperative computed tomography scans in five subjects. Spray simulations were conducted using a particle size distribution ranging from 10–110μm, a spray speed of 3m/s, plume angle of 68°, and with steady state, resting inspiratory airflow present. Two different nozzle positions were compared. Statistical analysis was conducted using Student T-test for matched pairs. Results On the obstructed side, posterior particle deposition after surgery increased by 118% and was statistically significant (p-value=0.036), while anterior particle deposition decreased by 13% and was also statistically significant (p-value=0.020). The fraction of particles that by-passed the airways either pre- or post-operatively was less than 5%. Posterior particle deposition differences between obstructed and contralateral sides of the airways were 113% and 30% for pre- and post-surgery, respectively. Results showed that nozzle positions can influence spray delivery. Conclusions Simulations predicted that surgical correction of nasal anatomic deformities can improve spray penetration to areas where medications can have greater effect. Particle deposition patterns between both sides of the airways are more evenly distributed after surgery. These findings suggest that correcting anatomic deformities may improve intranasal medication delivery. For enhanced particle penetration, patients with nasal deformities may explore different nozzle positions. PMID:22927179

  6. Geospatial Representation, Analysis and Computing Using Bandlimited Functions

    DTIC Science & Technology

    2010-02-19

    navigation of aircraft and missiles require detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many...efficient on today’s computers. Under this grant new, computationally efficient, localized representations of gravity have been developed and tested. As a...step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed

  7. Efficient Computational Prototyping of Mixed Technology Microfluidic Components and Systems

    DTIC Science & Technology

    2002-08-01

    AFRL-IF-RS-TR-2002-190, Final Technical Report, August 2002. Authors: Narayan R. Aluru, Jacob White. ... Computer Aided Design (CAD) tools for microfluidic components and systems were developed in this effort. Innovative numerical methods and algorithms for mixed ...

  8. Enhanced Thermostability of Glucose Oxidase through Computer-Aided Molecular Design.

    PubMed

    Ning, Xiaoyan; Zhang, Yanli; Yuan, Tiantian; Li, Qingbin; Tian, Jian; Guan, Weishi; Liu, Bo; Zhang, Wei; Xu, Xinxin; Zhang, Yuhong

    2018-01-31

    Glucose oxidase (GOD, EC.1.1.3.4) specifically catalyzes the reaction of β-d-glucose to gluconic acid and hydrogen peroxide in the presence of oxygen, which has become widely used in the food industry, gluconic acid production and the feed industry. However, the poor thermostability of the current commercial GOD is a key limiting factor preventing its widespread application. In the present study, amino acids closely related to the thermostability of glucose oxidase from Penicillium notatum were predicted with a computer-aided molecular simulation analysis, and mutant libraries were established following a saturation mutagenesis strategy. Two mutants with significantly improved thermostabilities, S100A and D408W, were subsequently obtained. Their protein denaturing temperatures were enhanced by about 4.4 °C and 1.2 °C, respectively, compared with the wild-type enzyme. Treated at 55 °C for 3 h, the residual activities of the mutants were greater than 72%, while that of the wild-type enzyme was only 20%. The half-lives of S100A and D408W were 5.13- and 4.41-fold greater, respectively, than that of the wild-type enzyme at the same temperature. This work provides novel and efficient approaches for enhancing the thermostability of GOD by reducing the protein free unfolding energy or increasing the interaction of amino acids with the coenzyme.

  9. Enhanced Thermostability of Glucose Oxidase through Computer-Aided Molecular Design

    PubMed Central

    Ning, Xiaoyan; Zhang, Yanli; Yuan, Tiantian; Li, Qingbin; Tian, Jian; Guan, Weishi; Liu, Bo; Zhang, Wei; Xu, Xinxin

    2018-01-01

    Glucose oxidase (GOD, EC.1.1.3.4) specifically catalyzes the reaction of β-d-glucose to gluconic acid and hydrogen peroxide in the presence of oxygen, which has become widely used in the food industry, gluconic acid production and the feed industry. However, the poor thermostability of the current commercial GOD is a key limiting factor preventing its widespread application. In the present study, amino acids closely related to the thermostability of glucose oxidase from Penicillium notatum were predicted with a computer-aided molecular simulation analysis, and mutant libraries were established following a saturation mutagenesis strategy. Two mutants with significantly improved thermostabilities, S100A and D408W, were subsequently obtained. Their protein denaturing temperatures were enhanced by about 4.4 °C and 1.2 °C, respectively, compared with the wild-type enzyme. Treated at 55 °C for 3 h, the residual activities of the mutants were greater than 72%, while that of the wild-type enzyme was only 20%. The half-lives of S100A and D408W were 5.13- and 4.41-fold greater, respectively, than that of the wild-type enzyme at the same temperature. This work provides novel and efficient approaches for enhancing the thermostability of GOD by reducing the protein free unfolding energy or increasing the interaction of amino acids with the coenzyme. PMID:29385094

  10. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
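
    To make the "matrix-free" idea concrete, here is a small sketch of an augmented Lagrangian loop in that spirit: the constraint Jacobian is never formed, and only Jacobian-transpose-vector products are evaluated. The toy problem, penalty value, and first-order multiplier update are illustrative choices, not the algorithm developed in the thesis.

```python
# Sketch of a matrix-free augmented Lagrangian loop on a toy equality-constrained
# problem: minimize x1^2 + x2^2 subject to x1 + x2 = 1 (solution x = [0.5, 0.5]).
import numpy as np
from scipy.optimize import minimize

def f(x):        return x[0] ** 2 + x[1] ** 2
def grad_f(x):   return 2.0 * x
def c(x):        return np.array([x[0] + x[1] - 1.0])        # constraint c(x) = 0
def jac_T_vec(x, v):                                         # only J(x)^T @ v is needed
    return np.array([v[0], v[0]])

lam, rho, x = np.zeros(1), 10.0, np.array([0.0, 0.0])
for outer in range(20):
    def aug_lag(x):
        r = c(x)
        return f(x) - lam @ r + 0.5 * rho * (r @ r)
    def aug_lag_grad(x):
        r = c(x)
        return grad_f(x) + jac_T_vec(x, rho * r - lam)
    x = minimize(aug_lag, x, jac=aug_lag_grad, method="L-BFGS-B").x
    lam -= rho * c(x)                    # first-order multiplier update
    if np.linalg.norm(c(x)) < 1e-8:
        break
print(x, lam)   # expect roughly [0.5, 0.5] and a multiplier near 1.0
```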

  11. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  12. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
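
    The ingredients referred to above (a power-law 1/f^n process, its filter representation, and the covariance-based Gaussian likelihood that MLE maximizes) can be sketched compactly. The code below shows only the conventional construction; it does not implement the additive-filter reformulation or the speed-up described in the abstract, and the parameter values are arbitrary.

```python
# Sketch: power-law (1/f^n) noise via a fractional-integration filter, a
# white-plus-power-law covariance, and the Gaussian negative log-likelihood.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, toeplitz

def powerlaw_filter(n_obs, index):
    """Impulse response h_k for 1/f^index noise (h_0 = 1, Hosking recursion)."""
    h = np.ones(n_obs)
    for k in range(1, n_obs):
        h[k] = h[k - 1] * (k - 1 + index / 2.0) / k
    return h

def neg_log_likelihood(residuals, sigma_white, sigma_pl, index):
    n = len(residuals)
    T = np.tril(toeplitz(powerlaw_filter(n, index)))     # lower-triangular filter
    C = sigma_white ** 2 * np.eye(n) + sigma_pl ** 2 * (T @ T.T)
    cf = cho_factor(C)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    quad = residuals @ cho_solve(cf, residuals)
    return 0.5 * (logdet + quad + n * np.log(2 * np.pi))

rng = np.random.default_rng(2)
n = 500
T_true = np.tril(toeplitz(powerlaw_filter(n, 2.0)))      # index 2 = random walk
data = T_true @ rng.standard_normal(n) * 0.5 + rng.standard_normal(n) * 1.0
print(neg_log_likelihood(data, sigma_white=1.0, sigma_pl=0.5, index=2.0))
```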

  13. Exact and efficient simulation of concordant computation

    NASA Astrophysics Data System (ADS)

    Cable, Hugo; Browne, Daniel E.

    2015-11-01

    Concordant computation is a circuit-based model of quantum computation for mixed states that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.

  14. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
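
    The weight recursion at the heart of the Grünwald-Letnikov approximation can be sketched as below. Only the standard full-memory operator, whose cost the adaptive-memory method reduces, is shown; the adaptive bookkeeping itself is not reproduced, and the test function, order, and grid are assumptions for illustration.

      import math
      import numpy as np

      # Full-memory Grunwald-Letnikov fractional derivative: every previous
      # time point contributes, which is what makes the operator expensive.
      # Weights follow w[0] = 1, w[j] = w[j-1] * (1 - (alpha + 1) / j).

      def gl_weights(alpha, num):
          w = np.empty(num)
          w[0] = 1.0
          for j in range(1, num):
              w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
          return w

      def gl_derivative(f_vals, alpha, h):
          """GL derivative of order alpha at every point of a uniform grid."""
          n = len(f_vals)
          w = gl_weights(alpha, n)
          d = np.empty(n)
          for k in range(n):
              d[k] = np.dot(w[: k + 1], f_vals[k::-1]) / h**alpha
          return d

      t = np.linspace(0.0, 1.0, 101)
      h = t[1] - t[0]
      approx = gl_derivative(t**2, 0.5, h)            # D^0.5 of t^2
      exact = 2.0 * t**1.5 / math.gamma(2.5)          # known closed form
      print("error at t = 1:", abs(approx[-1] - exact[-1]))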

  15. Exploratory Study of Rural Physicians' Self-Directed Learning Experiences in a Digital Age.

    PubMed

    Curran, Vernon; Fleet, Lisa; Simmons, Karla; Ravalia, Mohamed; Snow, Pamela

    2016-01-01

    The nature and characteristics of self-directed learning (SDL) by physicians have been transformed with the growth in digital, social, and mobile technologies (DSMTs). Although these technologies present opportunities for greater "just-in-time" information seeking, there are issues for ensuring effective and efficient usage to complement one's repertoire for continuous learning. The purpose of this study was to explore the SDL experiences of rural physicians and the potential of DSMTs for supporting their continuing professional development (CPD). Semistructured interviews were conducted with a purposive sample of rural physicians. Interview data were transcribed verbatim and analyzed using NVivo analytical software and thematic analysis. Fourteen (N = 14) interviews were conducted and key thematic categories that emerged included key triggers, methods of undertaking SDL, barriers, and supports. Methods and resources for undertaking SDL have evolved considerably, and rural physicians report greater usage of mobile phones, tablets, and laptop computers for updating their knowledge and skills and in responding to patient questions/problems. Mobile technologies, and some social media, can serve as "triggers" in instigating SDL and a greater usage of DSMTs, particularly at "point of care," may result in higher levels of SDL. Social media is met with some scrutiny and ambivalence, mainly because of the "credibility" of information and risks associated with digital professionalism. DSMTs are growing in popularity as a key resource to support SDL for rural physicians. Mobile technologies are enabling greater "point-of-care" learning and more efficient information seeking. Effective use of DSMTs for SDL has implications for enhancing just-in-time learning and quality of care. Increasing use of DSMTs and their new effect on SDL raises the need for reflection on conceptualizations of the SDL process. The "digital age" has implications for our CPD credit systems and the roles of CPD providers in supporting SDL using DSMTs.

  16. Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack

    Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon-Phi ``Knights Landing'' architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.

  17. Integrative genomic mining for enzyme function to enable engineering of a non-natural biosynthetic pathway.

    PubMed

    Mak, Wai Shun; Tran, Stephen; Marcheschi, Ryan; Bertolani, Steve; Thompson, James; Baker, David; Liao, James C; Siegel, Justin B

    2015-11-24

    The ability to biosynthetically produce chemicals beyond what is commonly found in Nature requires the discovery of novel enzyme function. Here we utilize two approaches to discover enzymes that enable specific production of longer-chain (C5-C8) alcohols from sugar. The first approach combines bioinformatics and molecular modelling to mine sequence databases, resulting in a diverse panel of enzymes capable of catalysing the targeted reaction. The median catalytic efficiency of the computationally selected enzymes is 75-fold greater than that of a panel of naively selected homologues. This integrative genomic mining approach establishes a unique avenue for enzyme function discovery in the rapidly expanding sequence databases. The second approach uses computational enzyme design to reprogramme specificity. Both approaches result in enzymes with >100-fold increase in specificity for the targeted reaction. When enzymes from either approach are integrated in vivo, longer-chain alcohol production increases over 10-fold and represents >95% of the total alcohol products.

  18. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  19. Emerging MRI and metabolic neuroimaging techniques in mild traumatic brain injury.

    PubMed

    Lu, Liyan; Wei, Xiaoer; Li, Minghua; Li, Yuehua; Li, Wenbin

    2014-01-01

    Traumatic brain injury (TBI) is one of the leading causes of death worldwide, and mild traumatic brain injury (mTBI) is the most common traumatic injury. It is difficult to detect mTBI using routine neuroimaging. Advanced techniques with greater sensitivity and specificity for the diagnosis and treatment of mTBI are required. The aim of this review is to offer an overview of various emerging neuroimaging methodologies that can solve the clinical health problems associated with mTBI. Important findings and improvements in neuroimaging that hold value for better detection, characterization and monitoring of objective brain injuries in patients with mTBI are presented. Conventional computed tomography (CT) and magnetic resonance imaging (MRI) are not very efficient for visualizing mTBI. Moreover, techniques such as diffusion tensor imaging, magnetization transfer imaging, susceptibility-weighted imaging, functional MRI, single photon emission computed tomography, positron emission tomography and magnetic resonance spectroscopy imaging were found to be useful for mTBI imaging.

  20. Methane Sensitivity to Perturbations in Tropospheric Oxidizing Capacity

    NASA Technical Reports Server (NTRS)

    Yegorova, Elena; Duncan, Bryan

    2011-01-01

    Methane is an important greenhouse gas and has a 25 times greater global warming potential than CO2 on a century timescale. Yet there are considerable uncertainties in the magnitude and variability of its sources and sinks. The response of the coupled non-linear methane-carbon monoxide-hydroxyl radical (OH) system is important in determining the tropospheric oxidizing capacity. Using the NASA Goddard Earth Observing System, Version 5 (GEOS-5) chemistry climate model, we study the response of methane to perturbations of OH and wetland emissions. We use a computationally-efficient option of the GEOS-5 CCM that includes an OH parameterization that accurately represents OH predicted by a full chemical mechanism. The OH parameterization allows for studying non-linear CH4-CO-OH feedbacks in computationally fast sensitivity experiments. We compare our results with surface observations (GMD) and discuss the range of uncertainty in OH and wetland emissions required to bring modeling results in better agreement with surface observations. Our results can be used to improve projections of methane emissions and methane growth.

  1. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  2. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive form, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed the state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
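
    The auto-adaptive idea can be sketched as a genetic algorithm whose crossover and mutation rates respond to population diversity instead of being hand-calibrated. The adaptation rule, the sphere test objective, and all parameter values below are assumptions for illustration; this is not the watershed/BMP optimization itself.

      import numpy as np

      rng = np.random.default_rng(1)

      def fitness(pop):                       # maximize = minimize sphere function
          return -np.sum(pop**2, axis=1)

      def evolve(pop_size=60, n_genes=10, generations=200):
          pop = rng.uniform(-5.0, 5.0, (pop_size, n_genes))
          for _ in range(generations):
              fit = fitness(pop)
              # auto-adaptive rates: low diversity -> mutate more, cross over less
              diversity = pop.std(axis=0).mean() / 5.0
              p_mut = np.clip(0.2 * (1.0 - diversity), 0.01, 0.2)
              p_cross = np.clip(0.6 + 0.3 * diversity, 0.6, 0.9)

              # tournament selection
              i, j = rng.integers(pop_size, size=(2, pop_size))
              parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])

              # uniform crossover and Gaussian mutation with the adapted rates
              mates = parents[rng.permutation(pop_size)]
              do_cross = rng.random(pop_size) < p_cross
              gene_mask = rng.random(parents.shape) < 0.5
              children = np.where(do_cross[:, None] & gene_mask, mates, parents)
              mut_mask = rng.random(children.shape) < p_mut
              children = children + mut_mask * rng.normal(0.0, 0.3, children.shape)

              children[0] = pop[np.argmax(fit)]          # elitism
              pop = children
          return pop[np.argmax(fitness(pop))]

      best = evolve()
      print("best solution norm:", np.linalg.norm(best))  # far below the initial ~9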

  3. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  4. The neural processing of foreign-accented speech and its relationship to listener bias

    PubMed Central

    Yi, Han-Gyol; Smiljanic, Rajka; Chandrasekaran, Bharath

    2014-01-01

    Foreign-accented speech often presents a challenging listening condition. In addition to deviations from the target speech norms related to the inexperience of the nonnative speaker, listener characteristics may play a role in determining intelligibility levels. We have previously shown that an implicit visual bias for associating East Asian faces and foreignness predicts the listeners' perceptual ability to process Korean-accented English audiovisual speech (Yi et al., 2013). Here, we examine the neural mechanism underlying the influence of listener bias toward foreign faces on speech perception. In a functional magnetic resonance imaging (fMRI) study, native English speakers listened to native- and Korean-accented English sentences, with or without faces. The participants' Asian-foreign association was measured using an implicit association test (IAT), conducted outside the scanner. We found that foreign-accented speech evoked greater activity in the bilateral primary auditory cortices and the inferior frontal gyri, potentially reflecting greater computational demand. Higher IAT scores, indicating greater bias, were associated with increased BOLD response to foreign-accented speech with faces in the primary auditory cortex, the early node for spectrotemporal analysis. We conclude the following: (1) foreign-accented speech perception places greater demand on the neural systems underlying speech perception; (2) the face of the talker can exaggerate the perceived foreignness of foreign-accented speech; (3) implicit Asian-foreign association is associated with decreased neural efficiency in early spectrotemporal processing. PMID:25339883

  5. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  6. Biological disintegration of microalgae for biomethane recovery-prediction of biodegradability and computation of energy balance.

    PubMed

    Kavitha, S; Yukesh Kannah, R; Rajesh Banu, J; Kaliappan, S; Johnson, M

    2017-11-01

    The present study investigates the synergistic effect of combined bacterial disintegration on mixed microalgal biomass for energy efficient biomethane generation. The rate of microalgal biomass lysis, enhanced biodegradability, and methane generation were used as indices to assess efficiency of the disintegration. A maximal dissolvable organics release and algal biomass lysis rate of about 1100, 950 and 800 mg/L and 26, 23 and 18% was achieved in PA+C (protease, amylase + cellulase secreting bacteria), C (cellulase alone) and PA (protease, amylase) microalgal disintegration. During anaerobic fermentation, a greater production of volatile fatty acids (1000 mg/L) was noted in PA+C bacterial disintegration of microalgal biomass. PA+C bacterial disintegration improves the amenability of microalgal biomass to the biomethanation process, with a higher biodegradability of about 0.27 g COD/g COD. The energy balance analysis of this combined bacterial disintegration of microalgal biomass provides a surplus positive net energy (1.14 GJ/d) by compensating for the input energy requirements.

  7. Examples of Mesh and NURBS modelling for in vivo lung counting studies.

    PubMed

    Farah, Jad; Broggio, David; Franck, Didier

    2011-03-01

    Realistic calibration coefficients for in vivo counting installations are assessed using voxel phantoms and Monte Carlo calculations. However, voxel phantom construction is time consuming and their flexibility is extremely limited. This paper uses Mesh and non-uniform rational B-spline (NURBS) graphical formats, which offer greater flexibility, to optimise the calibration of in vivo counting installations. Two studies validating the use of such phantoms and involving geometry deformation and modelling were carried out to study the morphologic effect on lung counting efficiency. The created 3D models agreed with the reference ones, with volumetric differences of <5%. Moreover, it was found that counting efficiency varies with the inverse of lung volume and that the latter outweighs chest wall thickness in importance. Finally, a series of different thoracic female phantoms of various cup sizes, chest girths and internal organ volumes were created, starting from the International Commission on Radiological Protection (ICRP) adult female reference computational phantom, to give correction factors for the lung monitoring of female workers.

  8. High Performance Computing Meets Energy Efficiency - Continuum Magazine

    Science.gov Websites

    [Figure caption: Simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.] The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data

  9. Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan

    1996-01-01

    A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.
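
    The hierarchical organization can be illustrated with a minimal k-d tree over cell centroids whose depth grows where cells are dense and whose nodes carry bounding boxes usable as one-cell previews. The leaf capacity and random test data are assumptions; this is a sketch of the general idea, not the paper's renderer.

      import numpy as np

      class Node:
          def __init__(self, points, depth=0, leaf_size=8):
              self.count = len(points)
              self.lo = points.min(axis=0)       # bounding box, usable as a
              self.hi = points.max(axis=0)       # one-cell "preview" primitive
              self.left = self.right = None
              if self.count > leaf_size:
                  axis = depth % points.shape[1] # cycle x, y, z
                  order = np.argsort(points[:, axis])
                  mid = self.count // 2          # median split -> balanced work
                  self.left = Node(points[order[:mid]], depth + 1, leaf_size)
                  self.right = Node(points[order[mid:]], depth + 1, leaf_size)

      def query_region(node, lo, hi, out):
          """Collect leaves overlapping an axis-aligned region of interest."""
          if np.any(node.hi < lo) or np.any(node.lo > hi):
              return                             # prune: no overlap
          if node.left is None:
              out.append(node)
          else:
              query_region(node.left, lo, hi, out)
              query_region(node.right, lo, hi, out)

      centroids = np.random.default_rng(0).random((100_000, 3))
      tree = Node(centroids)
      hits = []
      query_region(tree, np.array([0.4] * 3), np.array([0.5] * 3), hits)
      print(len(hits), "leaves overlap the region")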

  10. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

    Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamic calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, two feet and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsions, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results indicate that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669

  11. Simulation and Experimental Study of Metal Organic Frameworks Used in Adsorption Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenks, Jeromy J.; Motkuri, Radha K.; TeGrotenhuis, Ward

    2016-10-11

    Metal-organic frameworks (MOFs) have attracted enormous interest over the past few years in energy storage and gas separation, yet there have been few reports for adsorption cooling applications. Adsorption cooling technology is an established alternative to mechanical vapor compression refrigeration systems and is an excellent alternative in industrial environments where waste heat is available. We explored the use of MOFs that have very high mass loading and relatively low heats of adsorption, with certain combinations of refrigerants to demonstrate a new type of highly efficient adsorption chiller. Computational fluid dynamics combined with a system level lumped-parameter model have been used to project size and performance for chillers with a cooling capacity ranging from a few kW to several thousand kW. These systems rely on stacked micro/mini-scale architectures to enhance heat and mass transfer. Recent computational studies of an adsorption chiller based on MOFs suggest that a thermally-driven coefficient of performance greater than one may be possible, which would represent a fundamental breakthrough in performance of adsorption chiller technology. Presented herein are computational and experimental results for hydrophilic and fluorophilic MOFs.

  12. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state of the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcomed development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full scale testing which has become impractical.
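
    A quick worked check of the figures quoted above, assuming the usual definition of parallel efficiency (speedup divided by processor count) and that the 70 percent figure is measured against a single CAPPS processor; the quoted 3.9x is against a different baseline, the sequential IBM 3090.

      # Parallel-performance arithmetic for the quoted numbers (a worked
      # example, not new data).
      n_proc, efficiency = 8, 0.70
      speedup_vs_one_capps = efficiency * n_proc        # = 5.6
      print("speedup vs one CAPPS processor:", speedup_vs_one_capps)
      print("quoted throughput vs sequential IBM 3090:", 3.9)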

  13. Correlating Abdominal Wall Thickness and Body Mass Index to Predict Usefulness of Right Lower Quadrant Ultrasound for Evaluation of Pediatric Appendicitis.

    PubMed

    Kwon, Jeannie K; Trexler, Nowice; Reisch, Joan; Pfeifer, Cory M; Ginos, Jason; Powell, Jerry Allen; Veltkamp, Jennifer; Anene, Alvin; Fernandes, Neil; Chen, Li Ern

    2017-11-06

    To inform selective and efficient use of appendix ultrasound (US) beyond adult parameters of body mass index (BMI) of less than 25 kg/m², we correlate abdominal wall thickness (AWT) with age and BMI to generate parameters for male and female children. Information presented in chart format can aid in the decision to utilize US for the evaluation of appendicitis. In this observational study, 1600 pediatric computed tomography scans of the abdomen and pelvis were analyzed to obtain measurements of AWT in the right lower quadrant. Measurements were correlated by patient age, BMI, and sex. Results and consensus-based recommendations were presented in chart format with color-coded groupings to allow for convenient referencing in the clinical setting. One thousand four hundred eighty-eight computed tomography scans and AWT measurements were included. All age groups with BMI of less than 25 kg/m² and all male and female groups younger than 6 years regardless of BMI had median AWT of less than 4 cm, resulting in a strong recommendation for US. Males older than 6 years and all female age groups with BMI of greater than 30 kg/m², and females older than 15 years with BMI of greater than 25 kg/m², had AWT of more than 5 cm, resulting in a low recommendation for US. While the BMI cutoff standard of less than 25 kg/m² for usefulness of appendix US holds in the adult population, our data expand the acceptable range in children younger than 9 years regardless of BMI and male children with BMI up to 30 kg/m². Female children younger than 15 years with a BMI up to 30 kg/m² may also be amenable to right lower quadrant US based on AWT. These parameters inform selective and efficient use of US for appendix evaluation.
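
    The kind of look-up the proposed chart supports can be sketched from only the AWT cut points quoted in the abstract (less than 4 cm favors ultrasound, more than 5 cm does not). The label for the in-between band is an assumption; the study's actual age/sex/BMI chart is not reproduced here.

      # Decision sketch based only on the abstract's stated AWT thresholds.
      def us_recommendation(awt_cm: float) -> str:
          if awt_cm < 4.0:
              return "strong recommendation for right lower quadrant US"
          if awt_cm > 5.0:
              return "low recommendation for US"
          return "intermediate -- consult the full age/sex/BMI chart"

      for awt in (3.2, 4.5, 5.6):
          print(awt, "cm ->", us_recommendation(awt))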

  14. Efficiency and robustness of different bus network designs

    NASA Astrophysics Data System (ADS)

    Pang, John Zhen Fu; Bin Othman, Nasri; Ng, Keng Meng; Monterola, Christopher

    2015-07-01

    We compare the efficiencies and robustness of four transport networks that can possibly be formed as a result of deliberate city planning. The networks are constructed based on their spatial resemblance to the cities of Manhattan (lattice), Sudan (random), Beijing (single-blob) and Greater Cairo (dual-blob). For a given type, a genetic algorithm is employed to obtain an optimized set of bus routes. We then simulate how commuters travel using Yen's algorithm for the k shortest paths on an adjacency matrix. The cost of traveling, such as walking between stations, is captured by varying the weighted sums of matrices. We also consider the number of transfers a posteriori by looking at the computed shortest paths. With consideration to distances via radius of gyration, redundancies of travel and number of bus transfers, our simulations indicate that random and dual-blob are more efficient than single-blob and lattice networks. Moreover, the dual-blob type is least robust when node removals are targeted but is most resilient when node failures are random. The work hopes to guide and provide technical perspectives on how geospatial distribution of a city limits the optimality of transport designs.
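
    The k-shortest-path step can be illustrated with networkx, whose shortest_simple_paths generator implements Yen's algorithm. The toy graph and edge weights below are assumptions, not the optimized route sets from the study.

      import itertools
      import networkx as nx

      # Toy weighted bus-stop graph; weights stand in for travel costs.
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("A", "B", 4.0), ("B", "C", 3.0), ("A", "D", 2.0),
          ("D", "C", 6.0), ("B", "D", 1.0), ("C", "E", 2.0), ("D", "E", 7.0),
      ])

      k = 3
      for path in itertools.islice(
              nx.shortest_simple_paths(G, "A", "E", weight="weight"), k):
          cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
          print(path, "cost:", cost)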

  15. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  16. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to the ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from ESM analysis, constraining the model parameters that control demographic processes increased the agreement considerably.
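
    A minimal sketch of calibration through emulation in the spirit of the workflow above, but not the PEcAn implementation: a cheap analytic function stands in for the expensive ecosystem model, and a Gaussian process trained on a small design of runs replaces it inside a Metropolis sampler, so the expensive code is called only 25 times instead of once per MCMC step. All functions and parameter values are assumptions.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def expensive_model(theta):               # stand-in for SIPNET/ED2
          return theta**3 + theta

      truth, obs_sigma = 0.6, 0.25
      obs = expensive_model(truth) + rng.normal(0.0, obs_sigma)

      # 1. small design of "expensive" runs, then fit the emulator
      design = np.linspace(-1.0, 2.0, 25)
      runs = expensive_model(design)
      gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(0.5),
                                    normalize_y=True).fit(design[:, None], runs)

      def log_like(theta):                      # likelihood uses the emulator only
          pred = gp.predict(np.array([[theta]]))[0]
          return -0.5 * ((pred - obs) / obs_sigma) ** 2

      # 2. Metropolis sampling against the emulated likelihood
      theta, cur, chain = 0.0, log_like(0.0), []
      for _ in range(5000):
          prop = theta + rng.normal(0.0, 0.2)
          if -1.0 <= prop <= 2.0:               # flat prior on the design range
              new = log_like(prop)
              if np.log(rng.random()) < new - cur:
                  theta, cur = prop, new
          chain.append(theta)

      print("posterior mean:", np.mean(chain[1000:]), " truth:", truth)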

  17. Predicting the disinfection efficiency range in chlorine contact tanks through a CFD-based approach.

    PubMed

    Angeloudis, Athanasios; Stoesser, Thorsten; Falconer, Roger A

    2014-09-01

    In this study three-dimensional computational fluid dynamics (CFD) models, incorporating appropriately selected kinetic models, were developed to simulate the processes of chlorine decay, pathogen inactivation and the formation of potentially carcinogenic by-products in disinfection contact tanks (CTs). Currently, the performance of CT facilities largely relies on Hydraulic Efficiency Indicators (HEIs), extracted from experimentally derived Residence Time Distribution (RTD) curves. This approach has more recently been aided with the application of CFD models, which can be calibrated to accurately predict RTDs, enabling the assessment of disinfection facilities prior to their construction. However, as long as it depends on HEIs, the CT design process does not directly take into consideration the disinfection biochemistry, which needs to be optimized. The main objective of this study is to address this issue by refining the modelling practices to simulate some reactive processes of interest, while acknowledging the uneven contact time stemming from the RTD curves. Initially, the hydraulic performances of seven CT design variations were reviewed through available experimental and computational data. In turn, the same design configurations were tested using numerical modelling techniques, featuring kinetic models that enable the quantification of disinfection operational parameters. Results highlight that the optimization of the hydrodynamic conditions facilitates a more uniform disinfectant contact time, which corresponds to greater levels of pathogen inactivation and a more controlled by-product accumulation.
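
    Generic kinetic building blocks of the kind coupled to such CFD models are first-order chlorine decay and a Chick-Watson inactivation law; the sketch below uses assumed rate constants and residence times purely for illustration and does not reproduce the paper's calibrated kinetics.

      import numpy as np

      k_decay = 0.05          # 1/min, first-order chlorine decay (assumed)
      k_cw = 0.6              # L/(mg*min), Chick-Watson constant (assumed)
      c0 = 1.5                # mg/L, applied chlorine dose (assumed)

      def chlorine(t):
          return c0 * np.exp(-k_decay * t)

      def log_inactivation(t):
          # Chick-Watson with decaying chlorine: k * integral of C(t) dt / ln(10)
          ct_integral = c0 * (1.0 - np.exp(-k_decay * t)) / k_decay
          return k_cw * ct_integral / np.log(10.0)

      # Uneven contact times, e.g. sampled from an RTD, give a range of
      # performance rather than a single value -- the point made above.
      for t in (8.0, 15.0, 22.0, 30.0):           # minutes
          print(f"t = {t:4.1f} min  residual Cl = {chlorine(t):.2f} mg/L  "
                f"log inactivation = {log_inactivation(t):.2f}")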

  18. Computing Bounds on Resource Levels for Flexible Plans

    NASA Technical Reports Server (NTRS)

    Muscvettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan. Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · maxflow(N)), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x and maxflow(N) is the measure of complexity (and thus of cost) of a maximum-flow algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable and value-ordering heuristics that exploit the properties of resource envelopes more directly.
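
    The maximum-flow building block referred to above can be exercised with a standard graph library; the toy network, its capacities, and the use of networkx are illustrative assumptions, and the construction of the real auxiliary 2N-node network from a flexible plan is not reproduced here.

      import networkx as nx

      # Toy directed network; capacities stand in for the production and
      # consumption quantities encoded by the auxiliary flow network.
      G = nx.DiGraph()
      G.add_edge("source", "a", capacity=3.0)
      G.add_edge("source", "b", capacity=2.0)
      G.add_edge("a", "c", capacity=2.0)
      G.add_edge("b", "c", capacity=2.0)
      G.add_edge("a", "sink", capacity=1.0)
      G.add_edge("c", "sink", capacity=4.0)

      flow_value, flow = nx.maximum_flow(G, "source", "sink")
      print("max flow:", flow_value)          # 5.0 on this toy network
      for u, targets in flow.items():
          for v, f in targets.items():
              if f > 0:
                  print(f"  {u} -> {v}: {f}")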

  19. The Brazilian Science Data Center (BSDC)

    NASA Astrophysics Data System (ADS)

    de Almeida, Ulisses Barres; Bodmann, Benno; Giommi, Paolo; Brandt, Carlos H.

    Astrophysics and Space Science are becoming increasingly characterised by what is now known as “big data”, the bottlenecks for progress partly shifting from data acquisition to “data mining”. Truth is that the amount and rate of data accumulation in many fields already surpasses the local capabilities for its processing and exploitation, and the efficient conversion of scientific data into knowledge is everywhere a challenge. The result is that, to a large extent, isolated data archives risk being progressively likened to “data graveyards”, where the information stored is not reused for scientific work. Responsible and efficient use of these large data-sets means democratising access and extracting the most science possible from them, which in turn signifies improving data accessibility and integration. Improving data processing capabilities is another important issue specific to researchers and computer scientists of each field. The project presented here wishes to exploit the enormous potential opened up by information technology at our age to advance a model for a science data center in astronomy which aims to expand data accessibility and integration to the largest possible extent and with the greatest efficiency for scientific and educational use. Greater access to data means more people producing and benefiting from information, whereas larger integration of related data from different origins means a greater research potential and increased scientific impact. The project of the BSDC is preoccupied, primarily, with providing tools and solutions for the Brazilian astronomical community. It nevertheless capitalizes on extensive international experience, and is developed in full cooperation with the ASI Science Data Center (ASDC), from the Italian Space Agency, granting it an essential ingredient of internationalisation. The BSDC is Virtual Observatory-compliant and part of the “Open Universe”, a global initiative built under the auspices of the United Nations.

  20. Energy Innovation Hubs: A Home for Scientific Collaboration

    ScienceCinema

    Chu, Steven

    2017-12-11

    Secretary Chu will host a live, streaming Q&A session with the directors of the Energy Innovation Hubs on Tuesday, March 6, at 2:15 p.m. EST. The directors will be available for questions regarding their teams' work and the future of American energy. Ask your questions in the comments below, or submit them on Facebook, Twitter (@energy), or send an e-mail to newmedia@hq.doe.gov, prior or during the live event. Dr. Hank Foley is the director of the Greater Philadelphia Innovation Cluster for Energy-Efficient Buildings, which is pioneering new data intensive techniques for designing and operating energy efficient buildings, including advanced computer modeling. Dr. Douglas Kothe is the director of the Consortium for Advanced Simulation of Light Water Reactors, which uses powerful supercomputers to create "virtual" reactors that will help improve the safety and performance of both existing and new nuclear reactors. Dr. Nathan Lewis is the director of the Joint Center for Artificial Photosynthesis, which focuses on how to produce fuels from sunlight, water, and carbon dioxide. The Energy Innovation Hubs are major integrated research centers, with researchers from many different institutions and technical backgrounds. Each hub is focused on a specific high priority goal, rapidly accelerating scientific discoveries and shortening the path from laboratory innovation to technological development and commercial deployment of critical energy technologies.

  1. Blocking of valinomycin-mediated bilayer membrane conductance by substituted benzimidazoles.

    PubMed Central

    Kuo, K H; Fukuto, T R; Miller, T A; Bruner, L J

    1976-01-01

    Valinomycin selectively transports alkali cations, e.g. potassium ions, across lipid bilayer membranes. The blocking of this carrier-mediated transport by four substituted benzimidazoles has been investigated. The compounds are 4,5,6,7-tetrachloro-2-trifluoromethylbenzimidazole (TTFB); 4,5,6,7-tetrachloro-2-methylbenzimidazole (TMB); 2-trifluoromethylbenzimidazole (TFB); and 2-methylbenzimidazole (MBM). Because of its low acidic dissociation constant (pKa = 5.04), the blocking efficiency of TTFB in both neutral and anionic forms in the aqueous phase could be studied. The compounds exhibit the blocking efficiency sequence TTFB- > TTFB0 > TMB0 > TFB0 > MBM0. The corresponding scale of decreasing lipophilicity, as determined by octanol/water partitioning, is TTFB0 > TMB0 > TTFB- > TFB0 > MBM0. Comparison of neutral species establishes a positive correlation of blocking efficiency with lipophilicity, with the latter being conferred primarily by chlorination of the benzenoid nucleus. Anionic TTFB, on the other hand, is the most effective blocking agent studied in spite of the fact that its dissociation in the aqueous phase markedly impedes its entry (presumably as a neutral species) into a bulk hydrocarbon phase. This observation suggests that the blocking of valinomycin-mediated bilayer membrane conductance takes place at the membrane/solution interface. PMID:1247644

  2. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-06

    We give a cheat-sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  3. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased computational speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
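
    One computationally heavy step in typical SR processing is tabulating structure functions of the high-frequency scalar record over many time lags (second, third and fifth orders are commonly used in SR analyses). The vectorized sketch below is a generic illustration with a synthetic 10 Hz record, not the authors' algorithms.

      import numpy as np

      def structure_functions(x, lags, orders=(2, 3, 5)):
          """S_n(r) = mean( (x[i] - x[i-r])^n ) for each lag r and order n."""
          out = np.empty((len(lags), len(orders)))
          for i, r in enumerate(lags):
              d = x[r:] - x[:-r]                 # all differences at lag r at once
              out[i] = [np.mean(d**n) for n in orders]
          return out

      fs = 10                                    # Hz, assumed sampling frequency
      rng = np.random.default_rng(0)
      temperature = np.cumsum(rng.standard_normal(30 * 60 * fs)) * 0.01  # 30 min
      lags = np.arange(1, 21)                    # 0.1 s to 2 s at 10 Hz
      S = structure_functions(temperature, lags)
      print(S.shape)                             # (20, 3): lags x orders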

  4. Energy Efficiency Challenges of 5G Small Cell Networks.

    PubMed

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-05-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple outputs (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of 5G small cell BS can approach 800 watt when the massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
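
    The Landauer principle invoked above bounds the energy of an irreversible bit operation from below by k_B T ln 2. The numbers below (room temperature, an assumed operation rate) only illustrate the scale of that bound and are not the paper's base-station computation-power model.

      import math

      k_B = 1.380649e-23            # J/K, Boltzmann constant
      T = 300.0                     # K, room temperature (assumption)
      e_bit = k_B * T * math.log(2.0)
      print(f"Landauer limit per bit: {e_bit:.2e} J")          # ~2.9e-21 J

      ops_per_s = 1e15              # assumed bit operations per second
      print(f"Minimum power at {ops_per_s:.0e} bit-ops/s: "
            f"{e_bit * ops_per_s:.2e} W")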

  5. Energy Efficiency Challenges of 5G Small Cell Networks

    PubMed Central

    Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang

    2017-01-01

    The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple outputs (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of 5G small cell BS can approach 800 watt when the massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670

  6. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue for successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are demanded. The available codes for computing photonic bands are also applied to PhC waveguides. They are reliable but not very efficient, which is even more pronounced for dispersive material. We present a method based on higher order finite elements with curved cells, which allows to solve for the band structure taking directly into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave-vectors k. For this method, we demonstrate the high efficiency for the computation of guided PhC waveguide modes by a convergence analysis.

  7. Analytical prediction with multidimensional computer programs and experimental verification of the performance, at a variety of operating conditions, of two traveling wave tubes with depressed collectors

    NASA Technical Reports Server (NTRS)

    Dayton, J. A., Jr.; Kosmahl, H. G.; Ramins, P.; Stankiewicz, N.

    1979-01-01

    Experimental and analytical results are compared for two high performance, octave bandwidth TWT's that use depressed collectors (MDC's) to improve the efficiency. The computations were carried out with advanced, multidimensional computer programs that are described here in detail. These programs model the electron beam as a series of either disks or rings of charge and follow their multidimensional trajectories from the RF input of the ideal TWT, through the slow wave structure, through the magnetic refocusing system, to their points of impact in the depressed collector. Traveling wave tube performance, collector efficiency, and collector current distribution were computed and the results compared with measurements for a number of TWT-MDC systems. Power conservation and correct accounting of TWT and collector losses were observed. For the TWT's operating at saturation, very good agreement was obtained between the computed and measured collector efficiencies. For a TWT operating 3 and 6 dB below saturation, excellent agreement between computed and measured collector efficiencies was obtained in some cases but only fair agreement in others. However, deviations can largely be explained by small differences in the computed and actual spent beam energy distributions. The analytical tools used here appear to be sufficiently refined to design efficient collectors for this class of TWT. However, for maximum efficiency, some experimental optimization (e.g., collector voltages and aperture sizes) will most likely be required.

  8. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  9. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source code and datasets is available at http://lab.malab.cn/soft/halign.

  10. The Lighter Side of Things: The Inevitable Convergence of the Internet of Things and Cybersecurity

    NASA Technical Reports Server (NTRS)

    Davis, Jerry

    2017-01-01

    By the year 2020 it is estimated that there will be more than 50 billion devices connected to the Internet. These devices not only include traditional electronics such as smartphones and other mobile compute devices, but also eEnabled technologies such as cars, airplanes and smartgrids. The IoT brings with it the promise of efficiency and greater remote management of industrial processes, and further opens the door to a world of vehicle autonomy. However, IoT-enabled technology will have to operate and contend in the contested domain of cyberspace. This discussion will touch on the impact that cybersecurity has on IoT and the people, processes and technology required to mitigate cyber risks.

  11. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
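
    A minimal sketch of a parallel hash-tree mode is shown below, using Python's standard-library SHA3-256 (a Keccak instance) rather than the paper's own prototype. Leaf hashes over fixed-size chunks are computed in parallel and then combined pairwise up to a single root; the chunk size and the leaf/node domain-separation bytes are illustrative choices.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha3_256(b"\x00" + chunk).digest()     # domain-separate leaves

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha3_256(b"\x01" + left + right).digest()

def hash_tree_root(data: bytes, chunk_size: int = 1 << 20) -> bytes:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    # Leaf hashes are independent, so they can be computed in parallel.
    with ProcessPoolExecutor() as pool:
        level = list(pool.map(leaf_hash, chunks))
    # Combine pairwise up to the root; an odd node is promoted unchanged.
    while len(level) > 1:
        nxt = [node_hash(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

if __name__ == "__main__":
    print(hash_tree_root(b"x" * (5 << 20)).hex())
```

    Unlike a sequential Merkle-Damgård pass, the leaf layer here parallelizes trivially, and the tree also allows incremental re-hashing when only one chunk of a large object changes.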

  12. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  13. Three-Dimensional Computational Model for Flow in an Over-Expanded Nozzle With Porous Surfaces

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, K. S.; Elmiligui, Alaa; Hunter, Craig A.; Massey, Steven J.

    2006-01-01

    A three-dimensional computational model is used to simulate flow in a non-axisymmetric, convergent-divergent nozzle incorporating porous cavities for shock-boundary layer interaction control. The nozzle has an expansion ratio (exit area/throat area) of 1.797 and a design nozzle pressure ratio of 8.78. Flow fields for the baseline nozzle (no porosity) and for the nozzle with porous surfaces of 10% openness are computed for Nozzle Pressure Ratio (NPR) varying from 1.29 to 9.54. The three-dimensional computational results indicate that baseline (no porosity) nozzle performance is dominated by unstable, shock-induced, boundary-layer separation at over-expanded conditions. For NPR less than or equal to 1.8, the separation is three-dimensional, somewhat unsteady, and confined to a bubble (with partial reattachment over the nozzle flap). For NPR greater than or equal to 2.0, separation is steady and fully detached, and becomes more two-dimensional as NPR increases. Numerical simulation of porous configurations indicates that a porous patch is capable of controlling off-design separation in the nozzle by either alleviating separation or by encouraging stable separation of the exhaust flow. In the present paper, computational simulation results, wall centerline pressure, Mach contours, and thrust efficiency ratio are presented, discussed, and compared with experimental data; the comparisons are in good agreement. The three-dimensional simulation improves the comparisons for over-expanded flow conditions relative to two-dimensional assumptions.

  14. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    NASA Technical Reports Server (NTRS)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  15. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things.

    PubMed

    Klonoff, David C

    2017-07-01

    The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.

  16. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
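
    The idea of computing co-occurrence-based texture measures directly from the image, without forming or storing a co-occurrence matrix, can be sketched as follows. This is a generic illustration rather than the paper's exact descriptors: for one pixel offset, a contrast measure and a correlation-like measure are obtained from sums, sums of squares, and cross products of the paired pixel values.

```python
import numpy as np

def cooccurrence_stats(image: np.ndarray, dx: int = 1, dy: int = 0):
    """Texture measures for a non-negative offset (dy, dx), computed from sums,
    sums of squares, and cross products of pixel pairs; no co-occurrence matrix
    is ever formed or stored."""
    assert dx >= 0 and dy >= 0
    img = image.astype(np.float64)
    H, W = img.shape
    a = img[:H - dy, :W - dx]          # reference pixels
    b = img[dy:, dx:]                  # pixels displaced by (dy, dx)
    n = a.size
    s_a, s_b = a.sum(), b.sum()
    ss_a, ss_b = (a * a).sum(), (b * b).sum()
    cross = (a * b).sum()
    mean_a, mean_b = s_a / n, s_b / n
    var_a = ss_a / n - mean_a ** 2
    var_b = ss_b / n - mean_b ** 2
    contrast = ((a - b) ** 2).mean()
    correlation = (cross / n - mean_a * mean_b) / np.sqrt(var_a * var_b)
    return {"contrast": contrast, "correlation": correlation}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))
    print(cooccurrence_stats(img, dx=1, dy=0))
```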

  17. Structure Computation of Quiet Spike[Trademark] Flight-Test Data During Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    System identification or mathematical modeling is used in the aerospace community for development of simulation models for robust control law design. These models are often described as linear time-invariant processes. Nevertheless, it is well known that the underlying process is often nonlinear. The reason for using a linear approach has been due to the lack of a proper set of tools for the identification of nonlinear systems. Over the past several decades, the controls and biomedical communities have made great advances in developing tools for the identification of nonlinear systems. These approaches are robust and readily applicable to aerospace systems. In this paper, we show the application of one such nonlinear system identification technique, structure detection, for the analysis of F-15B Quiet Spike(TradeMark) aeroservoelastic flight-test data. Structure detection is concerned with the selection of a subset of candidate terms that best describe the observed output. This is a necessary procedure to compute an efficient system description that may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance for the development of robust parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion, which may save significant development time and costs. The objectives of this study are to demonstrate via analysis of F-15B Quiet Spike aeroservoelastic flight-test data for several flight conditions that 1) linear models are inefficient for modeling aeroservoelastic data, 2) nonlinear identification provides a parsimonious model description while providing a high percent fit for cross-validated data, and 3) the model structure and parameters vary as the flight condition is altered.
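
    Structure detection, in the generic sense used above, can be illustrated with a simple greedy forward-selection sketch: candidate model terms are added one at a time, keeping whichever term most reduces the least-squares residual. This is only a schematic stand-in for the technique applied in the paper; the candidate matrix, stopping rule, and toy data are illustrative.

```python
import numpy as np

def forward_select(candidates, y, max_terms=3):
    """Greedy forward selection of candidate model terms (columns) for y.

    candidates : (N, M) array, one column per candidate term
    y          : (N,) observed output
    Returns the selected column indices and the final residual norm.
    """
    selected = []
    for _ in range(max_terms):
        best_j, best_err = None, np.inf
        for j in range(candidates.shape[1]):
            if j in selected:
                continue
            X = candidates[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            res = y - X @ coef
            err = float(res @ res)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected, np.sqrt(best_err)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)
    # candidate terms: u, u^2, u^3, sin(u), |u|
    cands = np.column_stack([u, u ** 2, u ** 3, np.sin(u), np.abs(u)])
    y = 2.0 * u - 0.5 * u ** 3 + 0.01 * rng.standard_normal(500)
    print(forward_select(cands, y, max_terms=2))
```

    A parsimonious structure is simply the smallest subset of terms that leaves the cross-validated residual acceptably small, which is the sense in which structure detection yields simpler models than a fully parameterized linear fit.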

  18. Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution

    NASA Astrophysics Data System (ADS)

    Hwang, Inseok

    The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.

  19. Bank Terminals

    NASA Technical Reports Server (NTRS)

    1978-01-01

    In the photo, employees of the UAB Bank, Knoxville, Tennessee, are using Teller Transaction Terminals manufactured by SCI Systems, Inc., Huntsville, Alabama, an electronics firm which has worked on a number of space projects under contract with NASA. The terminals are part of an advanced, computerized financial transaction system that offers high efficiency in bank operations. The key to the system's efficiency is a "multiplexing" technique developed for NASA's Space Shuttle. Multiplexing is simultaneous transmission of large amounts of data over a single transmission link at very high rates of speed. In the banking application, a small multiplex "data bus" interconnects all the terminals and a central computer which stores information on clients' accounts. The data bus replaces the maze of wiring that would be needed to connect each terminal separately, and it affords greater speed in recording transactions. The SCI system offers banks real-time data management through constant updating of the central computer. For example, a check is immediately cancelled at the teller's terminal and the computer is simultaneously advised of the transaction; under other methods, the check would be cancelled and the transaction recorded at the close of business. Teller checkout at the end of the day, conventionally a time-consuming matter of processing paper, can be accomplished in minutes by calling up a summary of the day's transactions. SCI manufactures other types of terminals for use in the system, such as an administrative terminal that provides an immediate printout of a client's account, and another for printing and recording savings account deposits and withdrawals. SCI systems have been installed in several banks in Tennessee, Arizona, and Oregon and additional installations are scheduled this year.

  20. Fast nonlinear gravity inversion in spherical coordinates with application to the South American Moho

    NASA Astrophysics Data System (ADS)

    Uieda, Leonardo; Barbosa, Valéria C. F.

    2017-01-01

    Estimating the relief of the Moho from gravity data is a computationally intensive nonlinear inverse problem. What is more, the modelling must take the Earth's curvature into account when the study area is of regional scale or greater. We present a regularized nonlinear gravity inversion method that has a low computational footprint and employs a spherical Earth approximation. To achieve this, we combine the highly efficient Bott's method with smoothness regularization and a discretization of the anomalous Moho into tesseroids (spherical prisms). The computational efficiency of our method is attained by harnessing the fact that all matrices involved are sparse. The inversion results are controlled by three hyperparameters: the regularization parameter, the anomalous Moho density-contrast, and the reference Moho depth. We estimate the regularization parameter using the method of hold-out cross-validation. Additionally, we estimate the density-contrast and the reference depth using knowledge of the Moho depth at certain points. We apply the proposed method to estimate the Moho depth for the South American continent using satellite gravity data and seismological data. The final Moho model is in accordance with previous gravity-derived models and seismological data. The misfit to the gravity and seismological data is worse in the Andes and best in oceanic areas, central Brazil and Patagonia, and along the Atlantic coast. Similarly to previous results, the model suggests a thinner crust of 30-35 km under the Andean foreland basins. Discrepancies with the seismological data are greatest in the Guyana Shield, the central Solimões and Amazonas Basins, the Paraná Basin, and the Borborema province. These differences suggest the existence of crustal or mantle density anomalies that were unaccounted for during gravity data processing.
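
    Bott's classic scheme, which the paper combines with tesseroids and smoothness regularization, updates the interface depth at each station in proportion to the residual anomaly via the Bouguer-slab approximation. The planar sketch below conveys only that iteration; the forward model, the sign convention, and the fixed iteration count are assumptions, and the spherical (tesseroid) modelling and regularization of the paper are omitted.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bott_iteration(gravity_obs, forward_model, z0, density_contrast, n_iter=20):
    """Planar Bott-style iteration for an interface depth from gravity residuals.

    gravity_obs      : observed anomaly at each station (m/s^2)
    forward_model    : callable(depths) -> predicted anomaly at the stations
    z0               : initial depth estimate (m), array-like
    density_contrast : density contrast across the interface (kg/m^3);
                       its sign and the depth convention must be consistent
                       with the forward model.
    """
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_iter):
        residual = gravity_obs - forward_model(z)
        # Bouguer-slab correction: each station's depth is nudged by the
        # infinite-slab thickness that would explain the local residual.
        z += residual / (2.0 * np.pi * G * density_contrast)
    return z
```

    The attraction of this update is that it needs no Jacobian or linear solve per iteration, which is what keeps the computational footprint low even before sparsity is exploited.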

  1. An efficient pseudomedian filter for tiling microarrays.

    PubMed

    Royce, Thomas E; Carriero, Nicholas J; Gerstein, Mark B

    2007-06-07

    Tiling microarrays are becoming an essential technology in the functional genomics toolbox. They have been applied to the tasks of novel transcript identification, elucidation of transcription factor binding sites, detection of methylated DNA and several other applications in several model organisms. These experiments are being conducted at increasingly finer resolutions as the microarray technology enjoys increasingly greater feature densities. The increased densities naturally lead to increased data analysis requirements. Specifically, the most widely employed algorithm for tiling array analysis involves smoothing observed signals by computing pseudomedians within sliding windows, an O(n^2 log n) calculation in each window. This poor time complexity is an issue for tiling array analysis and could prove to be a real bottleneck as tiling microarray experiments become grander in scope and finer in resolution. We therefore implemented Monahan's HLQEST algorithm that reduces the runtime complexity for computing the pseudomedian of n numbers to O(n log n) from O(n^2 log n). For a representative tiling microarray dataset, this modification reduced the smoothing procedure's runtime by nearly 90%. We then leveraged the fact that elements within sliding windows remain largely unchanged in overlapping windows (as one slides across genomic space) to further reduce computation by an additional 43%. This was achieved by the application of skip lists to maintaining a sorted list of values from window to window. This sorted list could be maintained with simple O(log n) inserts and deletes. We illustrate the favorable scaling properties of our algorithms with both time complexity analysis and benchmarking on synthetic datasets. Tiling microarray analyses that rely upon a sliding window pseudomedian calculation can require many hours of computation. We have eased this requirement significantly by implementing efficient algorithms that scale well with genomic feature density. This result not only speeds the current standard analyses, but also makes possible ones where many iterations of the filter may be required, such as might be required in a bootstrap or parameter estimation setting. Source code and executables are available at http://tiling.gersteinlab.org/pseudomedian/.
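
    The sliding-window bookkeeping described above can be sketched as follows. This illustration keeps the window sorted with simple bisect insert/delete operations (standing in for the paper's skip list) and computes the Hodges-Lehmann pseudomedian naively in each window (standing in for HLQEST), so the asymptotics differ from the paper's, but the "update rather than rebuild" idea is the same.

```python
from bisect import insort, bisect_left
from statistics import median

def pseudomedian(values):
    """Hodges-Lehmann estimator: median of all pairwise (Walsh) averages.
    O(n^2 log n) here; HLQEST computes the same quantity in O(n log n)."""
    n = len(values)
    return median((values[i] + values[j]) / 2.0
                  for i in range(n) for j in range(i, n))

def sliding_pseudomedian(signal, window):
    """Pseudomedian in each sliding window, maintaining one sorted list so each
    slide costs a logarithmic locate plus a list insert and delete."""
    sorted_win = sorted(signal[:window])
    out = [pseudomedian(sorted_win)]
    for k in range(window, len(signal)):
        old, new = signal[k - window], signal[k]
        del sorted_win[bisect_left(sorted_win, old)]   # drop the element leaving the window
        insort(sorted_win, new)                        # insert the element entering it
        out.append(pseudomedian(sorted_win))
    return out

if __name__ == "__main__":
    import random
    random.seed(0)
    sig = [random.gauss(0, 1) for _ in range(200)]
    print(sliding_pseudomedian(sig, 25)[:3])
```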

  2. An efficient pseudomedian filter for tiling microarrays

    PubMed Central

    Royce, Thomas E; Carriero, Nicholas J; Gerstein, Mark B

    2007-01-01

    Background Tiling microarrays are becoming an essential technology in the functional genomics toolbox. They have been applied to the tasks of novel transcript identification, elucidation of transcription factor binding sites, detection of methylated DNA and several other applications in several model organisms. These experiments are being conducted at increasingly finer resolutions as the microarray technology enjoys increasingly greater feature densities. The increased densities naturally lead to increased data analysis requirements. Specifically, the most widely employed algorithm for tiling array analysis involves smoothing observed signals by computing pseudomedians within sliding windows, an O(n^2 log n) calculation in each window. This poor time complexity is an issue for tiling array analysis and could prove to be a real bottleneck as tiling microarray experiments become grander in scope and finer in resolution. Results We therefore implemented Monahan's HLQEST algorithm that reduces the runtime complexity for computing the pseudomedian of n numbers to O(n log n) from O(n^2 log n). For a representative tiling microarray dataset, this modification reduced the smoothing procedure's runtime by nearly 90%. We then leveraged the fact that elements within sliding windows remain largely unchanged in overlapping windows (as one slides across genomic space) to further reduce computation by an additional 43%. This was achieved by the application of skip lists to maintaining a sorted list of values from window to window. This sorted list could be maintained with simple O(log n) inserts and deletes. We illustrate the favorable scaling properties of our algorithms with both time complexity analysis and benchmarking on synthetic datasets. Conclusion Tiling microarray analyses that rely upon a sliding window pseudomedian calculation can require many hours of computation. We have eased this requirement significantly by implementing efficient algorithms that scale well with genomic feature density. This result not only speeds the current standard analyses, but also makes possible ones where many iterations of the filter may be required, such as might be required in a bootstrap or parameter estimation setting. Source code and executables are available at http://tiling.gersteinlab.org/pseudomedian/. PMID:17555595

  3. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
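
    The recompression step described above, in which the ACA factors are tightened by thin QR factorizations followed by a truncated SVD of the small core, can be sketched generically in a few lines. The ACA itself is represented here only by its output factors, and the truncation tolerance is an illustrative choice.

```python
import numpy as np

def recompress(U, V, tol=1e-8):
    """Recompress a low-rank factorization A ~ U @ V.T to a smaller rank.

    U : (m, k) left factor, e.g. produced by ACA
    V : (n, k) right factor
    Returns (U2, V2) with A ~ U2 @ V2.T and the rank trimmed by a truncated SVD.
    """
    Qu, Ru = np.linalg.qr(U)                  # thin QR of each factor
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)       # SVD of the small k x k core
    r = max(1, int(np.sum(s > tol * s[0])))   # truncate at a relative tolerance
    U2 = Qu @ (W[:, :r] * s[:r])              # fold singular values into the left factor
    V2 = Qv @ Zt[:r].T
    return U2, V2

if __name__ == "__main__":
    # quick self-check on a random low-rank matrix
    rng = np.random.default_rng(0)
    m, n, k = 200, 150, 30
    U, V = rng.standard_normal((m, k)), rng.standard_normal((n, k))
    U2, V2 = recompress(U, V)
    print(np.linalg.norm(U @ V.T - U2 @ V2.T) / np.linalg.norm(U @ V.T))
```

    Because the SVD is only taken of the k-by-k core, the recompression costs far less than a full-scale SVD of the original operator while delivering essentially the same truncated accuracy.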

  4. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  5. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  6. Design of microfluidic channels for magnetic separation of malaria-infected red blood cells

    PubMed Central

    Wu, Wei-Tao; Martin, Andrea Blue; Gandini, Alberto; Aubry, Nadine; Massoudi, Mehrdad; Antaki, James F.

    2016-01-01

    This study is motivated by the development of a blood cell filtration device for removal of malaria-infected, parasitized red blood cells (pRBCs). The blood was modeled as a multi-component fluid using the computational fluid dynamics discrete element method (CFD-DEM), wherein plasma was treated as a Newtonian fluid and the red blood cells (RBCs) were modeled as soft-sphere solid particles which move under the influence of drag, collisions with other RBCs, and a magnetic force. The CFD-DEM model was first validated by a comparison with experimental data from Han et al. 2006 (Han and Frazier 2006) involving a microfluidic magnetophoretic separator for paramagnetic deoxygenated blood cells. The computational model was then applied to a parametric study of a parallel-plate separator having hematocrit of 40% with a 10% of the RBCs as pRBCs. Specifically, we investigated the hypothesis of introducing an upstream constriction to the channel to divert the magnetic cells within the near-wall layer where the magnetic force is greatest. Simulations compared the efficacy of various geometries upon the stratification efficiency of the pRBCs. For a channel with nominal height of 100 µm, the addition of an upstream constriction of 80% improved the proportion of pRBCs retained adjacent to the magnetic wall (separation efficiency) by almost 2 fold, from 26% to 49%. Further addition of a downstream diffuser reduced remixing, hence improved separation efficiency to 72%. The constriction introduced a greater pressure drop (from 17 to 495 Pa), which should be considered when scaling-up this design for a clinical-sized system. Overall, the advantages of this design include its ability to accommodate physiological hematocrit and high throughput – which is critical for clinical implementation as a blood-filtration system. PMID:27761107

  7. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  8. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution to the problem, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. Of course, there is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and how to communicate at each node efficiently, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  9. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  10. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  11. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  12. Strategic Resource Allocation in the Human Brain Supports Cognitive Coordination of Object and Spatial Working Memory

    PubMed Central

    Jackson, Margaret C; Morgan, Helen M; Shapiro, Kimron L; Mohr, Harald; Linden, David EJ

    2011-01-01

    The ability to integrate different types of information (e.g., object identity and spatial orientation) and maintain or manipulate them concurrently in working memory (WM) facilitates the flow of ongoing tasks and is essential for normal human cognition. Research shows that object and spatial information is maintained and manipulated in WM via separate pathways in the brain (object/ventral versus spatial/dorsal). How does the human brain coordinate the activity of different specialized systems to conjoin different types of information? Here we used functional magnetic resonance imaging to investigate conjunction- versus single-task manipulation of object (compute average color blend) and spatial (compute intermediate angle) information in WM. Object WM was associated with ventral (inferior frontal gyrus, occipital cortex), and spatial WM with dorsal (parietal cortex, superior frontal, and temporal sulci) regions. Conjoined object/spatial WM resulted in intermediate activity in these specialized areas, but greater activity in different prefrontal and parietal areas. Unique to our study, we found lower temporo-occipital activity and greater deactivation in temporal and medial prefrontal cortices for conjunction- versus single-tasks. Using structural equation modeling, we derived a conjunction-task connectivity model that comprises a frontoparietal network with a bidirectional DLPFC-VLPFC connection, and a direct parietal-extrastriate pathway. We suggest that these activation/deactivation patterns reflect efficient resource allocation throughout the brain and propose a new extended version of the biased competition model of WM. Hum Brain Mapp, 2011. © 2010 Wiley-Liss, Inc. PMID:20715083

  13. Estimation of relative free energies of binding using pre-computed ensembles based on the single-step free energy perturbation and the site-identification by Ligand competitive saturation approaches.

    PubMed

    Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D

    2017-06-05

    Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.
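
    The single-step perturbation idea rests on the standard Zwanzig free-energy perturbation identity, evaluated over a pre-computed ensemble of the reference state:

```latex
\Delta A_{0 \rightarrow 1} = -k_B T \,\ln \left\langle \exp\!\left( -\frac{U_1 - U_0}{k_B T} \right) \right\rangle_0
```

    Here U_0 and U_1 are the potential energies of the reference and modified ligands evaluated on the same stored configurations, and the angle brackets denote an average over the reference ensemble; re-using one stored trajectory for many candidate modifications is what makes the per-modification cost so small once the pre-computations are complete.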

  14. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of the exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic - in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could save not only a large amount of time needed for experiments but also validate the theory against experimental results quickly.
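
    A minimal multi-dimensional exhaustive search of the kind discussed above can be written in a few lines; the grid resolution and the toy objective below are illustrative, and a practical implementation would exploit the parallelism the authors allude to, since every grid evaluation is independent.

```python
import itertools
import numpy as np

def exhaustive_search(objective, bounds, points_per_dim=101):
    """Evaluate the objective on a full rectangular grid and keep the best point.

    bounds : list of (low, high) pairs, one per dimension
    """
    axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in bounds]
    best_x, best_f = None, np.inf
    for x in itertools.product(*axes):        # exhaustive: every grid point
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

if __name__ == "__main__":
    # toy nonlinear objective (illustrative only)
    rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    print(exhaustive_search(rosenbrock, [(-2, 2), (-1, 3)]))
```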

  15. Higher levels of trait emotional awareness are associated with more efficient global information integration throughout the brain: A graph-theoretic analysis of resting state functional connectivity.

    PubMed

    Smith, Ryan; Sanova, Anna; Alkozei, Anna; Lane, Richard D; Killgore, William D S

    2018-06-21

    Previous studies have suggested that trait differences in emotional awareness (tEA) are clinically relevant, and associated with differences in neural structure/function. While multiple leading theories suggest that conscious awareness requires widespread information integration across the brain, no study has yet tested the hypothesis that higher tEA corresponds to more efficient brain-wide information exchange. Twenty-six healthy volunteers (13 female) underwent a resting state functional magnetic resonance imaging scan, and completed the Levels of Emotional Awareness Scale (LEAS; a measure of tEA) and the Wechsler Abbreviated Scale of Intelligence (WASI-II; a measure of general intelligence [IQ]). Using a whole-brain (functionally defined) region-of-interest (ROI) atlas, we computed several graph theory metrics to assess the efficiency of brain-wide information exchange. After statistically controlling for differences in age, gender, and IQ, we first observed a significant relationship between higher LEAS scores and greater average degree (i.e., overall whole-brain network density). When controlling for average degree, we found that higher LEAS scores were also associated with shorter average path lengths across the collective network of all included ROIs. These results jointly suggest that individuals with higher tEA display more efficient global information exchange throughout the brain. This is consistent with the idea that conscious awareness requires global accessibility of represented information.
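
    As a rough illustration of the graph metrics used in this kind of analysis (not the study's exact pipeline), the sketch below thresholds a functional connectivity matrix and computes the average degree and the characteristic (average shortest) path length with networkx; the threshold and the input matrix are placeholders.

```python
import numpy as np
import networkx as nx

def global_graph_metrics(connectivity: np.ndarray, threshold: float = 0.3):
    """Average degree and average shortest path length of a thresholded,
    symmetric functional connectivity matrix (diagonal ignored)."""
    adj = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
    # path length is defined on the largest connected component
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    avg_path = nx.average_shortest_path_length(giant)
    return avg_degree, avg_path

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = rng.uniform(-1, 1, size=(30, 30))
    C = (C + C.T) / 2                       # placeholder connectivity matrix
    print(global_graph_metrics(C, threshold=0.3))
```

    Shorter average path lengths at a given density are the usual operationalization of "more efficient global information exchange" in this literature.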

  16. Productivity change of surgeons in an academic year.

    PubMed

    Nakata, Yoshinori; Watanabe, Yuichi; Otake, Hiroshi; Nakamura, Toshihito; Oiso, Giichiro; Sawa, Tomohiro

    2015-01-01

    The goal of this study was to calculate total factor productivity of surgeons in an academic year and to evaluate the effect of surgical trainees on their productivity. We analyzed all the surgical procedures performed from April 1 through September 30, 2013 in the Teikyo University Hospital. The nonradial and nonoriented Malmquist model under the variable returns-to-scale assumptions was employed. A decision-making unit is defined as a surgeon with the highest academic rank in the surgery. Inputs were defined as the number of physicians who assisted in surgery, and the time of surgical operation from skin incision to skin closure. The output was defined as the surgical fee for each surgery. April is the beginning month of a new academic year in Japan, and we divided the study period into April to June and July to September 2013. We computed each surgeon's Malmquist index, efficiency change, and technical change. We analyzed 2789 surgical procedures that were performed by 105 surgeons. The Malmquist index of all surgeons was significantly greater than 1 (p = 0.0033). The technical change was significantly greater than 1 (p < 0.0001). However, the efficiency change was not statistically significantly different from 1 (p = 0.1817). The surgeons are less productive in the beginning months of a new academic year. The main factor of this productivity loss is considered to be surgical training. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
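
    For reference, the standard Malmquist decomposition into efficiency change and technical change can be written with distance functions D^t as below; the study's nonradial, nonoriented, variable-returns-to-scale model differs in detail, so this is only the textbook form.

```latex
M^{t,t+1} \;=\; \underbrace{\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}}_{\text{efficiency change}}
\;\times\;
\underbrace{\left[ \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)} \cdot \frac{D^{t}\!\left(x^{t}, y^{t}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)} \right]^{1/2}}_{\text{technical change}}
```

    In the study's reading, an overall index greater than one driven by the technical-change factor, with an efficiency-change factor near one, is what points to the training burden early in the academic year.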

  17. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
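
    The flavor of block-to-processor assignment can be conveyed with a small greedy sketch (not the project's actual tool): each block, heaviest first, goes to the processor whose estimated finish time is currently smallest, with relative speed factors standing in for the heterogeneous environment.

```python
import heapq

def assign_blocks(block_costs, processor_speeds):
    """Greedy static assignment of solver blocks to heterogeneous processors.

    block_costs      : dict block_id -> work estimate (e.g. cells x iterations)
    processor_speeds : dict proc_id -> relative speed factor
    Returns dict proc_id -> list of assigned block_ids.
    """
    heap = [(0.0, p) for p in processor_speeds]        # (estimated finish time, proc)
    heapq.heapify(heap)
    assignment = {p: [] for p in processor_speeds}
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(block)
        heapq.heappush(heap, (load + cost / processor_speeds[p], p))
    return assignment

if __name__ == "__main__":
    blocks = {f"block{i}": c for i, c in enumerate([90, 60, 40, 30, 20, 10])}
    procs = {"fast": 2.0, "slow": 1.0}
    print(assign_blocks(blocks, procs))
```

    A dynamic balancer would re-run an assignment like this periodically, feeding it measured per-block times and current machine loads instead of static estimates.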

  18. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition, which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
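
    The general LSQR-with-damping idea can be sketched with SciPy, where the damp argument plays the role of the Tikhonov regularization parameter. The random forward matrix, the parameter sweep, and the crude selection criterion below are placeholders, not the paper's LSQR-based parameter-choice method.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def tikhonov_lsqr(A, b, reg_params):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 with LSQR for several lam values
    and keep the solution with the smallest residual-times-norm product
    (a crude stand-in for a principled parameter choice)."""
    best = None
    for lam in reg_params:
        x = lsqr(A, b, damp=lam)[0]
        score = np.linalg.norm(A @ x - b) * np.linalg.norm(x)
        if best is None or score < best[0]:
            best = (score, lam, x)
    return best[1], best[2]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 400))        # placeholder forward operator
    x_true = rng.standard_normal(400)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    lam, x = tikhonov_lsqr(A, b, np.logspace(-4, 1, 20))
    print("chosen lambda:", lam)
```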

  19. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  20. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    ERIC Educational Resources Information Center

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously…

  1. Decomposing Fuel Economy and Greenhouse Gas Regulatory Standards in the Energy Conversion Efficiency and Tractive Energy Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pannone, Greg; Thomas, John F; Reale, Michael

    The three foundational elements that determine mobile source energy use and tailpipe carbon dioxide (CO2) emissions are the tractive energy requirements of the vehicle, the on-cycle energy conversion efficiency of the propulsion system, and the energy source. The tractive energy requirements are determined by the vehicle's mass, aerodynamic drag, tire rolling resistance, and parasitic drag. On-cycle energy conversion of the propulsion system is dictated by the tractive efficiency, non-tractive energy use, kinetic energy recovery, and parasitic losses. The energy source determines the mobile source CO2 emissions. For current vehicles, tractive energy requirements and overall energy conversion efficiency are readily available from the decomposition of test data. For future applications, plausible levels of mass reduction, aerodynamic drag improvements, and tire rolling resistance can be transposed into the tractive energy domain. Similarly, by combining thermodynamic, mechanical efficiency, and kinetic energy recovery fundamentals with logical proxies, achievable levels of energy conversion efficiency can be established to allow for the evaluation of future powertrain requirements. Combining the plausible levels of tractive energy and on-cycle efficiency provides a means to compute sustainable vehicle and propulsion system scenarios that can achieve future regulations. Using these principles, the regulations established in the United States (U.S.) for fuel consumption and CO2 emissions are evaluated. Fleet-level scenarios are generated and compared to the technology deployment assumptions made during rule-making. When compared to the rule-making assumptions, the results indicate that a greater level of advanced vehicle and propulsion system technology deployment will be required to achieve the model year 2025 U.S. standards for fuel economy and CO2 emissions.
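
    For context, the tractive energy requirement referenced above is commonly written as the drive-cycle integral of a road-load force built from mass, rolling resistance, and aerodynamic drag terms; the form below is the usual textbook expression, not the specific decomposition used in the study.

```latex
F_{\mathrm{tractive}}(t) = m\,a(t) + m\,g\,C_{rr} + \tfrac{1}{2}\,\rho\,C_d\,A\,v(t)^2,
\qquad
E_{\mathrm{tractive}} = \int_{\mathrm{cycle}} \max\!\bigl(F_{\mathrm{tractive}}(t),\,0\bigr)\,v(t)\,dt
```

    The max(., 0) restricts the integral to phases where the wheels must deliver power, since braking phases do not add to the tractive energy demand at the wheels.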

  2. Reshaping Light-Emitting Diodes To Increase External Efficiency

    NASA Technical Reports Server (NTRS)

    Rogowski, Robert; Egalon, Claudio

    1995-01-01

    Light-emitting diodes (LEDs) reshaped, according to proposal, increasing amount of light emitted by decreasing fraction of light trapped via total internal reflection. Results in greater luminous output power for same electrical input power; greater external efficiency. Furthermore, light emitted by reshaped LEDs more nearly collimated (less diffuse). Concept potentially advantageous for conventional red-emitting LEDs. More advantageous for new "blue" LEDs, because luminous outputs and efficiencies of these devices very low. Another advantage, proposed conical shapes achieved relatively easily by chemical etching of semiconductor surfaces.

  3. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) < or = c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
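
    A one-dimensional, homogeneous-medium sketch of the k-space time step is given below; the sinc-based k-space correction applied to the spectral Laplacian makes the second-order time stepping exact for constant sound speed. Variable density, absorption, boundary treatment, and the full 2D/3D machinery of the paper are omitted, and the grid and pulse parameters are illustrative.

```python
import numpy as np

def kspace_step(p, p_prev, c0, dx, dt):
    """One k-space time step for the 1D homogeneous wave equation."""
    n = p.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    # k-space correction sinc(c0*k*dt/2) makes the second-order time stepping
    # exact for a homogeneous medium (np.sinc includes the factor of pi).
    kappa = np.sinc(c0 * k * dt / (2.0 * np.pi))
    lap = np.real(np.fft.ifft(-(k ** 2) * kappa ** 2 * np.fft.fft(p)))
    return 2.0 * p - p_prev + (c0 * dt) ** 2 * lap

if __name__ == "__main__":
    n, dx, c0 = 1024, 1e-3, 1500.0
    dt = 0.3 * dx / c0
    x = np.arange(n) * dx
    p = np.exp(-((x - x.mean()) / (20 * dx)) ** 2)   # Gaussian initial pulse
    p_prev = p.copy()                                # zero initial velocity
    for _ in range(500):
        p, p_prev = kspace_step(p, p_prev, c0, dx, dt), p
    print(p.max())
```

    Because the spatial operator is applied with FFTs, the cost per step scales like N log N in the number of grid points, which is what makes hundreds-of-wavelengths domains tractable.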

  4. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, such as the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms, for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  5. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
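
    The efficiency claim hinges on the fact that, after an FFT, the x-update of the ADMM iteration decouples into independent rank-one-plus-diagonal systems, one per frequency, each invertible in closed form by the Sherman-Morrison identity. The sketch below solves that subproblem for 1-D circular convolution; it is a simplified, single-signal illustration with assumed names and sizes, not the patented dictionary-learning algorithm itself.

        import numpy as np

        def csc_x_update(d, s, z, rho):
            # Frequency-domain solve of  argmin_x 0.5*||sum_m d_m (*) x_m - s||^2 + (rho/2)*sum_m ||x_m - z_m||^2
            # for circular convolution (*).  Cost O(M N log N): FFTs plus a per-frequency
            # Sherman-Morrison inversion of the rank-one-plus-diagonal normal equations.
            n = s.size
            dh = np.fft.fft(d, n=n, axis=1)      # (M, N) zero-padded filter spectra
            sh = np.fft.fft(s)                   # (N,)  signal spectrum
            zh = np.fft.fft(z, n=n, axis=1)      # (M, N) auxiliary (e.g. soft-thresholded) variable
            r = np.conj(dh) * sh + rho * zh      # right-hand side per frequency
            num = np.sum(dh * r, axis=0)
            den = rho + np.sum(np.abs(dh) ** 2, axis=0)
            xh = (r - np.conj(dh) * (num / den)) / rho
            return np.real(np.fft.ifft(xh, axis=1))

        # Toy usage: 4 short filters, one signal, z from a previous ADMM step (zeros here).
        rng = np.random.default_rng(1)
        d, s, z = rng.standard_normal((4, 16)), rng.standard_normal(256), np.zeros((4, 256))
        x = csc_x_update(d, s, z, rho=1.0)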

  6. Speeding Up Ecological and Evolutionary Computations in R; Essentials of High Performance Computing for Biologists

    PubMed Central

    Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke

    2015-01-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842

  7. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent processor interprocessor communications. This paper will consider the simulation of only feed forward neural network although this method is extendable to recurrent networks.

  8. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    PubMed

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
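
    At the single-determinant level the overlap reduces to a determinant of the mutual molecular-orbital overlap matrix; the article's contribution lies in organising the many such determinants that arise for CI-type wave functions so that intermediates are reused. A minimal numpy sketch of that basic building block (real orbitals assumed, matrices invented for illustration):

        import numpy as np

        def determinant_overlap(c_a, c_b, s_ao):
            # Overlap <Phi_A|Phi_B> of two single Slater determinants (real orbitals).
            # c_a, c_b: (n_AO, n_occ) occupied MO coefficients; s_ao: mixed AO overlap matrix.
            s_mo = c_a.T @ s_ao @ c_b            # mutual MO overlap matrix
            return np.linalg.det(s_mo)

        # Invented 6-AO, 3-orbital example (coefficients not orthonormalised; purely a numerical demo).
        rng = np.random.default_rng(0)
        s_ao = np.eye(6) + 0.01 * rng.standard_normal((6, 6))
        c_a, c_b = rng.standard_normal((6, 3)), rng.standard_normal((6, 3))
        print(determinant_overlap(c_a, c_b, s_ao))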

  9. Motion Planning of Two Stacker Cranes in a Large-Scale Automated Storage/Retrieval System

    NASA Astrophysics Data System (ADS)

    Kung, Yiheng; Kobayashi, Yoshimasa; Higashi, Toshimitsu; Ota, Jun

    We propose a method for reducing the computational time of motion planning for stacker cranes. Most automated storage/retrieval systems (AS/RSs) are equipped with only one stacker crane. However, this is logistically challenging, and greater work efficiency in warehouses, for example by using two stacker cranes, is required. In this paper, a warehouse with two stacker cranes working simultaneously is proposed. Unlike in warehouses with only one crane, trajectory planning with two cranes is very difficult: since the two cranes work together, a proper trajectory must be planned to avoid collision. However, verifying collisions is complicated and requires a considerable amount of computational time. As transport work in AS/RSs occurs randomly, motion planning cannot be conducted in advance, and planning an appropriate trajectory within a restricted duration is a difficult task. We thereby address the current problem of motion planning requiring extensive calculation time. As a solution, we propose a "free-step" to simplify the procedure of collision verification and reduce the computational time. In addition, we propose a method to reschedule the order of collision verification in order to find an appropriate trajectory in less time. With the proposed method, the calculation time is reduced to less than 1/300 of that achieved in previous research.

  10. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict

  11. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  12. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.

  13. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
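
    Since the abstract is truncated, the sketch below only illustrates the general idea of approximating a posterior over student ability from logged item responses, here with a Rasch-type response model evaluated on a grid; the item difficulties, prior, and grid are assumptions for illustration and do not reproduce the paper's specific method.

        import numpy as np

        def ability_posterior(responses, difficulties, grid=None):
            # Grid approximation of the posterior over ability theta in a Rasch-type model.
            grid = np.linspace(-4.0, 4.0, 161) if grid is None else grid
            theta = grid[:, None]
            p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(difficulties))))   # P(correct | theta)
            r = np.asarray(responses)
            like = np.prod(np.where(r == 1, p, 1.0 - p), axis=1)
            post = like * np.exp(-0.5 * grid**2)                            # standard-normal prior
            return grid, post / post.sum()

        # Five logged item outcomes with assumed item difficulties.
        grid, post = ability_posterior([1, 1, 0, 1, 0], [-1.0, 0.0, 0.5, 1.0, 2.0])
        theta_hat = float(np.sum(grid * post))                              # posterior-mean ability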

  14. Facilitation as Attenuating of Environmental Stress among Structured Microbial Populations.

    PubMed

    Martins, Suzana Cláudia Silveira; Santaella, Sandra Tédde; Martins, Claudia Miranda; Martins, Rogério Parentoni

    2016-01-01

    There is currently an intense debate in microbial societies on whether evolution in complex communities is driven by competition or cooperation. Since Darwin, competition for scarce food resources has been considered the main ecological interaction shaping population dynamics and community structure both in vivo and in vitro. However, facilitation may be widespread across several animal and plant species. This could also be true in microbial strains growing under environmental stress. Pure and mixed strains of Serratia marcescens and Candida rugosa were grown in mineral culture media containing phenol. Growth rates were estimated as the angular coefficients computed from linearized growth curves. Fitness index was estimated as the quotient between growth rates computed for lineages grown in isolation and in mixed cultures. The growth rates were significantly higher in associated cultures than in pure cultures and fitness index was greater than 1 for both microbial species showing that the interaction between Serratia marcescens and Candida rugosa yielded more efficient phenol utilization by both lineages. This result corroborates the hypothesis that facilitation between microbial strains can increase their fitness and performance in environmental bioremediation.
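
    The growth-rate and fitness-index computations described above amount to the slope of a log-linear fit and a ratio of slopes; a small numpy sketch with invented readings (and with the ratio assumed to be mixed-culture over pure-culture rate) is given below.

        import numpy as np

        def growth_rate(t, counts):
            # Angular coefficient (slope) of the linearised growth curve ln(N) versus t.
            return np.polyfit(t, np.log(counts), 1)[0]

        t = np.array([0.0, 4.0, 8.0, 12.0, 16.0])        # hours (invented readings)
        pure = np.array([1.0, 1.6, 2.7, 4.4, 7.3])       # lineage grown in isolation
        mixed = np.array([1.0, 1.9, 3.6, 6.9, 13.2])     # same lineage in mixed culture

        # Fitness index, assumed here as mixed-culture rate over pure-culture rate;
        # a value greater than 1 indicates facilitation.
        fitness_index = growth_rate(t, mixed) / growth_rate(t, pure)
        print(fitness_index)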

  15. Implementation of Ada protocols on Mil-STD-1553 B data bus

    NASA Technical Reports Server (NTRS)

    Ruhman, Smil; Rosemberg, Flavia

    1986-01-01

    Standardization activity of data communication in avionic systems started in 1968 for the purpose of total system integration and the elimination of heavy wire bundles carrying signals between various subassemblies. The growing complexity of avionic systems is straining the capabilities of MIL-STD-1553 B (first issued in 1973), but a much greater challenge to it is posed by Ada, the standard language adopted for real-time embedded computer systems. Hardware implementation of Ada communication protocols in a contention/token bus or token ring network is proposed. However, during the transition period when the current command/response multiplex data bus is still flourishing and the development environment for distributed multi-computer Ada systems is as yet lacking, a temporary accommodation of the standard language with the standard bus could be very useful and even highly desirable. By concentrating all status information and decisions at the bus controller, it was found to be possible to construct an elegant and efficient hardware implementation of the Ada protocols at the bus interface. This solution is discussed.

  16. Effect of data truncation in an implementation of pixel clustering on a custom computing machine

    NASA Astrophysics Data System (ADS)

    Leeser, Miriam E.; Theiler, James P.; Estlick, Michael; Kitaryeva, Natalya V.; Szymanski, John J.

    2000-10-01

    We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism, and ultimately a more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find, however, that the clustering algorithm can tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.
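
    A software analogue of the truncation experiment is easy to set up: quantise each spectral channel to a few bits, cluster the truncated pixels with k-means, but accumulate the cluster centres at higher precision. The bit width, data, and simple Lloyd iteration below are illustrative only and do not represent the FPGA design.

        import numpy as np

        def truncate(x, bits, lo, hi):
            # Quantise values in [lo, hi] to the given number of bits per channel.
            levels = (1 << bits) - 1
            return np.clip(np.round((x - lo) / (hi - lo) * levels), 0, levels)

        def kmeans_truncated(pixels, k, bits=4, n_iter=20, seed=0):
            rng = np.random.default_rng(seed)
            data = truncate(pixels, bits, pixels.min(), pixels.max())   # few-bit pixel values
            centers = data[rng.choice(len(data), k, replace=False)].astype(float)
            for _ in range(n_iter):
                d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                for j in range(k):                                      # centres kept at higher precision
                    if np.any(labels == j):
                        centers[j] = data[labels == j].mean(axis=0)
            return labels, centers

        # Toy "hyperspectral" data: 1000 pixels with 8 spectral channels.
        pixels = np.random.default_rng(1).normal(size=(1000, 8))
        labels, centers = kmeans_truncated(pixels, k=5, bits=4)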

  17. Operational evaluation of high-throughput community-based mass prophylaxis using Just-in-time training.

    PubMed

    Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei

    2007-01-01

    Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.

  18. A LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR (LASSO) FOR NONLINEAR SYSTEM IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Lofberg, Johan; Brenner, Martin J.

    2006-01-01

    Identification of parametric nonlinear models involves estimating unknown parameters and detecting their underlying structure. Structure computation is concerned with selecting a subset of parameters to give a parsimonious description of the system, which may afford greater insight into the functionality of the system or a simpler controller design. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear systems. The LASSO minimises the residual sum of squares with the addition of an ℓ1 penalty term on the parameter vector of the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems, which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. The performance of this LASSO structure detection method was evaluated by using it to estimate the structure of a nonlinear polynomial model. Applicability of the method to more complex systems, such as those encountered in aerospace applications, was shown by identifying a parsimonious system description of the F/A-18 Active Aeroelastic Wing using flight test data.
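
    The structure-detection idea can be reproduced in miniature with a polynomial (pseudolinear) regressor library and an ℓ1 penalty; scikit-learn's Lasso is used here purely for illustration, and the true model, noise level, and penalty weight are invented.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        u = rng.uniform(-1, 1, size=(400, 2))
        # Invented "true" system using only a few polynomial terms.
        y = 2 * u[:, 0] + 0.5 * u[:, 0] * u[:, 1] - u[:, 1] ** 3 + 0.05 * rng.standard_normal(400)

        phi = PolynomialFeatures(degree=3, include_bias=False)
        X = phi.fit_transform(u)                      # library of candidate regressors (structure)

        model = Lasso(alpha=0.01).fit(X, y)           # the l1 penalty drives irrelevant terms to zero
        selected = [name for name, w in zip(phi.get_feature_names_out(["u1", "u2"]), model.coef_)
                    if abs(w) > 1e-3]
        print(selected)                               # parsimonious structure estimate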

  19. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  20. Computer modeling in the practice of acoustical consulting: An evolving variety of uses from marketing and diagnosis through design to eventually research

    NASA Astrophysics Data System (ADS)

    Madaras, Gary S.

    2002-05-01

    The use of computer modeling as a marketing, diagnosis, design, and research tool in the practice of acoustical consulting is discussed. From the time it is obtained, the software can be used as an effective marketing tool. It is not until the software basics are learned and some amount of testing and verification occurs that the software can be used as a tool for diagnosing the acoustics of existing rooms. A greater understanding of the output types and formats as well as experience in interpreting the results is required before the software can be used as an efficient design tool. Lastly, it is only after repetitive use as a design tool that the software can be used as a cost-effective means of conducting research in practice. The discussion is supplemented with specific examples of actual projects provided by various consultants within multiple firms. Focus is placed on the use of CATT-Acoustic software and predicting the room acoustics of large performing arts halls as well as other public assembly spaces.

  1. Facilitation as Attenuating of Environmental Stress among Structured Microbial Populations

    PubMed Central

    Santaella, Sandra Tédde; Martins, Claudia Miranda; Martins, Rogério Parentoni

    2016-01-01

    There is currently an intense debate in microbial societies on whether evolution in complex communities is driven by competition or cooperation. Since Darwin, competition for scarce food resources has been considered the main ecological interaction shaping population dynamics and community structure both in vivo and in vitro. However, facilitation may be widespread across several animal and plant species. This could also be true in microbial strains growing under environmental stress. Pure and mixed strains of Serratia marcescens and Candida rugosa were grown in mineral culture media containing phenol. Growth rates were estimated as the angular coefficients computed from linearized growth curves. Fitness index was estimated as the quotient between growth rates computed for lineages grown in isolation and in mixed cultures. The growth rates were significantly higher in associated cultures than in pure cultures and fitness index was greater than 1 for both microbial species showing that the interaction between Serratia marcescens and Candida rugosa yielded more efficient phenol utilization by both lineages. This result corroborates the hypothesis that facilitation between microbial strains can increase their fitness and performance in environmental bioremediation. PMID:26904719

  2. Development of a Compact Eleven Feed Cryostat for the Patriot 12-m Antenna System

    NASA Technical Reports Server (NTRS)

    Beaudoin, Christopher; Kildal, Per-Simon; Yang, Jian; Pantaleev, Miroslav

    2010-01-01

    The Eleven antenna has constant beam width, constant phase center location, and low spillover over a decade bandwidth. Therefore, it can feed a reflector for high aperture efficiency (also called feed efficiency). It is equally important that the feed efficiency and its subefficiencies not be degraded significantly by installing the feed in a cryostat. The MIT Haystack Observatory, with guidance from Onsala Space Observatory and Chalmers University, has been working to integrate the Eleven antenna into a compact cryostat suitable for the Patriot 12-m antenna. Since the analysis of the feed efficiencies in this presentation is purely computational, we first demonstrate the validity of the computed results by comparing them to measurements. Subsequently, we analyze the dependence of the cryostat size on the feed efficiencies, and, lastly, the Patriot 12-m subreflector is incorporated into the computational model to assess the overall broadband efficiency of the antenna system.

  3. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.

  4. Efficiency of including first-generation information in second-generation ranking and selection: results of computer simulation.

    Treesearch

    T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson

    2006-01-01

    Using computer simulation, we evaluated the impact of using first-generation information to increase selection efficiency in a second-generation breeding program. Selection efficiency was compared in terms of increase in rank correlation between estimated and true breeding values (i.e., ranking accuracy), reduction in coefficient of variation of correlation...

  5. High-Performance Computing Data Center Efficiency Dashboard

    Science.gov Websites

    Components of the energy-efficient facility include an energy recovery water (ERW) loop with a heat exchanger for energy recovery, a thermosyphon heat exchanger between the ERW loop and the cooling tower loop, and evaporative cooling towers.

  6. Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking

    PubMed Central

    Azari, David P.; Pugh, Carla M.; Laufer, Shlomi; Kwan, Calvin; Chen, Chia-Hsiung; Yen, Thomas Y.; Hu, Yu Hen; Radwin, Robert G.

    2016-01-01

    Objective This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). Background There are currently no standardized and widely accepted CBE screening techniques. Methods Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. Results Mean differences between time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as either sporadic, localized, thorough, or efficient for both temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9 %) of the trials exhibited both high temporal and spatial efficiency. Conclusions Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. Application Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment. PMID:26546381

  7. Solving systems of linear equations by GPU-based matrix factorization in a Science Ground Segment

    NASA Astrophysics Data System (ADS)

    Legendre, Maxime; Schmidt, Albrecht; Moussaoui, Saïd; Lammers, Uwe

    2013-11-01

    Recently, Graphics Cards have been used to offload scientific computations from traditional CPUs for greater efficiency. This paper investigates the adaptation of a real-world linear system solver, which plays a central role in the data processing of the Science Ground Segment of ESA's astrometric Gaia mission. The paper quantifies the resource trade-offs between traditional CPU implementations and modern CUDA based GPU implementations. It also analyses the impact on the pipeline architecture and system development. The investigation starts from both a selected baseline algorithm with a reference implementation and a traditional linear system solver and then explores various modifications to control flow and data layout to achieve higher resource efficiency. It turns out that with the current state of the art, the modifications impact non-technical system attributes. For example, the control flow of the original modified Cholesky transform is modified so that locality of the code and verifiability deteriorate. The maintainability of the system is affected as well. On the system level, users will have to deal with more complex configuration control and testing procedures.

  8. A Q-Ising model application for linear-time image segmentation

    NASA Astrophysics Data System (ADS)

    Bentrem, Frank W.

    2010-10-01

    A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems ( i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that, with conventional methods, it processes in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
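
    To make the energy-minimisation idea concrete, the sketch below runs iterated conditional modes (ICM) sweeps over a Q-state Potts energy with a squared data term and a nearest-neighbour disagreement penalty; each sweep is linear in the number of pixels, but this is a generic illustration rather than the article's specific four-model technique, and the parameters are arbitrary.

        import numpy as np

        def potts_icm(image, q=4, beta=2.0, n_sweeps=5):
            # Greedy Potts-energy descent (ICM): squared data term + beta per disagreeing 4-neighbour.
            mu = np.linspace(0.0, 1.0, q)                     # class "grey levels"
            labels = np.argmin(np.abs(image[..., None] - mu), axis=-1)
            h, w = image.shape
            for _ in range(n_sweeps):                         # each sweep is linear in the pixel count
                for i in range(h):
                    for j in range(w):
                        best_lab, best_e = labels[i, j], np.inf
                        for lab in range(q):
                            e = (image[i, j] - mu[lab]) ** 2
                            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                                ni, nj = i + di, j + dj
                                if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != lab:
                                    e += beta
                            if e < best_e:
                                best_lab, best_e = lab, e
                        labels[i, j] = best_lab
            return labels

        segmented = potts_icm(np.random.default_rng(0).random((32, 32)))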

  9. The 27-28 October 1986 FIRE IFO cirrus case study: Cirrus parameter relationships derived from satellite and lidar data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Young, David F.; Sassen, Kenneth; Alvarez, Joseph M.; Grund, Christian J.

    1989-01-01

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground-based, aircraft, and satellite measurements taken as part of the First ISCCP Regional Experiment (FIRE) Cirrus Intensive Field Observations (IFO) during October and November 1986. Lidar backscatter data are used to define cloud base, center, and top heights and the corresponding temperatures. Coincident GOES 4 km visible (0.65 microns) and 8 km infrared window (11.5 microns) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8 km for the 71 scenes. An average visible scattering efficiency of 2.1 was found for this data set. The results reveal a significant dependence of scattering efficiency on cloud temperature.

  10. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method to predict the shock motion in the sense that it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, to solve and optimize large scale configurations, the main bottleneck is the computational cost. Among the existing numerical GSD schemes, there is only one that has been implemented on parallel computers, with the purpose to analyze detonation waves. To extend the computational advantage of the GSD theory to more general applications such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
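
    The tridiagonal systems mentioned above can, in the serial case, be solved in O(n) with the Thomas algorithm; the paper's massively parallel solver replaces this recurrence, but the serial version below (with illustrative array names) is a useful reference point.

        import numpy as np

        def thomas(a, b, c, d):
            # O(n) solve of a tridiagonal system: a = sub-diagonal (a[0] unused),
            # b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                             # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):                    # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Quick check against a dense solve.
        n = 6
        a, b, c = np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        d = np.arange(1.0, n + 1)
        assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))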

  11. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
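
    The real implementation should be taken from SSR_pipeline itself; the snippet below only illustrates how a perfect simple-sequence-repeat search can be phrased as a regular expression over a composite read, with the motif-length range and minimum copy number as adjustable (and here arbitrary) parameters.

        import re

        def find_ssrs(seq, min_motif=2, max_motif=6, min_copies=4):
            # Yield (start, motif, copies) for perfect tandem repeats in a DNA sequence.
            pattern = re.compile(r"([ACGT]{%d,%d}?)\1{%d,}" % (min_motif, max_motif, min_copies - 1))
            for m in pattern.finditer(seq.upper()):
                motif = m.group(1)
                yield m.start(), motif, len(m.group(0)) // len(motif)

        read = "TTGACACACACACACGGATCTAGCTAGCTAGCTAGCTAGTT"
        print(list(find_ssrs(read)))   # an (AC)n and a (CTAG)n repeat in this invented read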

  12. Scintillation detector efficiencies for neutrons in the energy region above 20 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-01-01

    The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data have been made. There is generally overall good agreement (less than 10% differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e. yield versus response pulse height, are generally within about 5% on the average for incident neutron energies between 16 and 50 MeV and for the upper 70% of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon. 32 refs., 6 figs., 2 tabs.

  13. Scintillation detector efficiencies for neutrons in the energy region above 20 MeV

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    The computer program SCINFUL (for SCINtillator FUL1 response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data were made. There is generally overall good agreement (less than 10 pct. differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e., yield versus response pulse height, are generally within about 5 pct. on the average for incident neutron energies between 16 and 50 MeV and for the upper 70 pct. of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon.

  14. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data.

    PubMed

    Miller, Mark P; Knaus, Brian J; Mullins, Thomas D; Haig, Susan M

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).

  15. Sex differences in the relationship between white matter connectivity and creativity.

    PubMed

    Ryman, Sephira G; van den Heuvel, Martijn P; Yeo, Ronald A; Caprihan, Arvind; Carrasco, Jessica; Vakhtin, Andrei A; Flores, Ranee A; Wertz, Christopher; Jung, Rex E

    2014-11-01

    Creative cognition emerges from a complex network of interacting brain regions. This study investigated the relationship between the structural organization of the human brain and aspects of creative cognition tapped by divergent thinking tasks. Diffusion weighted imaging (DWI) was used to obtain fiber tracts from 83 segmented cortical regions. This information was represented as a network and metrics of connectivity organization, including connectivity strength, clustering and communication efficiency were computed, and their relationship to individual levels of creativity was examined. Permutation testing identified significant sex differences in the relationship between global connectivity and creativity as measured by divergent thinking tests. Females demonstrated significant inverse relationships between global connectivity and creative cognition, whereas there were no significant relationships observed in males. Node specific analyses revealed inverse relationships across measures of connectivity, efficiency, clustering and creative cognition in widespread regions in females. Our findings suggest that females involve more regions of the brain in processing to produce novel ideas to solutions, perhaps at the expense of efficiency (greater path lengths). Males, in contrast, exhibited few, relatively weak positive relationships across these measures. Extending recent observations of sex differences in connectome structure, our findings of sexually dimorphic relationships suggest a unique topological organization of connectivity underlying the generation of novel ideas in males and females. Published by Elsevier Inc.

  16. Broadband linearisation of high-efficiency power amplifiers

    NASA Technical Reports Server (NTRS)

    Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.

    1993-01-01

    A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g. class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved are required for most satellite systems, however if greater linearity is required, the technique may be used in addition to conventional pre-distortion techniques.

  17. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).

  18. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
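
    As a point of reference for the ordinary least squares calibration discussed above, the sketch below fits a single parameter of a toy computer model to noisy "physical" observations with scipy; the model, data, and parameter are invented, and the proposed L2 calibration, which additionally models the discrepancy, is not implemented here.

        import numpy as np
        from scipy.optimize import least_squares

        def computer_model(x, theta):
            # Toy simulator output at inputs x for calibration parameter theta (invented).
            return np.exp(-theta * x)

        rng = np.random.default_rng(0)
        x_obs = np.linspace(0.0, 2.0, 30)
        # "Physical" observations from an imperfect version of the model (extra trend) plus noise.
        y_obs = np.exp(-1.3 * x_obs) + 0.1 * x_obs + 0.02 * rng.standard_normal(x_obs.size)

        fit = least_squares(lambda th: computer_model(x_obs, th[0]) - y_obs, x0=[1.0])
        theta_ols = fit.x[0]          # ordinary least squares calibration estimate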

  19. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  20. Details of insect wing design and deformation enhance aerodynamic function and flight efficiency.

    PubMed

    Young, John; Walker, Simon M; Bomphrey, Richard J; Taylor, Graham K; Thomas, Adrian L R

    2009-09-18

    Insect wings are complex structures that deform dramatically in flight. We analyzed the aerodynamic consequences of wing deformation in locusts using a three-dimensional computational fluid dynamics simulation based on detailed wing kinematics. We validated the simulation against smoke visualizations and digital particle image velocimetry on real locusts. We then used the validated model to explore the effects of wing topography and deformation, first by removing camber while keeping the same time-varying twist distribution, and second by removing camber and spanwise twist. The full-fidelity model achieved greater power economy than the uncambered model, which performed better than the untwisted model, showing that the details of insect wing topography and deformation are important aerodynamically. Such details are likely to be important in engineering applications of flapping flight.

  1. Design and Strength check of Large Blow Molding Machine Rack

    NASA Astrophysics Data System (ADS)

    Fei-fei, GU; Zhi-song, ZHU; Xiao-zhao, YAN; Yi-min, ZHU

    The design procedure for a large blow molding machine rack is discussed in this article, and a strength-checking method is presented. Finite element analysis is conducted in the design procedure with the ANSYS software. The actual load-bearing situation of the rack is fully considered, and the necessary simplifications of the model are made. The three-dimensional linear beam element BEAM188 is used for the analysis, and MESH200 is used for meshing; this simplifies the analysis process and improves computational efficiency. The maximum deformation of the rack is 8.037 mm and occurs at the position of the accumulator head. The results show that the rack meets the national standard, which requires the bending deformation to be no greater than 0.3% of the total channel length, and also meets the strength requirement, the maximum stress being 54.112 MPa.

  2. Comparison of different approaches of modelling in a masonry building

    NASA Astrophysics Data System (ADS)

    Saba, M.; Meloni, D.

    2017-12-01

    The objective of the present work is to model a simple masonry building with two different modelling methods in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements Method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and a comparison and analysis of the reasons is proposed.

  3. High quantum efficiency photocathode simulation for the investigation of novel structured designs

    DOE PAGES

    MacPhee, A. G.; Nagel, S. R.; Bell, P. M.; ...

    2014-09-02

    A computer model in CST Studio Suite has been developed to evaluate several novel geometrically enhanced photocathode designs. This work was aimed at identifying a structure that would increase the total electron yield by a factor of two or greater in the 1–30 keV range. The modeling software was used to simulate the electric field and generate particle tracking for several potential structures. The final photocathode structure has been tailored to meet a set of detector performance requirements, namely, a spatial resolution of <40 μm and a temporal spread of 1–10 ps. As a result, we present the details of the geometrically enhanced photocathode model and resulting static field and electron emission characteristics.

  4. Integration of Multifidelity Multidisciplinary Computer Codes for Design and Analysis of Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Geiselhart, Karl A.; Ozoroski, Lori P.; Fenbert, James W.; Shields, Elwood W.; Li, Wu

    2011-01-01

    This paper documents the development of a conceptual level integrated process for design and analysis of efficient and environmentally acceptable supersonic aircraft. To overcome the technical challenges to achieve this goal, a conceptual design capability which provides users with the ability to examine the integrated solution between all disciplines and facilitates the application of multidiscipline design, analysis, and optimization on a scale greater than previously achieved, is needed. The described capability is both an interactive design environment as well as a high powered optimization system with a unique blend of low, mixed and high-fidelity engineering tools combined together in the software integration framework, ModelCenter. The various modules are described and capabilities of the system are demonstrated. The current limitations and proposed future enhancements are also discussed.

  5. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed, from general layout to technical details all aspects are covered. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Secure Multiparty Quantum Computation for Summation and Multiplication.

    PubMed

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-21

    As fundamental primitives, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach ensures unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
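
    The quantum protocol itself cannot be condensed into a few lines, but the classical baseline it is compared against, additive secret sharing for multiparty summation, can be; the party count, modulus, and inputs below are illustrative, and multiplication is not shown.

        import secrets

        def share(value, n_parties, modulus):
            # Split a private integer into n additive shares modulo `modulus`.
            shares = [secrets.randbelow(modulus) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % modulus)
            return shares

        def secure_sum(private_inputs, modulus=2**61 - 1):
            # Each party shares its input; summing the per-position shares reveals only the total.
            n = len(private_inputs)
            all_shares = [share(v, n, modulus) for v in private_inputs]
            partial = [sum(all_shares[i][j] for i in range(n)) % modulus for j in range(n)]
            return sum(partial) % modulus

        print(secure_sum([12, 7, 30]))   # 49, with no single party revealing its own input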

  7. Secure Multiparty Quantum Computation for Summation and Multiplication

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    As fundamental primitives, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach ensures unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197

  8. Computer Vision Syndrome.

    PubMed

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  9. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency under parallel computing, although it often suffers from a lack of clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. The proposed method has several advantages: (1) Thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) The use of both P- and S-wave information can further enhance the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) The two-stage inversion procedure proposed in this paper can achieve a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of, and undesired disturbances to, the inversion results obtained from the first stage, we apply total variation (TV) regularization as the second stage. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to that of the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as the P-wave velocity and S-wave velocity, even when the seismic data are noisy, with a signal-to-noise ratio of 5.
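
    The second-stage idea, smoothing a trace-by-trace estimate with a total-variation penalty, can be illustrated with a generic TV denoiser standing in for the paper's split Bregman solver; the synthetic "parameter section" below is invented and scikit-image's Chambolle routine is used only as a convenient substitute.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(0)
        # Synthetic blocky "parameter section" (e.g. S-velocity), traces along the second axis.
        true_section = np.repeat(np.repeat(rng.uniform(2.0, 3.5, (6, 8)), 20, axis=0), 25, axis=1)
        # Stage 1 (independent trace-by-trace inversion) typically leaves trace-to-trace jitter.
        stage1 = true_section + 0.15 * rng.standard_normal(true_section.shape)

        # Stage 2: a TV penalty enforces spatial/temporal continuity and deblurs the estimate.
        stage2 = denoise_tv_chambolle(stage1, weight=0.2)
        print(np.abs(stage1 - true_section).mean(), np.abs(stage2 - true_section).mean())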

  10. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  11. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  12. Efficient modeling of vector hysteresis using a novel Hopfield neural network implementation of Stoner–Wohlfarth-like operators

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2012-01-01

    Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446

  13. Entanglement negativity bounds for fermionic Gaussian states

    NASA Astrophysics Data System (ADS)

    Eisert, Jens; Eisler, Viktor; Zimborás, Zoltán

    2018-04-01

    The entanglement negativity is a versatile measure of entanglement that has numerous applications in quantum information and in condensed matter theory. It can not only efficiently be computed in the Hilbert space dimension, but for noninteracting bosonic systems, one can compute the negativity efficiently in the number of modes. However, such an efficient computation does not carry over to the fermionic realm, the ultimate reason for this being that the partial transpose of a fermionic Gaussian state is no longer Gaussian. To provide a remedy for this state of affairs, in this work, we introduce efficiently computable and rigorous upper and lower bounds to the negativity, making use of techniques of semidefinite programming, building upon the Lagrangian formulation of fermionic linear optics, and exploiting suitable products of Gaussian operators. We discuss examples in quantum many-body theory and hint at applications in the study of topological properties at finite temperature.

  14. HPC on Competitive Cloud Resources

    NASA Astrophysics Data System (ADS)

    Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff

    Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and that computation time varies accordingly. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall time). We also show that the smallest, cheapest (64-bit) instance on the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency for commercial cloud environments should be introduced where strong performance guarantees do not exist. Concepts like average and expected performance and execution time, expected cost to completion, and variance measures, traditionally ignored in the high-performance computing context, should now complement or even substitute the standard definitions of efficiency.

  15. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called the synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time, so users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast as or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior to these two algorithms in accuracy.
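
    The horizon idea at the heart of SVP can be shown along a single ray: a cell is visible only if its elevation angle from the viewer is at least the running maximum angle (the horizon) accumulated over all nearer cells. The sketch below is a minimal single-profile version of that idea under unit cell spacing; it is not the authors' SVP algorithm with its discretized horizon zones.

        import numpy as np

        def visible_along_profile(elev, viewer_height=1.7):
            """Boolean visibility of each cell of a 1D terrain profile, seen from
            the first cell, using a running horizon (maximum elevation angle)."""
            eye = elev[0] + viewer_height
            vis = np.zeros(len(elev), dtype=bool)
            vis[0] = True
            horizon = -np.inf
            for i in range(1, len(elev)):
                tangent = (elev[i] - eye) / i        # tangent of the elevation angle
                vis[i] = tangent >= horizon          # visible if not below the horizon
                horizon = max(horizon, tangent)
            return vis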

  16. Measuring Symmetry in Children With Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane.

    PubMed

    Wu, Jia; Heike, Carrie; Birgfeld, Craig; Evans, Kelly; Maga, Murat; Morrison, Clinton; Saltzman, Babette; Shapiro, Linda; Tse, Raymond

    2016-11-01

      Quantitative measures of facial form to evaluate treatment outcomes for cleft lip (CL) are currently limited. Computer-based analysis of three-dimensional (3D) images provides an opportunity for efficient and objective analysis. The purpose of this study was to define a computer-based standard of identifying the 3D midfacial reference plane of the face in children with unrepaired cleft lip for measurement of facial symmetry.   The 3D images of 50 subjects (35 with unilateral CL, 10 with bilateral CL, five controls) were included in this study.   Five methods of defining a midfacial plane were applied to each image, including two human-based (Direct Placement, Manual Landmark) and three computer-based (Mirror, Deformation, Learning) methods.   Six blinded raters (three cleft surgeons, two craniofacial pediatricians, and one craniofacial researcher) independently ranked and rated the accuracy of the defined planes.   Among computer-based methods, the Deformation method performed significantly better than the others. Although human-based methods performed best, there was no significant difference compared with the Deformation method. The average correlation coefficient among raters was .4; however, it was .7 and .9 when the angular difference between planes was greater than 6° and 8°, respectively.   Raters can agree on the 3D midfacial reference plane in children with unrepaired CL using digital surface mesh. The Deformation method performed best among computer-based methods evaluated and can be considered a useful tool to carry out automated measurements of facial symmetry in children with unrepaired cleft lip.

  17. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  18. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks to develop computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time-domain simulation for online purposes, this thesis presents principles for designing numerical solvers of differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the subtasks for each processor are totally independent and therefore minimum communication time is needed.
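
    The A-stable implicit integration the thesis builds on can be illustrated with the trapezoidal rule, the baseline against which HH4 is compared. The sketch below takes one implicit step for y' = f(t, y) and solves the update with a generic root finder; it is a textbook illustration, not code from HSET-TDS, and the HH4 coefficients themselves are not reproduced.

        import numpy as np
        from scipy.optimize import fsolve

        def trapezoidal_step(f, t, y, dt):
            """One A-stable trapezoidal step: y1 = y + dt/2*(f(t, y) + f(t+dt, y1))."""
            fy = f(t, y)
            residual = lambda y1: y1 - y - 0.5 * dt * (fy + f(t + dt, y1))
            return fsolve(residual, y + dt * fy)   # explicit Euler predictor as the initial guess

        # Example on a stiff scalar test equation y' = -50*y:
        # y_next = trapezoidal_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]), 0.1)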

  19. Changes of glucose utilization by erythrocytes, lactic acid concentration in the serum and blood cells, and haematocrit value during one hour rest after maximal effort in individuals differing in physical efficiency.

    PubMed

    Tomasik, M

    1982-01-01

    Glucose utilization by the erythrocytes, lactic acid concentration in the blood and erythrocytes, and haematocrit value were determined before exercise and during one hour of rest following maximal exercise in 97 individuals of either sex differing in physical efficiency. In these investigations, individuals with strikingly high physical fitness performed maximal work one-third greater than that performed by individuals with medium fitness. In all individuals the serum concentration of lactic acid was still above the resting value after 60 minutes of rest. In contrast, this concentration returned to the normal level in the erythrocytes, but only in individuals with strikingly high efficiency. Glucose utilization by the erythrocytes during the restitution period was highest immediately after the exercise in all studied individuals and, again in the individuals with the highest efficiency, showed a tendency to return more rapidly to resting values. The investigation of very efficient individuals, repeated twice, demonstrated greater utilization of glucose by the erythrocytes at the time of greater maximal exercise. This was associated with greater lactic acid concentration in the serum and erythrocytes throughout the whole one-hour rest period. The observed facts suggest an active participation of erythrocytes in the process of adaptation of the organism to exercise.

  20. Evaluation of Street Sweeping as a Stormwater-Quality-Management Tool in Three Residential Basins in Madison, Wisconsin

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2007-01-01

    Recent technological improvements have increased the ability of street sweepers to remove sediment and other debris from street surfaces; the effect of these technological advancements on stormwater quality is largely unknown. The U.S. Geological Survey, in cooperation with the City of Madison and the Wisconsin Department of Natural Resources, evaluated three street-sweeper technologies from 2002 through 2006. Regenerative-air, vacuum-assist, and mechanical-broom street sweepers were operated on a frequency of once per week (high frequency) in separate residential basins in Madison, Wis., to measure each sweeper's ability to not only reduce street-dirt yield but also improve the quality of stormwater runoff. A second mechanical-broom sweeper operating on a frequency of once per month (low frequency) was also evaluated to measure reductions in street-dirt yield only. A paired-basin study design was used to compare street-dirt and stormwater-quality samples during a calibration (no sweeping) and a treatment period (weekly sweeping). The basis of this paired-basin approach is that the relation between paired street-dirt and stormwater-quality loads for the control and test basins is constant until a major change is made at one of the basins. At that time, a new relation will develop. Changes in either street-dirt and/or stormwater quality as a result of street sweeping could then be quantified by use of statistical tests. Street-dirt samples collected weekly during the calibration period and twice per week during the treatment period, once before and once after sweeping, were dried and separated into seven particle-size fractions ranging from less than 63 micrometers to greater than 2 millimeters. Street-dirt yield evaluation was based on a computed mass per unit length of pounds per curb-mile. An analysis of covariance was used to measure the significance of the effect of street sweeping at the end of the treatment period and to quantify any reduction in street-dirt yield. Both the regenerative-air and vacuum-assist sweepers produced reductions in street-dirt yield at the 5-percent significance level. Street-dirt yield was reduced by an average of 76, 63, and 20 percent in the regenerative-air, vacuum-assist, and high-frequency broom basins, respectively. The low-frequency broom basin showed no significant reductions in street-dirt yield. Sand-size particles (greater than 63 micrometers) recorded the greatest overall reduction. Street-sweeper pickup efficiency was determined by computing the difference between weekly street-dirt yields before and after sweeping. The regenerative-air and vacuum-assist sweepers had similar pickup efficiencies of 25 and 30 percent, respectively. The mechanical-broom sweeper operating at high frequency was considerably less efficient, removing an average of 5 percent of street-dirt yield. The effects of street sweeping on stormwater quality were evaluated by use of statistical tests to compare event mean concentrations and loads computed for individual storms at the control and test basins. Loads were computed by multiplying the event mean concentrations by storm-runoff volumes. Only ammonia-nitrogen for the test basin with the vacuum-assist sweeper showed significant load increases over the control basin, at the 10-percent significance level, of 63 percent. Difficulty in detecting significant changes in constituent stormwater-quality loads could be due, in part, to the large amount of variability in the data. 
Coefficients of variation for the majority of constituent loads were greater than 1, indicating substantial variability. The ability to detect changes in constituent stormwater-quality loads was likely hampered by an inadequate number of samples in the data set. However, sediment transport in the storm-sewer pipe, sediment washing onto the street from other source areas, winter sand application, and sampling challenges were additional sources of variability within each study ba
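
    As a minimal illustration of the two computations described above, the sketch below evaluates sweeper pickup efficiency from before/after street-dirt yields and a constituent storm load from an event mean concentration and a runoff volume; the function names and units are illustrative assumptions, not taken from the report.

        def pickup_efficiency(yield_before, yield_after):
            """Percent of street-dirt yield (e.g., pounds per curb-mile) removed by one sweeper pass."""
            return 100.0 * (yield_before - yield_after) / yield_before

        def constituent_load(event_mean_concentration, runoff_volume):
            """Storm load = event mean concentration times storm-runoff volume (consistent units)."""
            return event_mean_concentration * runoff_volume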

  1. Brain reserve and cognitive reserve protect against cognitive decline over 4.5 years in MS

    PubMed Central

    Rocca, Maria A.; Leavitt, Victoria M.; Dackovic, Jelena; Mesaros, Sarlota; Drulovic, Jelena; DeLuca, John; Filippi, Massimo

    2014-01-01

    Objective: Based on the theories of brain reserve and cognitive reserve, we investigated whether larger maximal lifetime brain growth (MLBG) and/or greater lifetime intellectual enrichment protect against cognitive decline over time. Methods: Forty patients with multiple sclerosis (MS) underwent baseline and 4.5-year follow-up evaluations of cognitive efficiency (Symbol Digit Modalities Test, Paced Auditory Serial Addition Task) and memory (Selective Reminding Test, Spatial Recall Test). Baseline and follow-up MRIs quantified disease progression: percentage brain volume change (cerebral atrophy), percentage change in T2 lesion volume. MLBG (brain reserve) was estimated with intracranial volume; intellectual enrichment (cognitive reserve) was estimated with vocabulary. We performed repeated-measures analyses of covariance to investigate whether larger MLBG and/or greater intellectual enrichment moderate/attenuate cognitive decline over time, controlling for disease progression. Results: Patients with MS declined in cognitive efficiency and memory (p < 0.001). MLBG moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.122), with larger MLBG protecting against decline. MLBG did not moderate memory decline (p = 0.234, ηp² = 0.039). Intellectual enrichment moderated decline in cognitive efficiency (p = 0.031, ηp² = 0.126) and memory (p = 0.037, ηp² = 0.115), with greater intellectual enrichment protecting against decline. MS disease progression was more negatively associated with change in cognitive efficiency and memory among patients with lower vs higher MLBG and intellectual enrichment. Conclusion: We provide longitudinal support for theories of brain reserve and cognitive reserve in MS. Larger MLBG protects against decline in cognitive efficiency, and greater intellectual enrichment protects against decline in cognitive efficiency and memory. Consideration of these protective factors should improve prediction of future cognitive decline in patients with MS. PMID:24748670

  2. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
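
    The predictor-corrector structure of the MacCormack scheme can be sketched for a 1D kinematic overland-flow wave, dh/dt + d(alpha*h**m)/dx = r, with forward spatial differences in the predictor and backward differences in the corrector. This is a minimal stand-alone example, not the DHSVM-MacCormack implementation; the boundary treatment is simplified and the time step is assumed to satisfy the kinematic-wave CFL condition.

        import numpy as np

        def maccormack_kinematic_wave(h, rain, dt, dx, alpha, m=5.0 / 3.0, nsteps=100):
            """Advance flow depth h (1D hillslope, unit-width discharge q = alpha*h**m)."""
            h = h.copy()
            for _ in range(nsteps):
                q = alpha * h**m
                # predictor: forward difference in space
                hp = h.copy()
                hp[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * rain
                hp[-1] = h[-1] - dt / dx * (q[-1] - q[-2]) + dt * rain    # outflow cell
                hp = np.maximum(hp, 0.0)
                qp = alpha * hp**m
                # corrector: backward difference in space, then average with h
                hc = np.empty_like(h)
                hc[1:] = 0.5 * (h[1:] + hp[1:] - dt / dx * (qp[1:] - qp[:-1]) + dt * rain)
                hc[0] = 0.5 * (h[0] + hp[0] - dt / dx * qp[0] + dt * rain)  # no upstream inflow
                h = np.maximum(hc, 0.0)
            return h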

  3. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  4. Numerical simulations: Toward the design of 27.6% efficient four-terminal semi-transparent perovskite/SiC passivated rear contact silicon tandem solar cell

    NASA Astrophysics Data System (ADS)

    Pandey, Rahul; Chaujar, Rishu

    2016-12-01

    In this work, a novel four-terminal perovskite/SiC-based rear contact silicon tandem solar cell device has been proposed and simulated to achieve 27.6% power conversion efficiency (PCE) under single AM1.5 illumination. A 20.9% efficient semitransparent perovskite top subcell has been used for the perovskite/silicon tandem architecture. The tandem structure of perovskite-silicon solar cells is a promising route to efficient solar energy conversion at low cost. In the four-terminal tandem configuration, the cells are connected independently, which avoids the need for current matching between the top and bottom subcells and thus gives greater design flexibility. The simulation analysis shows PCEs of 27.6% and 22.4% with 300 μm and 10 μm thick rear contact Si bottom subcells, respectively. This is a substantial improvement compared with the transparent perovskite solar cell and the c-Si solar cell operated individually. The impact of perovskite layer thickness and of monomolecular, bimolecular, and trimolecular recombination on the performance of the perovskite top subcell has also been evaluated. The reported PCEs of 27.6% and 22.4% are 1.25 times and 1.42 times higher than the experimentally available efficiencies of 22.1% and 15.7% in 300 μm and 10 μm thick stand-alone silicon solar cell devices, respectively. The presence of SiC significantly suppressed interface recombination in the bottom silicon subcell. Detailed, realistic technology computer-aided design (TCAD) analysis has been performed to predict the behaviour of the device.

  5. The impact of overdiagnosis on the selection of efficient lung cancer screening strategies.

    PubMed

    Han, Summer S; Ten Haaf, Kevin; Hazelton, William D; Munshi, Vidit N; Jeon, Jihyoun; Erdogan, Saadet A; Johanson, Colden; McMahon, Pamela M; Meza, Rafael; Kong, Chung Yin; Feuer, Eric J; de Koning, Harry J; Plevritis, Sylvia K

    2017-06-01

    The U.S. Preventive Services Task Force (USPSTF) recently updated their national lung screening guidelines and recommended low-dose computed tomography (LDCT) for lung cancer (LC) screening through age 80. However, the risk of overdiagnosis among older populations is a concern. Using four comparative models from the Cancer Intervention and Surveillance Modeling Network, we evaluate the overdiagnosis of the screening program recommended by USPSTF in the U.S. 1950 birth cohort. We estimate the number of LC deaths averted by screening (D) per overdiagnosed case (O), yielding the ratio D/O, to quantify the trade-off between the harms and benefits of LDCT. We analyze 576 hypothetical screening strategies that vary by age, smoking, and screening frequency and evaluate efficient screening strategies that maximize the D/O ratio and other metrics including D and life-years gained (LYG) per overdiagnosed case. The estimated D/O ratio for the USPSTF screening program is 2.85 (model range: 1.5-4.5) in the 1950 birth cohort, implying LDCT can prevent ∼3 LC deaths per overdiagnosed case. This D/O ratio increases by 22% when the program stops screening at an earlier age 75 instead of 80. Efficiency frontier analysis shows that while the most efficient screening strategies that maximize the mortality reduction (D) irrespective of overdiagnosis screen through age 80, screening strategies that stop at age 75 versus 80 produce greater efficiency in increasing life-years gained per overdiagnosed case. Given the risk of overdiagnosis with LC screening, the stopping age of screening merits further consideration when balancing benefits and harms. © 2017 UICC.

  6. The world state of orbital debris measurements and modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Nicholas L.

    2004-02-01

    For more than 20 years orbital debris research around the world has been striving to obtain a sharper, more comprehensive picture of the near-Earth artificial satellite environment. Whereas significant progress has been achieved through better organized and funded programs and with the assistance of advancing technologies in both space surveillance sensors and computational capabilities, the potential of measurements and modeling of orbital debris has yet to be realized. Greater emphasis on a systems-level approach to the characterization and projection of the orbital debris environment would prove beneficial. On-going space surveillance activities, primarily from terrestrial-based facilities, are narrowing the uncertainties of the orbital debris population for objects greater than 2 mm in LEO and offer a better understanding of the GEO regime down to 10 cm diameter objects. In situ data collected in LEO is limited to a narrow range of altitudes and should be employed with great care. Orbital debris modeling efforts should place high priority on improving model fidelity, on clearly and completely delineating assumptions and simplifications, and on more thorough sensitivity studies. Most importantly, however, greater communications and cooperation between the measurements and modeling communities are essential for the efficient advancement of the field. The advent of the Inter-Agency Space Debris Coordination Committee (IADC) in 1993 has facilitated this exchange of data and modeling techniques. A joint goal of these communities should be the identification of new sources of orbital debris.

  7. High Energy, Single-Mode, All-Solid-State and Tunable UV Laser Transmitter

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.; Singh, Upendra N.; Hovis, FLoyd

    2007-01-01

    A high energy, single mode, all-solid-state Nd:YAG laser, primarily for pumping a UV converter, has been developed. Greater than 1 J/pulse at 50 Hz PRF and pulse widths around 22 ns have been demonstrated. Higher energy and greater efficiency may be possible, and the required refinements are known and practical to implement. A technology demonstration of highly efficient, high-pulse-energy, single-mode UV wavelength generation using a flash-lamp-pumped laser has been achieved, with greater than 90% pump depletion observed: 190 mJ extra-cavity SFG with IR-to-UV efficiency > 21% (> 27% for 1 mJ seed), and 160 mJ intra-cavity SFG with IR-to-UV efficiency up to 24%; fluence was < 1 J/sq cm for most beams. The pump beam quality of the Nd:YAG pump laser is being refined to match or exceed the above UV converter results. Currently the Nd:YAG pump laser development is a technology demonstration; the system can be engineered for compact packaging.

  8. S-1 project. Volume I. Architecture. 1979 annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

    The US Navy is one of the world's largest users of digital computing equipment having a procurement cost of at least $50,000, and is the single largest such computer customer in the Department of Defense. Its projected acquisition plan for embedded computer systems during the first half of the 80s contemplates the installation of over 10,000 such systems at an estimated cost of several billions of dollars. This expenditure, though large, is dwarfed by the 85 billion dollars which DOD is projected to spend during the next half-decade on computer software, the near-majority of which will be spent by the Navy; the life-cycle costs of the 700,000+ lines of software for a single large Navy weapons systems application (e.g., AEGIS) have been conservatively estimated at most of a billion dollars. The S-1 Project is dedicated to realizing potentially large improvements in the efficiency with which such very large sums may be spent, so that greater military effectiveness may be secured earlier, and with smaller expenditures. The fundamental objectives of the S-1 Project's work are first to enable the Navy to quickly, reliably and inexpensively evaluate at any time what is available from the state-of-the-art in digital processing systems and what the relevance of such systems may be to Navy data processing applications; and second to provide reference prototype systems to support possible competitive procurement action leading to deployment of such systems.

  9. 76 FR 1410 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...

  10. Enhancing battery efficiency for pervasive health-monitoring systems based on electronic textiles.

    PubMed

    Zheng, Nenggan; Wu, Zhaohui; Lin, Man; Yang, Laurence Tianruo

    2010-03-01

    Electronic textiles are regarded as one of the most important computation platforms for future computer-assisted health-monitoring applications. In these novel systems, multiple batteries are used in order to prolong operational lifetime, which is a significant metric of system usability. However, due to the nonlinear features of batteries, computing systems with multiple batteries cannot achieve the same battery efficiency as those powered by a monolithic battery of equal capacity. In this paper, we propose an algorithm aiming to maximize battery efficiency globally for computer-assisted health-care systems with multiple batteries. Based on an accurate analytical battery model, the concept of weighted battery fatigue degree is introduced and a novel battery-scheduling algorithm called predicted weighted fatigue degree least first (PWFDLF) is developed. We also discuss two alternatives considered while developing PWFDLF: a weighted round-robin (WRR) policy and a greedy algorithm that achieves the highest local battery efficiency, which reduces to the sequential discharging policy. Evaluation results show that a considerable improvement in battery efficiency can be obtained by PWFDLF under various battery configurations and current profiles compared to conventional sequential and WRR discharging policies.
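
    The abstract does not give the fatigue-degree formula, so the sketch below only shows the general shape of such a scheduler: at each scheduling interval, discharge the battery whose predicted weighted fatigue degree is lowest. The fatigue metric used here (recent drain relative to remaining capacity, scaled by a per-battery weight) is a hypothetical placeholder, not the published PWFDLF model.

        import numpy as np

        def pick_battery(remaining_capacity, recent_drain, weights):
            """Return the index of the battery to discharge next under a
            hypothetical 'least predicted weighted fatigue degree' rule."""
            fatigue = weights * recent_drain / np.maximum(remaining_capacity, 1e-9)
            return int(np.argmin(fatigue))          # least-fatigued battery first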

  11. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
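
    Response surface methodology of the kind described here typically fits a full second-order polynomial to the central-composite design runs and then optimizes the fitted surface. The sketch below performs such a quadratic fit by least squares and returns a predictor that can be maximized over candidate injector settings; the variable names are illustrative, and the CFD-derived design points themselves are not reproduced.

        import itertools
        import numpy as np

        def fit_quadratic_rsm(X, y):
            """Fit y ~ b0 + sum(b_i*x_i) + sum(b_ii*x_i^2) + sum(b_ij*x_i*x_j)
            to design points X (n_runs x n_factors) and responses y."""
            def features(A):
                A = np.atleast_2d(A)
                cols = [np.ones(len(A))]
                cols += [A[:, i] for i in range(A.shape[1])]
                cols += [A[:, i]**2 for i in range(A.shape[1])]
                cols += [A[:, i] * A[:, j] for i, j in itertools.combinations(range(A.shape[1]), 2)]
                return np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
            return lambda Xnew: features(Xnew) @ beta

        # model = fit_quadratic_rsm(design_points, mixing_efficiency)
        # best = candidates[np.argmax(model(candidates))]   # highest predicted mixing efficiency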

  12. Advanced computer architecture for large-scale real-time applications.

    DOT National Transportation Integrated Search

    1973-04-01

    Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...

  13. Computer grading of examinations

    NASA Technical Reports Server (NTRS)

    Frigerio, N. A.

    1969-01-01

    A method, using IBM cards and computer processing, automates examination grading and recording and permits use of computational problems. The student generates his own answers, and the instructor has much greater freedom in writing questions than is possible with multiple choice examinations.

  14. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
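
    The entropic term is often evaluated as an exponential average of the fluctuation of the gas-phase protein-ligand interaction energy sampled along the MD trajectory, -T*dS = kT*ln<exp(dE_int/kT)>, where dE_int is the deviation from the mean interaction energy; that exact expression is stated here as an assumption about the interaction entropy approach rather than quoted from the abstract.

        import numpy as np

        KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

        def interaction_entropy(e_int, temperature=300.0):
            """-T*dS (kcal/mol) from a time series of protein-ligand
            interaction energies e_int (kcal/mol) sampled during MD."""
            kt = KB * temperature
            de = np.asarray(e_int) - np.mean(e_int)        # fluctuation about the mean
            return kt * np.log(np.mean(np.exp(de / kt)))   # exponential average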

  15. [Economic efficiency of computer monitoring of health].

    PubMed

    Il'icheva, N P; Stazhadze, L L

    2001-01-01

    Presents a method of computer-based health monitoring that relies on modern information technologies in public health. The method helps an outpatient clinic organize preventive activities at a high level and substantially reduces time and money losses. The efficiency of such preventive measures and the growing number of computer and Internet users suggest that these methods are promising and that further studies in this field are needed.

  16. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  17. Computing Interactions Of Free-Space Radiation With Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.

    1995-01-01

    High Charge and Energy Transport (HZETRN) is a computationally efficient, user-friendly software package addressing the problem of transport of, and shielding against, radiation in free space. It is designed as a "black box" for design engineers who are not concerned with the physics of the underlying atomic and nuclear radiation processes in the free-space environment, but are primarily interested in obtaining fast and accurate dosimetric information for the design and construction of modules and devices for use in free space. Computational efficiency is achieved by a unique algorithm based on a deterministic approach to the solution of the Boltzmann equation rather than the computationally intensive statistical Monte Carlo method. Written in FORTRAN.

  18. Efficiency and rumen responses in younger and older Holstein heifers limit-fed diets of differing energy density.

    PubMed

    Zanton, G I; Heinrichs, A J

    2016-04-01

    The objective of this study was to evaluate the effects of limit feeding diets of different predicted energy density on the efficiency of utilization of feed and nitrogen and rumen responses in younger and older Holstein heifers. Eight rumen-cannulated Holstein heifers (4 heifers beginning at 257 ± 7 d, hereafter "young," and 4 heifers beginning at 610 ± 16 d, hereafter "old") were limit-fed high [HED; 2.64 Mcal/kg of dry matter (DM), 15.31% crude protein (CP)] or low (LED; 2.42 Mcal/kg of DM, 14.15% CP) energy density diets according to a 4-period, split-plot Latin square design with 28-d periods. Diets were limit-fed to provide isonitrogenous and isoenergetic intake on a rumen empty body weight (BW) basis at a level predicted to support approximately 800 g/d of average daily gain. During the last 7d of each period, rumen contents were subsampled over a 24-h period, rumen contents were completely evacuated, and total collection of feces and urine was made over 4d. Intakes of DM and water were greater for heifers fed LED, although, by design, calculated intake of metabolizable energy did not differ between age groups or diets when expressed relative to rumen empty BW. Rumen pH was lower, ammonia (NH3-N) concentration tended to be higher, and volatile fatty acids (VFA) concentration was not different for HED compared with LED and was unaffected by age group. Rumen content mass was greater for heifers fed LED and for old heifers, so when expressing rumen fermentation responses corrected for this difference in pool size, NH3-N pool size was not different between diets and total moles of VFA in the rumen were greater for heifers fed LED, whereas these pool sizes were greater for old heifers. Total-tract digestibility of potentially digestible neutral detergent fiber (NDF) was greater in heifers fed LED and for young heifers, whereas the fractional rate of ruminal passage and digestion of NDF were both greater in heifers fed LED. Digestibility of N was greater for heifers fed HED, but was unaffected by age group, whereas the efficiency of N retention was greater for heifers fed HED and for young heifers. Manure output was reduced in heifers fed HED, but the effect was largest in old heifers. Results confirm previous studies in which young heifers utilize N more efficiently than old heifers, primarily through greater efficiency of postabsorptive metabolism. Results also support the concept of limit feeding HED diets as a potential means to reduce manure excretion and increase nitrogen efficiency. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  19. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has motivated research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate its performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  20. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337

  1. Accelerating artificial intelligence with reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing their computationally intense portions into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, offering many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  2. Evaluation of the rapid and slow maxillary expansion using cone-beam computed tomography: a randomized clinical trial

    PubMed Central

    Pereira, Juliana da S.; Jacob, Helder B.; Locks, Arno; Brunetto, Mauricio; Ribeiro, Gerson L. U.

    2017-01-01

    ABSTRACT OBJECTIVE: The aim of this randomized clinical trial was to evaluate the dental, dentoalveolar, and skeletal changes occurring right after the rapid maxillary expansion (RME) and slow maxillary expansion (SME) treatment using Haas-type expander. METHODS: All subjects performed cone-beam computed tomography (CBCT) before installation of expanders (T1) and right after screw stabilization (T2). Patients who did not follow the research parameters were excluded. The final sample resulted in 21 patients in RME group (mean age of 8.43 years) and 16 patients in SME group (mean age of 8.70 years). Based on the skewness and kurtosis statistics, the variables were judged to be normally distributed and paired t-test and student t-test were performed at significance level of 5%. RESULTS: Intermolar angle changed significantly due to treatment and RME showed greater buccal tipping than SME. RME showed significant changes in other four measurements due to treatment: maxilla moved forward and mandible showed backward rotation and, at transversal level both skeletal and dentoalveolar showed significant changes due to maxillary expansion. SME showed significant dentoalveolar changes due to maxillary expansion. CONCLUSIONS: Only intermolar angle showed significant difference between the two modalities of maxillary expansion with greater buccal tipping for RME. Also, RME produced skeletal maxillary expansion and SME did not. Both maxillary expansion modalities were efficient to promote transversal gain at dentoalveolar level. Sagittal and vertical measurements did not show differences between groups, but RME promoted a forward movement of the maxilla and backward rotation of the mandible. PMID:28658357

  3. Numerical investigation of the effect of number and shape of inlet of cyclone and particle size on particle separation

    NASA Astrophysics Data System (ADS)

    Khazaee, Iman

    2017-06-01

    Cyclones are one of the most common devices for removing particles from a gas stream and act as a filter. The mode of separation is that the inertial force exerted on the solid particles in the cyclone is several times greater than that exerted on the gas phase, so the particles are guided from the sides of the cyclone body downward and pass from the upper parts to the bottom chamber, while the gas phase is much less affected. Most attention has been focused on finding new methods to improve performance parameters, and some recent studies have sought to improve equipment performance by evaluating geometric effects. In this work, the effect of cyclone geometry was studied through the creation of symmetrical double and quad inlets, and the influence of the inlet cross-section shape and particle size on separation efficiency was also examined. To assess the accuracy of the modeling, the selected model was compared with the model of Kim and Lee, and the results were in acceptable agreement. The collection efficiency of the double inlet cyclone was found to be 20-25% greater than that of the single inlet cyclone, and the collection efficiency of the quad inlet cyclone was found to be 40-45% greater, for the same inlet size. Also, the collection efficiency of the rectangular inlet was found to be 4-6% greater than that of the elliptical inlet, and the collection efficiency of the elliptical inlet was found to be 30-35% greater than that of the circular inlet.

  4. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
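
    For reference, path-based inbreeding computation rests on Wright's path-counting formula: every path that joins the two parents through a common ancestor A contributes (1/2)**(n1 + n2 + 1) * (1 + F_A), where n1 and n2 are the numbers of generations from each parent up to A and F_A is A's own inbreeding coefficient. The sketch below evaluates the formula once the paths are known; the path identification step, which the proposed encoding is designed to accelerate, is not shown.

        def inbreeding_from_paths(paths):
            """Wright's path-counting formula; paths is an iterable of
            (n1, n2, F_A) tuples, one per path through a common ancestor."""
            return sum(0.5**(n1 + n2 + 1) * (1.0 + f_a) for n1, n2, f_a in paths)

        # Parents who are half sibs sharing one non-inbred grandparent:
        # inbreeding_from_paths([(1, 1, 0.0)])  ->  0.125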

  5. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
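
    As a generic illustration of the sampler-based posterior estimation that BCM automates (BCM itself is a compiled toolkit with its own eleven algorithms, none of which are reproduced here), the sketch below runs a plain random-walk Metropolis sampler over a user-supplied log-posterior density.

        import numpy as np

        def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
            """Random-walk Metropolis sampling from an unnormalized log posterior."""
            rng = np.random.default_rng(seed)
            x = np.atleast_1d(np.asarray(x0, dtype=float))
            lp = log_post(x)
            samples = []
            for _ in range(n_samples):
                prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:          # accept/reject
                    x, lp = prop, lp_prop
                samples.append(x.copy())
            return np.array(samples)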

  6. Efficient computation of kinship and identity coefficients on large pedigrees.

    PubMed

    Cheng, En; Elliott, Brendan; Ozsoyoglu, Z Meral

    2009-06-01

    With the rapidly expanding field of medical genetics and genetic counseling, genealogy information is becoming increasingly abundant. An important computation on pedigree data is the calculation of identity coefficients, which provide a complete description of the degree of relatedness of a pair of individuals. The areas of application of identity coefficients are numerous and diverse, from genetic counseling to disease tracking, and thus, the computation of identity coefficients merits special attention. However, the computation of identity coefficients is not done directly, but rather as the final step after computing a set of generalized kinship coefficients. In this paper, we first propose a novel Path-Counting Formula for calculating generalized kinship coefficients, which is motivated by Wright's path-counting method for computing inbreeding coefficient. We then present an efficient and scalable scheme for calculating generalized kinship coefficients on large pedigrees using NodeCodes, a special encoding scheme for expediting the evaluation of queries on pedigree graph structures. Furthermore, we propose an improved scheme using Family NodeCodes for the computation of generalized kinship coefficients, which is motivated by the significant improvement of using Family NodeCodes for inbreeding coefficient over the use of NodeCodes. We also perform experiments for evaluating the efficiency of our method, and compare it with the performance of the traditional recursive algorithm for three individuals. Experimental results demonstrate that the resulting scheme is more scalable and efficient than the traditional recursive methods for computing generalized kinship coefficients.
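
    The traditional recursive algorithm referred to above can be written compactly with memoization: the kinship of an individual with itself is 0.5*(1 + kinship of its parents), and the kinship of two distinct individuals recurses through the parents of the one that cannot be an ancestor of the other. The sketch below assumes identifiers are assigned so that parents always have smaller ids than their offspring, which is a simplifying assumption; an individual's inbreeding coefficient is then the kinship of its parents.

        from functools import lru_cache

        def make_kinship(parents):
            """parents maps an individual id to (father, mother), with None for unknown founders."""
            @lru_cache(maxsize=None)
            def phi(a, b):
                if a is None or b is None:
                    return 0.0
                if a == b:
                    f, m = parents[a]
                    return 0.5 * (1.0 + phi(f, m))
                if a < b:                     # recurse through the individual with the larger id,
                    a, b = b, a               # which (by the id convention) cannot be an ancestor of the other
                f, m = parents[a]
                return 0.5 * (phi(f, b) + phi(m, b))
            return phi

        # phi = make_kinship(pedigree); inbreeding coefficient of individual x: phi(*pedigree[x])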

  7. Use of diabetes data management software reports by health care providers, patients with diabetes, and caregivers improves accuracy and efficiency of data analysis and interpretation compared with traditional logbook data: first results of the Accu-Chek Connect Reports Utility and Efficiency Study (ACCRUES).

    PubMed

    Hinnen, Deborah A; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G; Wagner, Robin

    2015-03-01

    We assessed users' proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs; treated with continuous subcutaneous insulin infusion [CSII] or multiple daily insulin injection [MDI] therapy), patient caregivers (CGVs), and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred using the software reports versus logbook data. Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks. © 2014 Diabetes Technology Society.

  8. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  9. High Efficiency Ka-Band Solid State Power Amplifier Waveguide Power Combiner

    NASA Technical Reports Server (NTRS)

    Wintucky, Edwin G.; Simons, Rainee N.; Chevalier, Christine T.; Freeman, Jon C.

    2010-01-01

    A novel Ka-band high efficiency asymmetric waveguide four-port combiner for coherent combining of two Monolithic Microwave Integrated Circuit (MMIC) Solid State Power Amplifiers (SSPAs) having unequal outputs has been successfully designed, fabricated and characterized over the NASA deep space frequency band from 31.8 to 32.3 GHz. The measured combiner efficiency is greater than 90 percent, the return loss is greater than 18 dB, and the input port isolation is greater than 22 dB. The manufactured combiner was designed for an input power ratio of 2:1 but can be custom designed for any arbitrary power ratio. Applications considered are NASA's space communications systems needing 6 to 10 W of radio frequency (RF) power. This Technical Memorandum (TM) is an expanded version of the article recently published in Institute of Engineering and Technology (IET) Electronics Letters.

  10. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
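
    The dose-accumulation step described above (warping each fraction's delivered dose into the planning frame with the computed deformation and summing) can be sketched as follows. The array shapes, the displacement-field convention, and the function names are illustrative assumptions, not scuda's actual interface.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(planning_shape, daily_doses, displacement_fields):
    """Sum daily dose grids in the planning-CT frame.

    daily_doses: list of 3D arrays, one computed dose grid per fraction (daily frame).
    displacement_fields: list of arrays of shape (3, *planning_shape) giving, for each
        planning-CT voxel, its (z, y, x) offset into that fraction's daily frame
        (an assumed convention for this sketch).
    """
    total = np.zeros(planning_shape, dtype=np.float64)
    grid = np.indices(planning_shape).astype(np.float64)    # voxel coordinates, shape (3, Z, Y, X)
    for dose, disp in zip(daily_doses, displacement_fields):
        coords = grid + disp                                 # mapped position in the daily frame
        # trilinear interpolation of the daily dose at the mapped coordinates
        total += map_coordinates(dose, coords, order=1, mode='nearest')
    return total
```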

  11. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.

  12. Analog synthetic biology.

    PubMed

    Sarpeshkar, R

    2014-03-28

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog-digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA-protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations.
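
    As a rough illustration of the analogy invoked here (standard device physics and thermodynamics, not equations taken from the paper), both the subthreshold transistor current and an elementary reaction rate are Boltzmann exponentials of an energy barrier, so the associated potentials are logarithmic in the flow variables (κ is the subthreshold slope factor, U_T the thermal voltage):

```latex
\[
I_{\mathrm{DS}} \;\approx\; I_{0}\, e^{\kappa V_{\mathrm{GS}}/U_{T}},
\qquad U_{T} = \frac{k_{B}T}{q},
\qquad
r \;\approx\; k_{0}\, e^{-\Delta G^{\ddagger}/(k_{B}T)},
\]
\[
\text{so}\quad
V_{\mathrm{GS}} \;\propto\; U_{T}\,\ln\frac{I_{\mathrm{DS}}}{I_{0}}
\qquad\text{and}\qquad
\mu \;=\; \mu^{0} + k_{B}T\,\ln\frac{c}{c^{0}}.
\]
```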

  13. Analog synthetic biology

    PubMed Central

    Sarpeshkar, R.

    2014-01-01

    We analyse the pros and cons of analog versus digital computation in living cells. Our analysis is based on fundamental laws of noise in gene and protein expression, which set limits on the energy, time, space, molecular count and part-count resources needed to compute at a given level of precision. We conclude that analog computation is significantly more efficient in its use of resources than deterministic digital computation even at relatively high levels of precision in the cell. Based on this analysis, we conclude that synthetic biology must use analog, collective analog, probabilistic and hybrid analog–digital computational approaches; otherwise, even relatively simple synthetic computations in cells such as addition will exceed energy and molecular-count budgets. We present schematics for efficiently representing analog DNA–protein computation in cells. Analog electronic flow in subthreshold transistors and analog molecular flux in chemical reactions obey Boltzmann exponential laws of thermodynamics and are described by astoundingly similar logarithmic electrochemical potentials. Therefore, cytomorphic circuits can help to map circuit designs between electronic and biochemical domains. We review recent work that uses positive-feedback linearization circuits to architect wide-dynamic-range logarithmic analog computation in Escherichia coli using three transcription factors, nearly two orders of magnitude more efficient in parts than prior digital implementations. PMID:24567476

  14. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
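
    The "analog summing and thresholding" primitive attributed above to magneto-metallic spin-torque switches can be written down as a purely behavioral model; device dynamics and stochastic switching are deliberately omitted, and all numbers are illustrative.

```python
import numpy as np

def spin_neuron_layer(inputs, weights, thresholds):
    """Behavioral model of the summing-and-thresholding primitive of a spin-neuron:
    weighted input currents are summed and the device 'fires' (switches) when the
    net current exceeds its switching threshold."""
    net_current = weights @ inputs                     # analog, current-mode summation
    return (net_current > thresholds).astype(np.int8)  # hard thresholding

# Illustrative numbers only: 3 neurons driven by 4 input currents
x = np.array([0.2, -0.1, 0.7, 0.05])
W = np.random.default_rng(1).normal(size=(3, 4))
print(spin_neuron_layer(x, W, thresholds=np.zeros(3)))
```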

  15. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  16. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible by a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is designed to generalize to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  17. Computation of unsteady transonic aerodynamics with steady state fixed by truncation error injection

    NASA Technical Reports Server (NTRS)

    Fung, K.-Y.; Fu, J.-K.

    1985-01-01

    A novel technique is introduced for efficient computation of unsteady transonic aerodynamics. The steady flow corresponding to the body shape is maintained by truncation error injection while the perturbed unsteady flows corresponding to unsteady body motions are computed. This allows the use of separate grids matched to the characteristic length scales of the steady and unsteady flows and, hence, efficient computation of the unsteady perturbations. A typical unsteady computation of flow over a supercritical airfoil shows that substantial savings in computation time and storage can easily be achieved without loss of solution accuracy. This technique is easy to apply and requires very few changes to existing codes.
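
    A minimal 1-D sketch of the forcing idea (made-up operator, parameters, and steady field; the paper applies it to transonic flow over an airfoil): injecting the residual of the prescribed steady solution as a source term makes that steady state an exact fixed point of the coarse-grid scheme, so only the unsteady perturbation has to be resolved.

```python
import numpy as np

def make_operator(n, dx, c=1.0, nu=0.05):
    """Discrete steady residual L(u) for 1-D advection-diffusion (upwind convection,
    central diffusion). Purely illustrative; boundary values are held fixed."""
    def L(u):
        r = np.zeros_like(u)
        r[1:-1] = (c * (u[1:-1] - u[:-2]) / dx
                   - nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2)
        return r
    return L

n = 101
dx, dt = 1.0 / (n - 1), 2e-4
x = np.linspace(0.0, 1.0, n)
L = make_operator(n, dx)

u_steady = np.sin(np.pi * x)          # stand-in for a converged steady solution
forcing = L(u_steady)                 # injected term: the steady solution's residual

u = u_steady + 0.01 * np.exp(-((x - 0.5) / 0.05) ** 2)   # steady state plus a small disturbance
for _ in range(2000):
    u -= dt * (L(u) - forcing)        # u_steady is an exact fixed point of this update

print(np.max(np.abs(L(u_steady) - forcing)))   # 0.0: the steady flow is preserved exactly
```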

  18. Kinetic Super-Resolution Long-Wave Infrared (KSR LWIR) Thermography Diagnostic for Building Envelopes: Camp Lejeune, NC

    DTIC Science & Technology

    2015-08-18

    of Defense (DoD) to achieve cost-effective energy efficiency at much greater scale than other commercially available techniques of measuring energy loss due to envelope inefficiencies; the approach recommends specific energy conservation measures (ECMs) and quantifies significant potential return on investment.

  19. Adult Literacy Learning and Computer Technology: Features of Effective Computer-Assisted Learning Systems.

    ERIC Educational Resources Information Center

    Fahy, Patrick J.

    Computer-assisted learning (CAL) can be used for adults functioning at any academic or grade level. In adult basic education (ABE), CAL can promote greater learning effectiveness and faster progress, concurrent learning and experience with computer literacy skills, privacy, and motivation. Adults who face barriers (financial, geographic, personal,…

  20. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
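
    A compact sketch of the scheduling idea under stated simplifications: simulated annealing over task-to-processor assignments with a cost combining processor load and inter-processor communication. The cost function here is a stand-in for the paper's nonanalytical cost, which also models network topology and congestion.

```python
import math
import random

def anneal_mapping(tasks, comm, procs, speed, iters=20000, t0=5.0, alpha=0.9995):
    """Map task-graph vertices onto processors by simulated annealing.

    tasks: dict task -> computational load
    comm : dict (task_a, task_b) -> data volume on that task-graph edge
    procs: list of processor ids; speed: dict proc -> relative speed
    """
    names = list(tasks)
    assign = {t: random.choice(procs) for t in names}

    def cost(a):
        load = {p: 0.0 for p in procs}
        for t, p in a.items():
            load[p] += tasks[t] / speed[p]
        cut = sum(v for (u, w), v in comm.items() if a[u] != a[w])
        return max(load.values()) + cut          # makespan plus communication volume

    cur = cost(assign)
    best, best_cost, temp = dict(assign), cur, t0
    for _ in range(iters):
        t = random.choice(names)
        old = assign[t]
        assign[t] = random.choice(procs)          # propose moving one task
        new = cost(assign)
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new                             # accept: always if better, sometimes if worse
            if new < best_cost:
                best, best_cost = dict(assign), new
        else:
            assign[t] = old                       # reject the move
        temp *= alpha                             # cool down
    return best, best_cost
```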

  1. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
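
    The reliability-guided tracking idea described above can be sketched with a priority queue, omitting the layer-wise slicing and dynamic data management; match_subvolume is a hypothetical stand-in for the 3D inverse-compositional Gauss-Newton subvolume registration, not the authors' API.

```python
import heapq
import itertools
import numpy as np

def reliability_guided_track(neighbors, match_subvolume, seed):
    """Sketch of reliability-guided ordering of DVC calculation points.

    neighbors: dict mapping each calculation point to its adjacent points.
    match_subvolume(point, init_guess) -> (displacement, correlation) is assumed.
    Unsolved neighbours of solved points are visited in order of the solved
    point's correlation, so tracking spreads along reliable paths and every new
    point starts from a good initial guess."""
    tie = itertools.count()                      # tie-breaker for the priority queue
    d, c = match_subvolume(seed, np.zeros(3))
    disp = {seed: d}
    heap = [(-c, next(tie), seed, q) for q in neighbors[seed]]
    heapq.heapify(heap)
    while heap:
        _, _, src, p = heapq.heappop(heap)
        if p in disp:
            continue
        d, c = match_subvolume(p, disp[src])     # initial guess from the solved neighbour
        disp[p] = d
        for q in neighbors[p]:
            if q not in disp:
                heapq.heappush(heap, (-c, next(tie), p, q))
    return disp

# Synthetic demo with a dummy matcher on a 4x4 grid of calculation points
pts = [(i, j) for i in range(4) for j in range(4)]
nbrs = {p: [q for q in pts if abs(q[0] - p[0]) + abs(q[1] - p[1]) == 1] for p in pts}

def dummy_match(p, guess):
    return np.asarray(guess) + 0.1, 1.0 - 0.01 * (p[0] + p[1])

print(len(reliability_guided_track(nbrs, dummy_match, seed=(0, 0))))   # 16: all points tracked
```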

  2. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all speeds, chemically reacting, computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the afore-mentioned heat transfer effects impact the thrust performance directly. The result shows that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with those of design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency, and higher thrust performance.

  3. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
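
    A toy operator-splitting sketch in the spirit of the fractional-step strategy: each step first performs stochastic diffusion hops between subvolumes and then runs the stochastic simulation algorithm for reactions within each subvolume. The actual diffusive FSP method propagates the probability distribution of the diffusion operator rather than simulating individual hops; parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_decay(n, k, t_end):
    """Gillespie SSA for a single first-order reaction A -> 0 inside one subvolume."""
    t = 0.0
    while n > 0:
        t += rng.exponential(1.0 / (k * n))   # time to the next reaction event
        if t > t_end:
            break
        n -= 1
    return n

def fractional_step(counts, D, h, k, dt):
    """One split step: stochastic diffusion hops between subvolumes, then SSA reactions."""
    q = D * dt / h ** 2                       # per-molecule hop probability to each neighbour
    new = counts.copy()
    for i, n in enumerate(counts):
        hops = rng.binomial(n, min(2.0 * q, 1.0))
        left = rng.binomial(hops, 0.5)
        new[i] -= hops
        new[max(i - 1, 0)] += left            # reflecting boundaries
        new[min(i + 1, len(counts) - 1)] += hops - left
    for i in range(len(new)):
        new[i] = ssa_decay(int(new[i]), k, dt)
    return new

counts = np.zeros(20, dtype=np.int64)
counts[10] = 1000
for _ in range(100):
    counts = fractional_step(counts, D=1e-2, h=0.1, k=0.5, dt=0.05)
print(counts.sum())                            # molecules remaining after spreading and decay
```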

  4. Distinctive receptor binding properties of the surface glycoprotein of a natural feline leukemia virus isolate with unusual disease spectrum.

    PubMed

    Bolin, Lisa L; Chandhasin, Chandtip; Lobelle-Rich, Patricia A; Albritton, Lorraine M; Levy, Laura S

    2011-05-13

    Feline leukemia virus (FeLV)-945, a member of the FeLV-A subgroup, was previously isolated from a cohort of naturally infected cats. An unusual multicentric lymphoma of non-T-cell origin was observed in natural and experimental infection with FeLV-945. Previous studies implicated the FeLV-945 surface glycoprotein (SU) as a determinant of disease outcome by an as yet unknown mechanism. The present studies demonstrate that FeLV-945 SU confers distinctive properties of binding to the cell surface receptor. Virions bearing the FeLV-945 Env protein were observed to bind the cell surface receptor with significantly increased efficiency, as was soluble FeLV-945 SU protein, as compared to the corresponding virions or soluble protein from a prototype FeLV-A isolate. SU proteins cloned from other cohort isolates exhibited increased binding efficiency comparable to or greater than FeLV-945 SU. Mutational analysis implicated a domain containing variable region B (VRB) to be the major determinant of increased receptor binding, and identified a single residue, valine 186, to be responsible for the effect. The FeLV-945 SU protein binds its cell surface receptor, feTHTR1, with significantly greater efficiency than does that of prototype FeLV-A (FeLV-A/61E) when present on the surface of virus particles or in soluble form, demonstrating a 2-fold difference in the relative dissociation constant. The results implicate a single residue, valine 186, as the major determinant of increased binding affinity. Computational modeling suggests a molecular mechanism by which residue 186 interacts with the receptor-binding domain through residue glutamine 110 to effect increased binding affinity. Through its increased receptor binding affinity, FeLV-945 SU might function in pathogenesis by increasing the rate of virus entry and spread in vivo, or by facilitating entry into a novel target cell with a low receptor density.

  5. Use of Diabetes Data Management Software Reports by Health Care Providers, Patients With Diabetes, and Caregivers Improves Accuracy and Efficiency of Data Analysis and Interpretation Compared With Traditional Logbook Data

    PubMed Central

    Hinnen, Deborah A.; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G.; Wagner, Robin

    2014-01-01

    Background: We assessed users’ proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. Method: This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs) (continuous subcutaneous insulin infusion [CSII] and multiple daily insulin injection [MDI] therapy), patient caregivers (CGVs), and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. Results: All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred using the software reports versus logbook data. Conclusions: Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks. PMID:25367012

  6. Dendritic Properties Control Energy Efficiency of Action Potentials in Cortical Pyramidal Cells

    PubMed Central

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin

    2017-01-01

    Neural computation is performed by transforming input signals into sequences of action potentials (APs), which is metabolically expensive and limited by the energy available to the brain. The metabolic efficiency of single AP has important consequences for the computational power of the cell, which is determined by its biophysical properties and morphologies. Here we adopt biophysically-based two-compartment models to investigate how dendrites affect energy efficiency of APs in cortical pyramidal neurons. We measure the Na+ entry during the spike and examine how it is efficiently used for generating AP depolarization. We show that increasing the proportion of dendritic area or coupling conductance between two chambers decreases Na+ entry efficiency of somatic AP. Activating inward Ca2+ current in dendrites results in dendritic spike, which increases AP efficiency. Activating Ca2+-activated outward K+ current in dendrites, however, decreases Na+ entry efficiency. We demonstrate that the active and passive dendrites take effects by altering the overlap between Na+ influx and internal current flowing from soma to dendrite. We explain a fundamental link between dendritic properties and AP efficiency, which is essential to interpret how neural computation consumes metabolic energy and how biophysics and morphologies contribute to such consumption. PMID:28919852

  7. Dendritic Properties Control Energy Efficiency of Action Potentials in Cortical Pyramidal Cells.

    PubMed

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin

    2017-01-01

    Neural computation is performed by transforming input signals into sequences of action potentials (APs), which is metabolically expensive and limited by the energy available to the brain. The metabolic efficiency of single AP has important consequences for the computational power of the cell, which is determined by its biophysical properties and morphologies. Here we adopt biophysically-based two-compartment models to investigate how dendrites affect energy efficiency of APs in cortical pyramidal neurons. We measure the Na+ entry during the spike and examine how it is efficiently used for generating AP depolarization. We show that increasing the proportion of dendritic area or coupling conductance between two chambers decreases Na+ entry efficiency of somatic AP. Activating inward Ca2+ current in dendrites results in dendritic spike, which increases AP efficiency. Activating Ca2+-activated outward K+ current in dendrites, however, decreases Na+ entry efficiency. We demonstrate that the active and passive dendrites take effects by altering the overlap between Na+ influx and internal current flowing from soma to dendrite. We explain a fundamental link between dendritic properties and AP efficiency, which is essential to interpret how neural computation consumes metabolic energy and how biophysics and morphologies contribute to such consumption.

  8. Computer-based learning: interleaving whole and sectional representation of neuroanatomy.

    PubMed

    Pani, John R; Chariker, Julia H; Naaz, Farah

    2013-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). Copyright © 2012 American Association of Anatomists.

  9. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  10. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  11. Positive Wigner functions render classical simulation of quantum computation efficient.

    PubMed

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  12. Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy

    PubMed Central

    Pani, John R.; Chariker, Julia H.; Naaz, Farah

    2015-01-01

    The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously integrate learning of whole and sectional anatomy. A study of computer-based learning of neuroanatomy was conducted to compare a basic transfer paradigm for learning whole and sectional neuroanatomy with a method in which the two forms of representation were interleaved (alternated). For all experimental groups, interactive computer programs supported an approach to instruction called adaptive exploration. Each learning trial consisted of time-limited exploration of neuroanatomy, self-timed testing, and graphical feedback. The primary result of this study was that interleaved learning of whole and sectional neuroanatomy was more efficient than the basic transfer method, without cost to long-term retention or generalization of knowledge to recognizing new images (Visible Human and MRI). PMID:22761001

  13. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  14. A comparison of the postures assumed when using laptop computers and desktop computers.

    PubMed

    Straker, L; Jones, K J; Miller, J

    1997-08-01

    This study evaluated the postural implications of using a laptop computer. Laptop computer screens and keyboards are joined, and are therefore unable to be adjusted separately in terms of screen height and distance, and keyboard height and distance. The posture required for their use is likely to be constrained, as little adjustment can be made for the anthropometric differences of users. In addition to the postural constraints, the study looked at discomfort levels and performance when using laptops as compared with desktops. Statistical analysis showed significantly greater neck flexion and head tilt with laptop use. The other body angles measured (trunk, shoulder, elbow, wrist, and scapula and neck protraction/retraction) showed no statistical differences. The average discomfort experienced after using the laptop for 20 min, although appearing greater than the discomfort experienced after using the desktop, was not significantly greater. When using the laptop, subjects tended to perform better than when using the desktop, though not significantly so. Possible reasons for the results are discussed and implications of the findings outlined.

  15. Energy 101: Energy Efficient Data Centers

    ScienceCinema

    None

    2018-04-16

    Data centers provide mission-critical computing functions vital to the daily operation of top U.S. economic, scientific, and technological organizations. These data centers consume large amounts of energy to run and maintain their computer systems, servers, and associated high-performance components—up to 3% of all U.S. electricity powers data centers. And as more information comes online, data centers will consume even more energy. Data centers can become more energy efficient by incorporating features like power-saving "stand-by" modes, energy monitoring software, and efficient cooling systems instead of energy-intensive air conditioners. These and other efficiency improvements to data centers can produce significant energy savings, reduce the load on the electric grid, and help protect the nation by increasing the reliability of critical computer operations.

  16. Protein sequences clustering of herpes virus by using Tribe Markov clustering (Tribe-MCL)

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Siswantining, T.; Febriyani, N. L.; Novitasari, I. D.; Cahyaningrum, R. D.

    2017-07-01

    The herpes virus is found worldwide, and one of its important characteristics is its ability to cause both acute and chronic infection, which can lead to severe complications. The herpes virion consists of DNA enclosed in protein and wrapped by glycoproteins. In this work, the herpes virus family is classified and analyzed by clustering protein sequences using the Tribe Markov Clustering (Tribe-MCL) algorithm. Tribe-MCL is an efficient clustering method, based on the theory of Markov chains, that classifies protein families from protein sequences using pre-computed sequence similarity information. We implement the Tribe-MCL algorithm using the open-source R environment. We select 24 herpes virus protein sequences obtained from the NCBI database. The dataset consists of three glycoprotein types (B, F, and H), each represented in eight herpes viruses that infect humans. Simulations using different inflation factors (r = 1.5, 2, 3) yield different numbers of clusters: the greater the inflation factor, the greater the number of clusters. Proteins of the same type tend to be grouped together.
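
    The expansion/inflation iteration at the core of Tribe-MCL can be sketched as follows; the toy similarity matrix, convergence test, and cluster extraction are simplified assumptions rather than the authors' exact implementation. Larger inflation factors sharpen the random-walk flow and tend to yield more, smaller clusters, consistent with the trend reported above.

```python
import numpy as np

def mcl(similarity, inflation=2.0, iters=100, tol=1e-6):
    """Basic Markov clustering on a symmetric, non-negative similarity matrix.

    Tribe-MCL applies this expansion/inflation iteration to a matrix of
    pre-computed protein sequence similarities."""
    M = np.asarray(similarity, dtype=float) + np.eye(len(similarity))   # add self-loops
    M /= M.sum(axis=0, keepdims=True)                                   # column-stochastic
    for _ in range(iters):
        expanded = M @ M                        # expansion: take two random-walk steps
        inflated = expanded ** inflation        # inflation: boost strong flows, prune weak ones
        inflated /= inflated.sum(axis=0, keepdims=True)
        converged = np.abs(inflated - M).max() < tol
        M = inflated
        if converged:
            break
    clusters = {}                               # columns attracted to the same rows form a cluster
    for j in range(M.shape[1]):
        key = tuple(np.flatnonzero(M[:, j] > 1e-8))
        clusters.setdefault(key, []).append(j)
    return list(clusters.values())

# Toy similarity matrix for four sequences (arbitrary scores)
S = np.array([[0, 5, 4, 0], [5, 0, 6, 0], [4, 6, 0, 1], [0, 0, 1, 0]], dtype=float)
print(mcl(S, inflation=2.0))
```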

  17. Important ingredients for health adaptive information systems.

    PubMed

    Senathirajah, Yalini; Bakken, Suzanne

    2011-01-01

    Healthcare information systems frequently do not truly meet clinician needs, due to the complexity, variability, and rapid change in medical contexts. Recently the internet world has been transformed by approaches commonly termed 'Web 2.0'. This paper proposes a Web 2.0 model for a healthcare adaptive architecture. The vision includes creating modular, user-composable systems which aim to make all necessary information from multiple internal and external sources available via a platform, for the user to use, arrange, recombine, author, and share at will, using rich interfaces where advisable. Clinicians can create a set of 'widgets' and 'views' which can transform data, reflect their domain knowledge and cater to their needs, using simple drag and drop interfaces without the intervention of programmers. We have built an example system, MedWISE, embodying the user-facing parts of the model. This approach to HIS is expected to have several advantages, including greater suitability to user needs (reflecting clinician rather than programmer concepts and priorities), incorporation of multiple information sources, agile reconfiguration to meet emerging situations and new treatment deployment, capture of user domain expertise and tacit knowledge, efficiencies due to workflow and human-computer interaction improvements, and greater user acceptance.

  18. Finding New Math Identities by Computer

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Recently a number of interesting new mathematical identities have been discovered by means of numerical searches on high performance computers, using some newly discovered algorithms. These include the following: $\pi = \sum_{k=0}^{\infty} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right)$; $\frac{17\pi^4}{360} = \sum_{k=1}^{\infty}\left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k}\right)^{2} k^{-2}$; and $\zeta(3,1,3,1,\ldots,3,1) = \frac{2\pi^{4m}}{(4m+2)!}$, where $m$ is the number of $(3,1)$ pairs and $\zeta(n_1,n_2,\ldots,n_r) = \sum_{k_1 > k_2 > \cdots > k_r \geq 1} \frac{1}{k_1^{n_1} k_2^{n_2} \cdots k_r^{n_r}}$. The first identity is remarkable in that it permits one to compute the n-th binary or hexadecimal digit of $\pi$ directly, without computing any of the previous digits, and without using multiple precision arithmetic. Recently the ten billionth hexadecimal digit of $\pi$ was computed using this formula. The third identity has connections to quantum field theory. (The first and second of these have been formally established; the third is affirmed by numerical evidence only.) The background and results of this work will be described, including an overview of the algorithms and computer techniques used in these studies.
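
    The digit-extraction property of the first identity can be illustrated with a short routine (a standard implementation of the BBP digit-extraction approach, not the authors' code): terms with $k \le d$ are reduced modulo $8k+j$ by modular exponentiation, so hexadecimal digits starting at position $d$ are obtained without computing any earlier digits.

```python
def pi_hex_digits(d, n=6):
    """Return n hexadecimal digits of pi starting at position d+1 after the hexadecimal
    point, via the BBP formula. Double precision limits how many trailing digits are
    reliable, so keep n small."""

    def frac_series(j, d):
        # fractional part of sum_{k>=0} 16^(d-k) / (8k + j)
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1
        term = 16.0 ** (d - k) / (8 * k + j)
        while term > 1e-17:
            s += term
            k += 1
            term = 16.0 ** (d - k) / (8 * k + j)
        return s % 1.0

    x = (4 * frac_series(1, d) - 2 * frac_series(4, d)
         - frac_series(5, d) - frac_series(6, d)) % 1.0
    digits = []
    for _ in range(n):
        x *= 16.0
        digits.append("0123456789abcdef"[int(x)])
        x -= int(x)
    return "".join(digits)

print(pi_hex_digits(0))   # '243f6a' -- pi = 3.243f6a8885a3... in hexadecimal
```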

  19. Smart integrated microsystems: the energy efficiency challenge (Conference Presentation) (Plenary Presentation)

    NASA Astrophysics Data System (ADS)

    Benini, Luca

    2017-06-01

    The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.

  20. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    PubMed Central

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when it is combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158

  1. An adaptive evolutionary algorithm for traveling salesman problem with precedence constraints.

    PubMed

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when it is combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments.
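
    One way to keep every individual feasible across generations, in the spirit of the operators described above (this is an illustrative operator, not the authors' exact one), is to restrict mutation to reinsertion positions that respect all precedence pairs:

```python
import random

def is_feasible(tour, prec):
    """prec: iterable of (a, b) pairs meaning city a must be visited before city b."""
    pos = {city: i for i, city in enumerate(tour)}
    return all(pos[a] < pos[b] for a, b in prec)

def precedence_preserving_mutation(tour, prec, rng=random):
    """Remove one random city and reinsert it at a position chosen uniformly among the
    positions that keep every precedence pair satisfied. Starting from a feasible tour,
    the original position is always available, so the offspring is always feasible."""
    tour = list(tour)
    i = rng.randrange(len(tour))
    city = tour.pop(i)
    feasible_slots = [j for j in range(len(tour) + 1)
                      if is_feasible(tour[:j] + [city] + tour[j:], prec)]
    j = rng.choice(feasible_slots)
    return tour[:j] + [city] + tour[j:]

# Example: city 0 must precede 2, and 1 must precede 3
tour = [0, 1, 2, 3, 4]
prec = [(0, 2), (1, 3)]
print(precedence_preserving_mutation(tour, prec))
```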

  2. Advanced dendritic web growth development and development of single-crystal silicon dendritic ribbon and high-efficiency solar cell program

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.

    1986-01-01

    Efforts to demonstrate that the dendritic web technology is ready for commercial use by the end of 1986 continues. A commercial readiness goal involves improvements to crystal growth furnace throughput to demonstrate an area growth rate of greater than 15 sq cm/min while simultaneously growing 10 meters or more of ribbon under conditions of continuous melt replenishment. Continuous means that the silicon melt is being replenished at the same rate that it is being consumed by ribbon growth so that the melt level remains constant. Efforts continue on computer thermal modeling required to define high speed, low stress, continuous growth configurations; the study of convective effects in the molten silicon and growth furnace cover gas; on furnace component modifications; on web quality assessments; and on experimental growth activities.

  3. Using ballistocardiography to measure cardiac performance: a brief review of its history and future significance.

    PubMed

    Vogt, Emelie; MacQuarrie, David; Neary, John Patrick

    2012-11-01

    Ballistocardiography (BCG) is a non-invasive technology that has been used to record ultra-low-frequency vibrations of the heart, allowing for the measurement of cardiac cycle events including timing and amplitudes of contraction. Recent developments in BCG have made this technology simple to use, as well as time- and cost-efficient in comparison with other more complicated and invasive techniques used to evaluate cardiac performance. Technological advances have been considerably greater since the advent of microprocessors and laptop computers. Along with the history of BCG, this paper reviews the present and future potential benefits of using BCG to measure cardiac cycle events and its application to clinical and applied research. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.

  4. High external quantum efficiency and fill-factor InGaN/GaN heterojunction solar cells grown by NH3-based molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Lang, J. R.; Neufeld, C. J.; Hurni, C. A.; Cruz, S. C.; Matioli, E.; Mishra, U. K.; Speck, J. S.

    2011-03-01

    High external quantum efficiency (EQE) p-i-n heterojunction solar cells grown by NH3-based molecular beam epitaxy are presented. EQE values including optical losses are greater than 50% with fill-factors over 72% when illuminated with a 1 sun AM0 spectrum. Optical absorption measurements in conjunction with EQE measurements indicate an internal quantum efficiency greater than 90% for the InGaN absorbing layer. By adjusting the thickness of the top p-type GaN window contact layer, it is shown that the short-wavelength (<365 nm) quantum efficiency is limited by the minority carrier diffusion length in highly Mg-doped p-GaN.

  5. Interhemispheric Resource Sharing: Decreasing Benefits with Increasing Processing Efficiency

    ERIC Educational Resources Information Center

    Maertens, M.; Pollmann, S.

    2005-01-01

    Visual matches are sometimes faster when stimuli are presented across visual hemifields, compared to within-field matching. Using a cued geometric figure matching task, we investigated the influence of computational complexity vs. processing efficiency on this bilateral distribution advantage (BDA). Computational complexity was manipulated by…

  6. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  7. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  8. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Shade compromises the photosynthetic efficiency of NADP-ME less than that of PEP-CK and NAD-ME C4 grasses.

    PubMed

    Sonawane, Balasaheb V; Sharwood, Robert E; Whitney, Spencer; Ghannoum, Oula

    2018-05-25

    The high energy cost and apparently low plasticity of C4 photosynthesis compared with C3 photosynthesis may limit the productivity and distribution of C4 plants in low light (LL) environments. C4 photosynthesis evolved numerous times, but it remains unclear how different biochemical subtypes perform under LL. We grew eight C4 grasses belonging to three biochemical subtypes [NADP-malic enzyme (NADP-ME), NAD-malic enzyme (NAD-ME), and phosphoenolpyruvate carboxykinase (PEP-CK)] under shade (16% sunlight) or control (full sunlight) conditions and measured their photosynthetic characteristics at both low and high light. We show for the first time that LL (during measurement or growth) compromised the CO2-concentrating mechanism (CCM) to a greater extent in NAD-ME than in PEP-CK or NADP-ME C4 grasses by virtue of a greater increase in carbon isotope discrimination (∆P) and bundle sheath CO2 leakiness (ϕ), and a greater reduction in photosynthetic quantum yield (Φmax). These responses were partly explained by changes in the ratios of phosphoenolpyruvate carboxylase (PEPC)/initial Rubisco activity and dark respiration/photosynthesis (Rd/A). Shade induced a greater photosynthetic acclimation in NAD-ME than in NADP-ME and PEP-CK species due to a greater Rubisco deactivation. Shade also reduced plant dry mass to a greater extent in NAD-ME and PEP-CK relative to NADP-ME grasses. In conclusion, LL compromised the co-ordination of the C4 and C3 cycles and, hence, the efficiency of the CCM to a greater extent in NAD-ME than in PEP-CK species, while CCM efficiency was less impacted by LL in NADP-ME species. Consequently, NADP-ME species are more efficient at LL, which could explain their agronomic and ecological dominance relative to other C4 grasses.

  10. Shade compromises the photosynthetic efficiency of NADP-ME less than that of PEP-CK and NAD-ME C4 grasses

    PubMed Central

    2018-01-01

    Abstract The high energy cost and apparently low plasticity of C4 photosynthesis compared with C3 photosynthesis may limit the productivity and distribution of C4 plants in low light (LL) environments. C4 photosynthesis evolved numerous times, but it remains unclear how different biochemical subtypes perform under LL. We grew eight C4 grasses belonging to three biochemical subtypes [NADP-malic enzyme (NADP-ME), NAD-malic enzyme (NAD-ME), and phosphoenolpyruvate carboxykinase (PEP-CK)] under shade (16% sunlight) or control (full sunlight) conditions and measured their photosynthetic characteristics at both low and high light. We show for the first time that LL (during measurement or growth) compromised the CO2-concentrating mechanism (CCM) to a greater extent in NAD-ME than in PEP-CK or NADP-ME C4 grasses by virtue of a greater increase in carbon isotope discrimination (∆P) and bundle sheath CO2 leakiness (ϕ), and a greater reduction in photosynthetic quantum yield (Φmax). These responses were partly explained by changes in the ratios of phosphoenolpyruvate carboxylase (PEPC)/initial Rubisco activity and dark respiration/photosynthesis (Rd/A). Shade induced a greater photosynthetic acclimation in NAD-ME than in NADP-ME and PEP-CK species due to a greater Rubisco deactivation. Shade also reduced plant dry mass to a greater extent in NAD-ME and PEP-CK relative to NADP-ME grasses. In conclusion, LL compromised the co-ordination of the C4 and C3 cycles and, hence, the efficiency of the CCM to a greater extent in NAD-ME than in PEP-CK species, while CCM efficiency was less impacted by LL in NADP-ME species. Consequently, NADP-ME species are more efficient at LL, which could explain their agronomic and ecological dominance relative to other C4 grasses. PMID:29659931

  11. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from the traditional von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
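
    The abstract describes removing most neural connections and representing the survivors with binary integers. The sketch below illustrates that general idea (magnitude pruning followed by sign binarization of one weight matrix) under assumptions of my own: the keep fraction, the per-layer scale factor, and the layer shapes are illustrative and are not the DANoC training procedure.

```python
import numpy as np

def prune_and_binarize(W, keep_fraction=0.05):
    """Keep only the largest-magnitude weights, then store the survivors as
    {-1, +1} integers plus one scale factor.  Illustrative sketch only."""
    flat = np.abs(W).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    mask = np.abs(W) >= threshold                 # sparse connection mask
    scale = np.abs(W[mask]).mean()                # single scale per layer (assumption)
    W_bin = np.where(mask, np.sign(W), 0).astype(np.int8)
    return W_bin, scale, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_bin, scale, mask = prune_and_binarize(W)
x = rng.normal(size=128)
y_dense = W @ x
y_sparse = scale * (W_bin @ x)                    # approximate forward pass
print(mask.mean(), np.linalg.norm(y_dense - y_sparse) / np.linalg.norm(y_dense))
```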

  12. Association of residual feed intake with abundance of ruminal bacteria and biopolymer hydrolyzing enzyme activities during the peripartal period and early lactation in Holstein dairy cows.

    PubMed

    Elolimy, Ahmed A; Arroyo, José M; Batistel, Fernanda; Iakiviak, Michael A; Loor, Juan J

    2018-01-01

    Residual feed intake (RFI) in dairy cattle, typically calculated at peak lactation, is a measure of feed efficiency independent of milk production level. The objective of this study was to evaluate differences in ruminal bacteria, biopolymer hydrolyzing enzyme activities, and overall performance between the most- and the least-efficient dairy cows during the peripartal period. Twenty multiparous Holstein dairy cows with daily ad libitum access to a total mixed ration from d -10 to d 60 relative to the calving date were used. Cows were classified into most-efficient (i.e. with low RFI, n = 10) and least-efficient (i.e. with high RFI, n = 10) based on a linear regression model involving dry matter intake (DMI), fat-corrected milk (FCM), changes in body weight (BW), and metabolic BW. The most-efficient cows had ~2.6 kg/d lower DMI at wk 4, 6, 7, and 8 compared with the least-efficient cows. In addition, the most-efficient cows had greater relative abundance of the total ruminal bacterial community during the peripartal period. Compared with the least-efficient cows, the most-efficient cows had 4-fold greater relative abundance of Succinivibrio dextrinosolvens at d -10 and d 10 around parturition and tended to have greater abundance of Fibrobacter succinogenes and Megasphaera elsdenii. In contrast, the relative abundance of Butyrivibrio proteoclasticus and Streptococcus bovis was lower and Succinimonas amylolytica and Prevotella bryantii tended to be lower in the most-efficient cows around calving. During the peripartal period, the most-efficient cows had lower enzymatic activities of cellulase, amylase, and protease compared with the least-efficient cows. The results suggest that shifts in ruminal bacteria and digestive enzyme activities during the peripartal period could be, at least in part, the mechanism associated with better feed efficiency in dairy cows.
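
    RFI is defined here as the residual of a linear regression of DMI on FCM, body-weight change, and metabolic BW. A minimal numpy sketch of that calculation on synthetic data follows; the herd size, coefficients, and noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # cows (synthetic example)
fcm  = rng.normal(40, 5, n)              # fat-corrected milk, kg/d
d_bw = rng.normal(0.2, 0.3, n)           # body-weight change, kg/d
mbw  = rng.normal(140, 10, n)            # metabolic BW (BW**0.75)
dmi  = 0.3 * fcm + 2.0 * d_bw + 0.05 * mbw + rng.normal(0, 1.0, n)

# Regress DMI on the energy sinks; the residual is the RFI estimate.
X = np.column_stack([np.ones(n), fcm, d_bw, mbw])
beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
rfi = dmi - X @ beta

most_efficient = np.argsort(rfi)[: n // 2]   # low RFI = eats less than predicted
print("mean RFI (should be ~0):", rfi.mean().round(3))
print("most-efficient cow indices:", most_efficient)
```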

  13. Utilizing fast multipole expansions for efficient and accurate quantum-classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Schwörer, Magnus; Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul

    2015-03-01

    Recently, a novel approach to hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations has been suggested [Schwörer et al., J. Chem. Phys. 138, 244103 (2013)]. Here, the forces acting on the atoms are calculated by grid-based density functional theory (DFT) for a solute molecule and by a polarizable molecular mechanics (PMM) force field for a large solvent environment composed of several 10^3-10^5 molecules as negative gradients of a DFT/PMM hybrid Hamiltonian. The electrostatic interactions are efficiently described by a hierarchical fast multipole method (FMM). Adopting recent progress of this FMM technique [Lorenzen et al., J. Chem. Theory Comput. 10, 3244 (2014)], which particularly entails a strictly linear scaling of the computational effort with the system size, and adapting this revised FMM approach to the computation of the interactions between the DFT and PMM fragments of a simulation system, here, we show how one can further enhance the efficiency and accuracy of such DFT/PMM-MD simulations. The resulting gain of total performance, as measured for alanine dipeptide (DFT) embedded in water (PMM) by the product of the gains in efficiency and accuracy, amounts to about one order of magnitude. We also demonstrate that the jointly parallelized implementation of the DFT and PMM-MD parts of the computation enables the efficient use of high-performance computing systems. The associated software is available online.

  14. Computer-intensive simulation of solid-state NMR experiments using SIMPSON.

    PubMed

    Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas

    2014-09-01

    Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scan, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
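
    The record highlights multiple-parameter fitting driven by limited-memory BFGS. As a hedged illustration of that optimizer in a spectral-fitting role (not the SIMPSON toolbox itself), the sketch below fits two synthetic Lorentzian lines with SciPy's L-BFGS-B; the line shapes, noise level, and starting guess are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "spectrum": two Lorentzian lines standing in for NMR resonances.
freq = np.linspace(-10.0, 10.0, 400)

def lorentzian(f, pos, width, amp):
    return amp * width**2 / ((f - pos) ** 2 + width**2)

true = np.array([-3.0, 0.8, 1.0, 4.0, 1.2, 0.6])     # pos1, w1, a1, pos2, w2, a2
rng = np.random.default_rng(2)
data = (lorentzian(freq, *true[:3]) + lorentzian(freq, *true[3:])
        + 0.01 * rng.normal(size=freq.size))

def misfit(p):
    model = lorentzian(freq, *p[:3]) + lorentzian(freq, *p[3:])
    return np.sum((model - data) ** 2)

guess = np.array([-2.0, 1.0, 0.8, 3.0, 1.0, 0.8])
res = minimize(misfit, guess, method="L-BFGS-B")      # limited-memory BFGS fit
print(res.x.round(2), res.fun)
```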

  15. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  16. Conservative bin-to-bin fractional collisions

    NASA Astrophysics Data System (ADS)

    Martin, Robert

    2016-11-01

    Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence, a consequence of their stochastic nature; they typically rely heavily on high degrees of non-equilibrium and on time averaging to compensate for poor signal-to-noise ratios. In standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise problems for flows with large species density variation, such as those encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency of the adapted SWPM/VHS collisions, with an octree-based conservative phase-space reconstruction, are compared against a standard validated implementation of the DSMC method on a few simple test cases. The SWPM is then further extended to combine the collision and phase-space reconstruction into a single step, which avoids the need to create additional computational particles only to destroy them again during the particle merge. This is particularly helpful when oversampling the collision integral compared to the standard DSMC method. It is found, however, that the more frequent phase-space reconstructions can cause added numerical thermalization at low particle-per-cell counts due to the coarseness of the octree used. The methods are nevertheless expected to be of much greater utility in transient expansion flows and chemical reactions in the future.
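
    The key ingredient is conservative handling of particles with unequal weights. The sketch below shows one generic way to merge a group of weighted particles into two particles that preserve the group's total weight, momentum, and kinetic energy; it is a sketch in the spirit of weighted-particle methods, not the octree-based reconstruction used in the work described above.

```python
import numpy as np

def conservative_merge(weights, velocities, rng):
    """Replace a group of weighted particles by two particles with the same
    total weight, momentum, and kinetic energy.  Generic illustration only."""
    W = weights.sum()
    u = (weights[:, None] * velocities).sum(axis=0) / W            # mean velocity
    e_th = (weights * ((velocities - u) ** 2).sum(axis=1)).sum() / W
    c = np.sqrt(e_th)                                              # thermal speed
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                                         # random direction
    new_w = np.array([W / 2, W / 2])
    new_v = np.vstack([u + c * n, u - c * n])
    return new_w, new_v

rng = np.random.default_rng(3)
w = rng.uniform(0.5, 2.0, 50)
v = rng.normal(0.0, 300.0, size=(50, 3))
w2, v2 = conservative_merge(w, v, rng)

# Check the invariants: total weight, momentum, and kinetic energy.
print(np.isclose(w.sum(), w2.sum()))
print(np.allclose((w[:, None] * v).sum(0), (w2[:, None] * v2).sum(0)))
print(np.isclose((w * (v**2).sum(1)).sum(), (w2 * (v2**2).sum(1)).sum()))
```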

  17. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
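
    The essential structure is a pass of one-dimensional FFTs, an all-to-all redistribution, and a second pass along the new dimension. The single-process numpy sketch below mimics that data movement with an in-memory transpose standing in for the all-to-all exchange; the randomized network scheduling of the patent is not modeled.

```python
import numpy as np

def fft2_by_transpose(a):
    """Compute a 2-D FFT as two passes of 1-D FFTs separated by a transpose.
    On a distributed-memory machine the transpose would be the 'all-to-all'
    exchange between nodes; here it is an in-memory stand-in."""
    a = np.fft.fft(a, axis=1)      # 1-D FFTs along the locally held rows
    a = a.T.copy()                 # "all-to-all": make the other dimension local
    a = np.fft.fft(a, axis=1)      # 1-D FFTs along the redistributed rows
    return a.T                     # restore the original layout

rng = np.random.default_rng(4)
x = rng.normal(size=(64, 32)) + 1j * rng.normal(size=(64, 32))
print(np.allclose(fft2_by_transpose(x), np.fft.fft2(x)))
```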

  18. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  19. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    PubMed Central

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810
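
    The central idea is that the server matches queries without learning the keyword. The sketch below illustrates that hidden-keyword pattern with deterministic HMAC tokens; it is a generic, non-production illustration under my own assumptions and is not the ENSURE protocol (no document encryption, no edge offloading logic).

```python
import hmac, hashlib

# Toy keyword-token search: the client derives a deterministic token per
# keyword with a secret key, so the server can match queries to the index
# without ever seeing the keyword itself.  Illustration only, not secure.
KEY = b"client-secret-key"

def token(keyword: str) -> bytes:
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()

# Client-side index build (the index would be stored on the edge/cloud server).
documents = {1: "edge computing offloads work", 2: "mobile cloud storage", 3: "edge cache"}
index: dict[bytes, list[int]] = {}
for doc_id, text in documents.items():
    for word in set(text.split()):
        index.setdefault(token(word), []).append(doc_id)

# Query: only the token leaves the client; the server never sees "edge".
print(sorted(index.get(token("edge"), [])))   # -> [1, 3]
```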

  20. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    PubMed

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  1. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship matrix was by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be either by modifications of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
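
    For orientation, the sketch below builds a genomic relationship matrix from simulated SNP genotypes using the commonly used VanRaden-style scaling G = ZZ'/(2Σp(1-p)) and inverts a blended version; the scaling, the 5% blending with the identity, and the problem size are assumptions for illustration, and the optimized loop variants compared in the study are replaced here by numpy's BLAS-backed matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(5)
n_animals, n_snps = 500, 5000                 # small stand-in for 30 000 x 40 000
p = rng.uniform(0.05, 0.95, n_snps)           # allele frequencies (simulated)
M = rng.binomial(2, p, size=(n_animals, n_snps)).astype(np.float64)  # 0/1/2 codes

# Centre genotypes and form G = Z Z' / (2 * sum p(1-p))  (VanRaden-style G;
# the exact scaling used in any given evaluation may differ).
Z = M - 2.0 * p
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

# The inverse is needed for single-step evaluations; blending with the identity
# keeps G full rank (a common practical safeguard, shown here as an assumption).
G_blend = 0.95 * G + 0.05 * np.eye(n_animals)
G_inv = np.linalg.inv(G_blend)
print(G.shape, np.allclose(G_blend @ G_inv, np.eye(n_animals), atol=1e-6))
```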

  2. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals which have nonzero structure which is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no diagonal type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
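
    The diagonal-based idea is that each stored diagonal contributes one long vectorized multiply-add, which is exactly what a vector processor pipelines well. The sketch below shows a plain diagonal-storage (DIA-style) matrix-vector product; the paper's choice among four storage types per diagonal and its CPU/storage trade-off weighting are not reproduced.

```python
import numpy as np

def dia_matvec(offsets, diags, x):
    """Compute y = A @ x with A stored by diagonals.
    For offset k >= 0 the stored diagonal d satisfies d[i] = A[i, i+k];
    for k < 0 it satisfies d[i] = A[i-k, i].  Each diagonal is applied with
    one vectorized multiply-add."""
    n = x.size
    y = np.zeros(n)
    for k, d in zip(offsets, diags):
        if k >= 0:
            y[: n - k] += d * x[k:]
        else:
            y[-k:] += d * x[: n + k]
    return y

# Small check against a dense matrix assembled from the same diagonals.
rng = np.random.default_rng(6)
n = 8
offsets = [0, 1, -2]
diags = [rng.normal(size=n - abs(k)) for k in offsets]
A = sum(np.diag(d, k) for d, k in zip(diags, offsets))
x = rng.normal(size=n)
print(np.allclose(dia_matvec(offsets, diags, x), A @ x))
```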

  3. Multiscale Space-Time Computational Methods for Fluid-Structure Interactions

    DTIC Science & Technology

    2015-09-13

    prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use... With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now... play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main

  4. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  5. Computing, Information, and Communications Technology (CICT) Program Overview

    NASA Technical Reports Server (NTRS)

    VanDalsem, William R.

    2003-01-01

    The Computing, Information and Communications Technology (CICT) Program's goal is to enable NASA's Scientific Research, Space Exploration, and Aerospace Technology Missions with greater mission assurance, for less cost, with increased science return through the development and use of advanced computing, information and communication technologies.

  6. Using Business Simulations as Authentic Assessment Tools

    ERIC Educational Resources Information Center

    Neely, Pat; Tucker, Jan

    2012-01-01

    New modalities for assessing student learning exist as a result of advances in computer technology. Conventional measurement practices have been transformed into computer based testing. Although current testing replicates assessment processes used in college classrooms, a greater opportunity exists to use computer technology to create authentic…

  7. The potential benefits of photonics in the computing platform

    NASA Astrophysics Data System (ADS)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets is driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming increasingly important given the greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain, with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment and some possible optical architectures and relevant platform technologies, along with a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  8. Computer-aided design of peptide near infrared fluorescent probe for tumor diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Congying; Gu, Yueqing

    2014-09-01

    Integrin αvβ3 receptors are expressed on activated endothelial cells during neovascularization to maintain tumor growth, so they have become hot research targets in cancer diagnosis. Peptides possess several attractive features compared with proteins and small molecules, such as small size and high structural compatibility with target proteins. Efficient design of high-affinity peptide ligands to integrin αvβ3 receptors has been an important problem. In silico peptide design yields valuable, highly selective peptides while decreasing the time required for drug screening. In this study, we designed a peptide that binds integrin αvβ3 by computer and then synthesized a near-infrared fluorescent probe. The probe was characterized by UV spectroscopy. To investigate its tumor-cell targeting, the probe was labeled with the visible fluorescent dye Rhodamine B (RhB) for microscopy. To evaluate its targeting capability, mice bearing integrin αvβ3-positive tumor xenografts were used. In vitro cellular experiments indicated that the probe has a clear binding affinity to αvβ3-positive tumor cells, and in vivo experiments confirmed its receptor binding specificity. The computationally designed peptide binds integrin αvβ3, and combining this peptide near-infrared fluorescent probe with imaging technology holds considerable promise for clinical tumor diagnosis.

  9. The bioleaching potential of a bacterial consortium.

    PubMed

    Latorre, Mauricio; Cortés, María Paz; Travisany, Dante; Di Genova, Alex; Budinich, Marko; Reyes-Jara, Angélica; Hödar, Christian; González, Mauricio; Parada, Pilar; Bobadilla-Fazzini, Roberto A; Cambiazo, Verónica; Maass, Alejandro

    2016-10-01

    This work presents the molecular foundation of a consortium of five efficient bacterial strains isolated from copper mines and currently used in state-of-the-art industrial-scale biotechnology. The strains Acidithiobacillus thiooxidans Licanantay, Acidiphilium multivorum Yenapatur, Leptospirillum ferriphilum Pañiwe, Acidithiobacillus ferrooxidans Wenelen and Sulfobacillus thermosulfidooxidans Cutipay were selected for genome sequencing based on metal tolerance, oxidation activity and copper bioleaching efficiency. An integrated model of metabolic pathways representing the bioleaching capability of this consortium was generated. Results revealed that greater efficiency in copper recovery may be explained by the higher functional potential of L. ferriphilum Pañiwe and At. thiooxidans Licanantay to oxidize iron and reduced inorganic sulfur compounds. The consortium had a greater capacity to resist copper, arsenic and chloride ions compared with previously described biomining strains. Specialization and particular components in these bacteria provided the consortium with a greater ability to bioleach copper sulfide ores. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Cloud Computing: Short Term Impacts of 1:1 Computing in the Sixth Grade

    ERIC Educational Resources Information Center

    Bebell, Damian; Clarkson, Apryl; Burraston, James

    2014-01-01

    Many parents, educators, and policy makers see great potential for leveraging tools like laptop computers, tablets, and smartphones in the classrooms of the world. Under budget constraints and shared access to equipment for students and teachers, the impacts have been irregular but hint at greater possibilities in 1:1 student computing settings.…

  11. Synthetic biology: insights into biological computation.

    PubMed

    Manzoni, Romilde; Urrios, Arturo; Velazquez-Garcia, Silvia; de Nadal, Eulàlia; Posas, Francesc

    2016-04-18

    Organisms have evolved a broad array of complex signaling mechanisms that allow them to survive in a wide range of environmental conditions. They are able to sense external inputs and produce an output response by computing the information. Synthetic biology attempts to rationally engineer biological systems in order to perform desired functions. Our increasing understanding of biological systems guides this rational design, while the huge background in electronics for building circuits defines the methodology. In this context, biocomputation is the branch of synthetic biology aimed at implementing artificial computational devices using engineered biological motifs as building blocks. Biocomputational devices are defined as biological systems that are able to integrate inputs and return outputs following pre-determined rules. Over the last decade the number of available synthetic engineered devices has increased exponentially; simple and complex circuits have been built in bacteria, yeast and mammalian cells. These devices can manage and store information, take decisions based on past and present inputs, and even convert a transient signal into a sustained response. The field is growing rapidly, and it is becoming easier every day to implement more complex biological functions. This is mainly due to advances in in vitro DNA synthesis, new genome editing tools, novel molecular cloning techniques, continuously growing part libraries, as well as other technological advances. As a result, digital computation can now be engineered and implemented in biological systems. Simple logic gates can be implemented and connected to perform novel desired functions or to better understand and redesign biological processes. Synthetic biological digital circuits could lead to new therapeutic approaches, as well as new and efficient ways to produce complex molecules such as antibiotics, bioplastics or biofuels. Biological computation not only provides possible biomedical and biotechnological applications, but also affords a greater understanding of biological systems.

  12. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  13. Cognitive Load Theory vs. Constructivist Approaches: Which Best Leads to Efficient, Deep Learning?

    ERIC Educational Resources Information Center

    Vogel-Walcutt, J. J.; Gebrim, J. B.; Bowers, C.; Carper, T. M.; Nicholson, D.

    2011-01-01

    Computer-assisted learning, in the form of simulation-based training, is heavily focused upon by the military. Because computer-based learning offers highly portable, reusable, and cost-efficient training options, the military has dedicated significant resources to the investigation of instructional strategies that improve learning efficiency…

  14. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis represents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules. The implementation of the method in the Q-Chem commercial software package is described. Moreover, the method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations.
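
    The recurring building block here is ℓ1 recovery of a sparse quantity from undersampled data. The sketch below solves that problem with plain iterative soft thresholding (ISTA) on a random sensing matrix; the thesis uses more elaborate solvers and physically motivated measurement operators, so the dimensions, sparsity level, and regularization weight below are illustrative assumptions.

```python
import numpy as np

def ista(Phi, y, lam=0.01, n_iter=2000):
    """Iterative soft thresholding for min_x 0.5*||Phi x - y||^2 + lam*||x||_1,
    the l1 problem at the core of compressed sensing.  Basic solver only."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(7)
N, M, K = 256, 64, 5                          # signal length, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(0, 3, K)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)    # random sensing matrix
y = Phi @ x_true                              # undersampled measurements
x_hat = ista(Phi, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small rel. error
```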

  15. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  16. Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

    PubMed Central

    Chalmers, Eric; Luczak, Artur; Gruber, Aaron J.

    2016-01-01

    The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL) to guide “goal-directed” behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals' ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, “forward sweeps” through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort) required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks. PMID:28018203
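
    One concrete way to see how a hierarchical abstraction of space shrinks the search is to plan first on a coarse grid and then restrict the fine-level search to the resulting corridor. The sketch below does exactly that on an obstacle-free grid; it is my own minimal illustration of the search-space-reduction idea, not the paper's MBRL framework (no forward sweeps, no place-cell remapping).

```python
from collections import deque

def bfs(adj, start, goal):
    """Plain breadth-first search returning a path as a list of nodes."""
    prev = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:
            path = [u]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj(u):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

# Fine 16x16 open grid; the coarse level groups 4x4 blocks (the "hierarchy").
N, B = 16, 4
def fine_adj(c):
    x, y = c
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N and 0 <= y + dy < N:
            yield (x + dx, y + dy)

def coarse_adj(c):
    x, y = c
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N // B and 0 <= y + dy < N // B:
            yield (x + dx, y + dy)

start, goal = (0, 0), (15, 15)
corridor = set(bfs(coarse_adj, (start[0] // B, start[1] // B),
                   (goal[0] // B, goal[1] // B)))

# Fine-level search restricted to cells whose coarse block lies on the corridor:
# this is the sense in which the abstraction reduces the search space.
def fine_adj_restricted(c):
    return (v for v in fine_adj(c) if (v[0] // B, v[1] // B) in corridor)

path = bfs(fine_adj_restricted, start, goal)
print(len(corridor), len(path))
```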

  17. Optimal design variable considerations in the use of phase change materials in indirect evaporative cooling

    NASA Astrophysics Data System (ADS)

    Chilakapaty, Ankit Paul

    The demand for sustainable, energy efficient and cost effective heating and cooling solutions is increasing exponentially with the rapid advancement of computation and information technology. Use of latent heat storage materials, also known as phase change materials (PCMs), for load leveling is an innovative solution to data center cooling demands. These materials are commercially available in the form of microcapsules dispersed in water, referred to as microencapsulated phase change slurries, and have a higher heat capacity than water. The composition and physical properties of phase change slurries play a significant role in the energy efficiency of the cooling systems designed with these PCM slurries. The objective of this project is to study the effect of PCM particle size, shape and volumetric concentration on the overall heat transfer potential of cooling systems designed with PCM slurries as the heat transfer fluid (HTF). In this study, a uniform volume heat source model is developed to simulate the heat transfer potential of phase change materials, expressed as the bulk temperature difference in a fully developed flow through a circular duct. Results indicate that the heat transfer potential increases with PCM volumetric concentration, with gradually diminishing returns. Also, spherical PCM particles offer greater heat transfer potential than cylindrical particles. Results of this project will aid in the efficient design of cooling systems based on PCM slurries.

  18. Implicit gas-kinetic unified algorithm based on multi-block docking grid for multi-body reentry flows covering all flow regimes

    NASA Astrophysics Data System (ADS)

    Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu

    2016-12-01

    Based on the previous researches of the Gas-Kinetic Unified Algorithm (GKUA) for flows from highly rarefied free-molecule transition to continuum, a new implicit scheme of cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering various flow regimes. In view of the difficulty in generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and the data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid has been established for the first time and used to solve the reentry flow problems around multi-bodies covering all flow regimes over the whole range of Knudsen numbers from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing the supersonic flows in near-continuum and continuum regimes around a circular cylinder and are carefully compared with each other. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence properties. The flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum flow regimes, and the present computed results are found to be in good agreement with the related DSMC simulations and theoretical analysis solutions, which verifies the good accuracy and reliability of the present method. It is observed that as the spacing between the bodies decreases, the cylindrical throat obstruction increases, the flow field around each body becomes more noticeably asymmetric, and the normal force coefficient grows. In the near-continuum transitional flow regime typical of near-space flight, once the spacing increases to six times the diameter of a single body, the interference effects between the bodies become negligible. Computational practice has confirmed that the present method, based on solving the unified Boltzmann model velocity distribution function equation, can feasibly compute the aerodynamics and reveal the flow mechanisms around complex multi-body vehicles covering all flow regimes from the gas-kinetic point of view.

  19. Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth

    2012-01-01

    The increasing availability of large scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
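
    To make the parameter-sweep workflow concrete, here is a minimal generic sketch: concurrently evaluate a toy objective over a grid of design variables and keep the best case. It is not the DAKOTA or IPS interface; the function simulate_cell, the parameter names, and the score are invented stand-ins for the coupled battery simulation.

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def simulate_cell(electrode_thickness_um: float, porosity: float) -> float:
    """Toy stand-in for a coupled battery simulation: returns an 'energy per
    cost' score so the sweep has something to evaluate (hypothetical model)."""
    energy = electrode_thickness_um * porosity * (1.0 - porosity)
    cost = 1.0 + 0.01 * electrode_thickness_um
    return energy / cost

def run_sweep():
    thicknesses = [40, 60, 80, 100, 120]          # micrometres (illustrative)
    porosities = [0.25, 0.30, 0.35, 0.40]
    cases = list(itertools.product(thicknesses, porosities))
    with ProcessPoolExecutor() as pool:           # concurrent simulation instances
        scores = list(pool.map(simulate_cell, *zip(*cases)))
    best = max(zip(scores, cases))
    print("best score %.3f at (thickness, porosity) = %s" % best)

if __name__ == "__main__":                        # required for process pools
    run_sweep()
```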

  20. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207
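
    For readers unfamiliar with the split-operator approach the algorithm builds on, the sketch below runs the classical version on a 1-D grid: alternate half-steps in the potential with a full kinetic step applied in Fourier space. The quantum algorithm applies the same splitting to an amplitude-encoded state using quantum Fourier transforms; the harmonic potential, grid size, and units (hbar = m = 1) below are illustrative assumptions.

```python
import numpy as np

# Split-operator (Strang) propagation of a 1-D Gaussian wave packet on a grid.
N, L, dt, steps = 512, 40.0, 0.01, 500
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers

V = 0.5 * x**2                                    # harmonic potential (assumption)
psi = np.exp(-0.5 * (x + 5.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

expV = np.exp(-0.5j * dt * V)                     # half potential step
expT = np.exp(-0.5j * dt * k**2)                  # full kinetic step (k-space)
for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

norm = np.sum(np.abs(psi) ** 2) * (L / N)
mean_x = np.sum(x * np.abs(psi) ** 2) * (L / N)
print(round(norm, 6), round(mean_x, 3))           # norm stays 1; packet oscillates
```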

  1. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points). In such cases the size of the data set can be very large, and one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by using the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure allowing us to take into account anisotropy of the data set and avoid degeneracy of the regression model.
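
    The core of such approaches is that a kernel on a Cartesian product grid factors into a Kronecker product, so the large system can be solved through the small factor matrices. The sketch below demonstrates that idea for the linear solve in GP regression (two 1-D factors, squared-exponential kernels, fixed hyperparameters); it is a generic Kronecker-GP illustration, not the specific algorithm or regularization of the paper.

```python
import numpy as np

def rbf(a, b, ell):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(8)
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)   # factor grids
K1, K2 = rbf(x1, x1, 0.2), rbf(x2, x2, 0.3)
sigma2 = 1e-2
Y = rng.normal(size=(x1.size, x2.size))                 # observations on the grid

# Solve (K1 (x) K2 + sigma2*I) alpha = vec(Y) via factor eigendecompositions,
# never forming the full Kronecker-product kernel.
d1, Q1 = np.linalg.eigh(K1)
d2, Q2 = np.linalg.eigh(K2)
T = Q1.T @ Y @ Q2                                       # rotate into eigenbasis
T = T / (np.outer(d1, d2) + sigma2)                     # diagonal solve
alpha = Q1 @ T @ Q2.T                                   # rotate back

# Check against the dense solve (only feasible for this small example).
K = np.kron(K1, K2) + sigma2 * np.eye(x1.size * x2.size)
alpha_dense = np.linalg.solve(K, Y.ravel()).reshape(Y.shape)
print(np.allclose(alpha, alpha_dense, atol=1e-8))
```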

  2. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.

  3. RNAiFold2T: Constraint Programming design of thermo-IRES switches.

    PubMed

    Garcia-Martin, Juan Antonio; Dotu, Ivan; Fernandez-Chamorro, Javier; Lozano, Gloria; Ramajo, Jorge; Martinez-Salas, Encarnacion; Clote, Peter

    2016-06-15

    RNA thermometers (RNATs) are cis-regulatory elements that change secondary structure upon temperature shift. Often involved in the regulation of heat shock, cold shock and virulence genes, RNATs constitute an interesting potential resource in synthetic biology, where engineered RNATs could prove to be useful tools in biosensors and conditional gene regulation. Solving the 2-temperature inverse folding problem is critical for RNAT engineering. Here we introduce RNAiFold2T, the first Constraint Programming (CP) and Large Neighborhood Search (LNS) algorithms to solve this problem. Benchmarking tests of RNAiFold2T against existing inverse folding programs (adaptive walk and genetic algorithm) show that our software generates two orders of magnitude more solutions, thus allowing ample exploration of the space of solutions. Subsequently, solutions can be prioritized by computing various measures, including probability of target structure in the ensemble, melting temperature, etc. Using this strategy, we rationally designed two thermosensor internal ribosome entry site (thermo-IRES) elements, whose normalized cap-independent translation efficiency is approximately 50% greater at 42 °C than at 30 °C, when tested in reticulocyte lysates. Translation efficiency is lower than that of the wild-type IRES element, which on the other hand is fully resistant to temperature shift-up. This appears to be the first purely computational design of functional RNA thermoswitches, and certainly the first purely computational design of functional thermo-IRES elements. RNAiFold2T is publicly available as part of the new release RNAiFold3.0 at https://github.com/clotelab/RNAiFold and http://bioinformatics.bc.edu/clotelab/RNAiFold, the latter of which also provides a web server. The software is written in C++ and uses the OR-Tools CP search engine. clote@bc.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  4. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  5. The Next Generation of Personal Computers.

    ERIC Educational Resources Information Center

    Crecine, John P.

    1986-01-01

    Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…

  6. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
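
    For orientation, the sketch below runs a Levenberg-Marquardt-style iteration in which each damped step is obtained with a Krylov solver (SciPy's LSQR) instead of forming and factoring the normal equations; the paper's additional trick of recycling one Krylov subspace across all damping values is not reproduced, and the exponential-decay toy model, damping grid, and starting point are my own assumptions.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def residual(m, t, d):
    return np.exp(-m[0] * t) * m[1] - d        # toy exponential-decay model

def jacobian(m, t):
    J = np.empty((t.size, 2))
    J[:, 0] = -t * np.exp(-m[0] * t) * m[1]
    J[:, 1] = np.exp(-m[0] * t)
    return J

rng = np.random.default_rng(9)
t = np.linspace(0, 5, 200)
m_true = np.array([0.7, 2.0])
d = np.exp(-m_true[0] * t) * m_true[1] + 0.01 * rng.normal(size=t.size)

m = np.array([0.2, 1.0])                       # starting model
for _ in range(20):
    r = residual(m, t, d)
    J = jacobian(m, t)
    best = None
    for damp in (1e-3, 1e-2, 1e-1, 1.0):       # trial damping parameters
        step = lsqr(J, -r, damp=damp)[0]       # Krylov solve of the damped system
        cost = np.sum(residual(m + step, t, d) ** 2)
        if best is None or cost < best[0]:
            best = (cost, step)
    m = m + best[1]
print(m.round(3))                               # should approach [0.7, 2.0]
```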

  7. Transfer Efficiency of Bacteria and Viruses from Porous and Nonporous Fomites to Fingers under Different Relative Humidity Conditions

    PubMed Central

    Gerba, Charles P.; Tamimi, Akrum H.; Kitajima, Masaaki; Maxwell, Sheri L.; Rose, Joan B.

    2013-01-01

    Fomites can serve as routes of transmission for both enteric and respiratory pathogens. The present study examined the effect of low and high relative humidity on fomite-to-finger transfer efficiency of five model organisms from several common inanimate surfaces (fomites). Nine fomites representing porous and nonporous surfaces of different compositions were studied. Escherichia coli, Staphylococcus aureus, Bacillus thuringiensis, MS2 coliphage, and poliovirus 1 were placed on fomites in 10-μl drops and allowed to dry for 30 min under low (15% to 32%) or high (40% to 65%) relative humidity. Fomite-to-finger transfers were performed using 1.0 kg/cm2 of pressure for 10 s. Transfer efficiencies were greater under high relative humidity for both porous and nonporous surfaces. Most organisms on average had greater transfer efficiencies under high relative humidity than under low relative humidity. Nonporous surfaces had a greater transfer efficiency (up to 57%) than porous surfaces (<6.8%) under low relative humidity, as well as under high relative humidity (nonporous, up to 79.5%; porous, <13.4%). Transfer efficiency also varied with fomite material and organism type. The data generated can be used in quantitative microbial risk assessment models to assess the risk of infection from fomite-transmitted human pathogens and the relative levels of exposure to different types of fomites and microorganisms. PMID:23851098

  8. Micron-pore-sized metallic filter tube membranes for filtration of particulates and water purification.

    PubMed

    Phelps, T J; Palumbo, A V; Bischoff, B L; Miller, C J; Fagan, L A; McNeilly, M S; Judkins, R R

    2008-07-01

    Robust filtering techniques capable of efficiently removing particulates and biological agents from water or air suffer from plugging, poor rejuvenation, low permeance, and high backpressure. Operational characteristics of pressure-driven separations are controlled in part by the membrane pore size, the charge of the particulates, the transmembrane pressure, and the requirement for sufficient water flux to overcome fouling. With long-term use, filters decline in permeance due to filter-cake plugging of pores, fouling, or filter deterioration. Though metallic filter tube development at ORNL has focused almost exclusively on gas separations, a small study examined the applicability of these membranes for tangential filtering of aqueous suspensions of bacterial-sized particles. A mixture of fluorescent polystyrene microspheres ranging in size from 0.5 to 6 μm in diameter simulated microorganisms in the filtration studies. Compared to a commercial filter, the ORNL 0.6 μm filter averaged approximately 10-fold greater filtration efficiency for the small particles and several-fold greater permeance after considerable use, and it returned to approximately 85% of the initial flow upon backflushing versus 30% for the commercial filter. After filtering several liters of the particle-containing suspension, the ORNL composite filter still exhibited greater than 50% of its initial permeance while the commercial filter had decreased to less than 20%. Considering its greater filtration efficiency, greater permeance per unit mass, greater rejuvenation upon backflushing (up to 3-fold), and likely greater performance with extended use, the ORNL 0.6 μm filters can potentially outperform the commercial filter by factors of 100 to 1,000.

  9. Dual-foci detection in photoacoustic computed tomography with coplanar light illumination and acoustic detection: a phantom study.

    PubMed

    Lin, Xiangwei; Liu, Chengbo; Meng, Jing; Gong, Xiaojing; Lin, Riqiang; Sun, Mingjian; Song, Liang

    2018-05-01

    A dual-foci transducer with coplanar light illumination and acoustic detection was applied for the first time. It overcomes the small directivity angle, low sensitivity, and large data volumes of conventional circular-scanning or array-based photoacoustic computed tomography (PACT). The custom-designed transducer is focused both in the scanning plane, with virtual-point detection, and in the elevation direction for large field-of-view (FOV) cross-sectional imaging. Moreover, a coplanar light illumination and acoustic detection configuration can provide ring-shaped light irradiation with highly efficient acoustic detection, which in principle has better adaptability when imaging samples with irregular surfaces. Phantom experiments showed that our PACT system can achieve high resolution (∼0.5 mm), enhanced signal-to-noise ratio (16-dB improvement), and a more complete structure in a greater FOV with an equal number of sampling points compared with the results from a flat-aperture transducer. This study provides the proof of concept for the fabrication of a sparse array with the dual-foci property and large aperture size for high-quality, low-cost, and high-speed photoacoustic imaging. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  10. FORTRAN Versions of Reformulated HFGMC Codes

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Aboudi, Jacob; Bednarcyk, Brett A.

    2006-01-01

    Several FORTRAN codes have been written to implement the reformulated version of the high-fidelity generalized method of cells (HFGMC). Various aspects of the HFGMC and its predecessors were described in several prior NASA Tech Briefs articles, the most recent being HFGMC Enhancement of MAC/GMC (LEW-17818-1), NASA Tech Briefs, Vol. 30, No. 3 (March 2006), page 34. The HFGMC is a mathematical model of micromechanics for simulating stress and strain responses of fiber/matrix and other composite materials. The HFGMC overcomes a major limitation of a prior version of the GMC by accounting for coupling of shear and normal stresses and thereby affords greater accuracy, albeit at a large computational cost. In the reformulation of the HFGMC, the issue of computational efficiency was addressed: as a result, codes that implement the reformulated HFGMC complete their calculations about 10 times as fast as do those that implement the HFGMC. The present FORTRAN implementations of the reformulated HFGMC were written to satisfy a need for compatibility with other FORTRAN programs used to analyze structures and composite materials. The FORTRAN implementations also afford capabilities, beyond those of the basic HFGMC, for modeling inelasticity, fiber/matrix debonding, and coupled thermal, mechanical, piezo, and electromagnetic effects.

  11. Nonlocal quantum effective actions in Weyl-Flat spacetimes

    NASA Astrophysics Data System (ADS)

    Bautista, Teresa; Benevides, André; Dabholkar, Atish

    2018-06-01

    Virtual massless particles in quantum loops lead to nonlocal effects which can have interesting consequences, for example, for primordial magnetogenesis in cosmology or for computing finite N corrections in holography. We describe how the quantum effective actions summarizing these effects can be computed efficiently for Weyl-flat metrics by integrating the Weyl anomaly or, equivalently, the local renormalization group equation. This method relies only on the local Schwinger-DeWitt expansion of the heat kernel and allows for a re-summation of the anomalous leading large logarithms of the scale factor, log a(x), in situations where the Weyl factor changes by several e-foldings. As an illustration, we obtain the quantum effective action for the Yang-Mills field coupled to massless matter, and the self-interacting massless scalar field. Our action reduces to the nonlocal action obtained using the Barvinsky-Vilkovisky covariant perturbation theory in the regime R² ≪ ∇²R for a typical curvature scale R, but has a greater range of validity, effectively re-summing the covariant perturbation theory to all orders in curvatures. In particular, it is applicable also in the opposite regime R² ≫ ∇²R, which is often of interest in cosmology.

  12. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.

  13. Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces

    PubMed Central

    Sellers, Eric W.; Wang, Xingyu

    2013-01-01

    Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331

  14. Plant metabolic modeling: achieving new insight into metabolism and metabolic engineering.

    PubMed

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-10-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. © 2014 American Society of Plant Biologists. All rights reserved.

  15. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95 percent of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content; dust particle sizes vary from less than 0.2 μm to greater than 3.0 μm. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15-micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration over k space of f(k) can be computed more quickly than the integration of the absorption coefficient k_ν over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
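
    The computational advantage of the exponential-sum (k-distribution) approach can be illustrated with a small sketch (Python/NumPy). The synthetic line spectrum, band limits, and quadrature order below are illustrative assumptions, not the Martian CO2 data: the absorption coefficients across a band are sorted into a smooth cumulative distribution, and band transmission is then computed from a handful of exponential terms instead of an integration over every frequency point.

      import numpy as np

      def k_distribution(k_nu, n_terms=8):
          """Fit an exponential sum T(u) ~ sum_i w_i * exp(-k_i * u) to a band by
          sorting the spectrum k(nu) into its cumulative distribution g(k) and
          sampling it at a few Gauss-Legendre points in g."""
          k_sorted = np.sort(k_nu)
          g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
          nodes, weights = np.polynomial.legendre.leggauss(n_terms)
          g_i = 0.5 * (nodes + 1.0)              # map quadrature points to [0, 1]
          w_i = 0.5 * weights
          k_i = np.interp(g_i, g, k_sorted)      # absorption coefficient at each g_i
          return k_i, w_i

      def transmission_line_by_line(k_nu, u):
          """Reference band transmission: average exp(-k*u) over every frequency."""
          return np.mean(np.exp(-k_nu * u))

      def transmission_exponential_sum(k_i, w_i, u):
          """Fast band transmission using only len(k_i) exponential terms."""
          return np.sum(w_i * np.exp(-k_i * u))

      # synthetic band with strong line structure standing in for a CO2 band
      rng = np.random.default_rng(1)
      nu = np.linspace(0.0, 1.0, 20000)
      centers = rng.random(50)
      strengths = np.abs(rng.standard_normal(50))
      lines = (strengths[:, None] * np.exp(-((nu - centers[:, None]) / 0.002) ** 2)).sum(axis=0)
      k_nu = 0.05 + lines

      k_i, w_i = k_distribution(k_nu, n_terms=8)
      for u in (0.1, 1.0, 10.0):
          print(u, transmission_line_by_line(k_nu, u), transmission_exponential_sum(k_i, w_i, u))

    With a handful of terms the exponential sum should reproduce the 20,000-point line-by-line average closely, which is the source of the speed-up described above.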

  16. Plant Metabolic Modeling: Achieving New Insight into Metabolism and Metabolic Engineering

    PubMed Central

    Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk

    2014-01-01

    Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. PMID:25344492

  17. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

    In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable sources cause frequent power failures, so the data being processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.

  18. Unified treatment of microscopic boundary conditions and efficient algorithms for estimating tangent operators of the homogenized behavior in the computational homogenization method

    NASA Astrophysics Data System (ADS)

    Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic

    2017-03-01

    This work provides a unified treatment of the arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed efficiently from a linear system with multiple right-hand sides whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often relies on a direct factorization, the macroscopic tangent operators can then be computed with this factorized matrix at reduced computational cost.
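
    The "multiple right-hand sides with a reused factorization" pattern is easy to sketch (Python/SciPy). The dense toy stiffness matrix, the number of macroscopic kinematic variables, and the condensation formula B.T @ X used below are illustrative assumptions rather than the paper's homogenization expressions.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def condensed_tangent(K, B):
          """Factorize the microscopic stiffness K once (in practice this
          factorization already exists from the converged Newton solve) and
          solve for all right-hand-side columns of B in a single call; the
          small condensed operator B.T @ X stands in for the macroscopic
          tangent, whose exact form depends on how the boundary conditions
          are enforced."""
          lu, piv = lu_factor(K)
          X = lu_solve((lu, piv), B)       # one factorization, many right-hand sides
          return B.T @ X

      rng = np.random.default_rng(2)
      n_dof, n_macro = 400, 6              # e.g. six macroscopic strain components
      A = rng.standard_normal((n_dof, n_dof))
      K = A @ A.T + n_dof * np.eye(n_dof)  # symmetric positive definite toy stiffness
      B = rng.standard_normal((n_dof, n_macro))
      print(condensed_tangent(K, B).shape)  # (6, 6)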

  19. 40 CFR 63.683 - Standards: General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... HAP biodegradation efficiency (Rbio) for the biological treatment process is equal to or greater than 95 percent. The HAP biodegradation efficiency (Rbio) shall be determined in accordance with the...

  20. 40 CFR 63.683 - Standards: General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... HAP biodegradation efficiency (Rbio) for the biological treatment process is equal to or greater than 95 percent. The HAP biodegradation efficiency (Rbio) shall be determined in accordance with the...

  1. 40 CFR 63.683 - Standards: General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... HAP biodegradation efficiency (Rbio) for the biological treatment process is equal to or greater than 95 percent. The HAP biodegradation efficiency (Rbio) shall be determined in accordance with the...

  2. 40 CFR 63.683 - Standards: General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... HAP biodegradation efficiency (Rbio) for the biological treatment process is equal to or greater than 95 percent. The HAP biodegradation efficiency (Rbio) shall be determined in accordance with the...

  3. 40 CFR 63.683 - Standards: General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... HAP biodegradation efficiency (Rbio) for the biological treatment process is equal to or greater than 95 percent. The HAP biodegradation efficiency (Rbio) shall be determined in accordance with the...

  4. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
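
    The eigenvalue-based instability criterion can be made concrete with a small sketch (Python/NumPy). It uses brute-force Monte Carlo rather than the fast probability integration and adaptive importance sampling methods of the paper, and the two-degree-of-freedom system, its parameter values, and the assumed distribution of the uncertain coupling term are illustrative assumptions only.

      import numpy as np

      def is_unstable(M, C, K):
          """M q'' + C q' + K q = 0 is unstable if any eigenvalue of the
          first-order state matrix has a positive real part."""
          n = M.shape[0]
          Minv = np.linalg.inv(M)
          A = np.block([[np.zeros((n, n)), np.eye(n)],
                        [-Minv @ K, -Minv @ C]])
          return np.max(np.linalg.eigvals(A).real) > 0.0

      def sample_system(rng):
          """Toy 2-DOF rotor-like system with an uncertain circulatory coupling."""
          M = np.eye(2)
          C = 0.5 * np.eye(2)
          q = rng.normal(1.0, 0.4)                  # uncertain coupling strength
          K = np.array([[4.0, q], [-q, 4.0]])       # skew term can destabilize
          return M, C, K

      rng = np.random.default_rng(3)
      n_samples = 5000
      hits = sum(is_unstable(*sample_system(rng)) for _ in range(n_samples))
      print("estimated probability of instability:", hits / n_samples)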

  5. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the widely used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted at itself whenever the link state changes. Most commercial routers do this by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient: it may consume a considerable amount of CPU time and introduce delay in the network. Some dynamic updating methods that reuse information in the previously computed SPT have been proposed in recent years, but these dynamic algorithms still have many limitations. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for SPT computation. It is rigorously proved that the proposed model is capable of solving optimization problems such as the SPT. A static algorithm based on the M-PCNNs is proposed to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves efficiency. Simulation results demonstrate the effective and efficient performance of the proposed approach.
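
    For reference, the static recomputation that the dynamic and M-PCNN approaches aim to improve upon is Dijkstra's algorithm applied to the whole topology after every link-state change. A minimal sketch follows (Python; the five-router topology and its weights are made up for illustration).

      import heapq

      def shortest_path_tree(graph, root):
          """Static SPT recomputation with Dijkstra's algorithm: the baseline
          that link-state routers repeat after every topology change.
          graph: {node: [(neighbor, weight), ...]} with non-negative weights.
          Returns {node: (distance, parent)} describing the tree rooted at root."""
          tree = {root: (0.0, None)}
          pq = [(0.0, root, None)]
          while pq:
              dist, node, parent = heapq.heappop(pq)
              if node in tree and tree[node][0] < dist:
                  continue                     # stale queue entry
              tree[node] = (dist, parent)
              for nbr, w in graph.get(node, []):
                  nd = dist + w
                  if nbr not in tree or nd < tree[nbr][0]:
                      tree[nbr] = (nd, node)
                      heapq.heappush(pq, (nd, nbr, node))
          return tree

      # toy topology: 5 routers, link weights as advertised link-state metrics
      topology = {
          "A": [("B", 1), ("C", 4)],
          "B": [("A", 1), ("C", 2), ("D", 7)],
          "C": [("A", 4), ("B", 2), ("D", 3)],
          "D": [("B", 7), ("C", 3), ("E", 1)],
          "E": [("D", 1)],
      }
      print(shortest_path_tree(topology, "A"))

    A dynamic update would instead keep this tree and repair only the branches affected by a changed link, which is the inefficiency the dynamic M-PCNN-based algorithm targets.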

  6. Distributing and storing data efficiently by means of special datasets in the ATLAS collaboration

    NASA Astrophysics Data System (ADS)

    Köneke, Karsten; ATLAS Collaboration

    2011-12-01

    With the start of the LHC physics program, the ATLAS experiment started to record vast amounts of data. This data has to be distributed and stored on the world-wide computing grid in a smart way in order to enable an effective and efficient analysis by physicists. This article describes how the ATLAS collaboration chose to create specialized reduced datasets in order to efficiently use computing resources and facilitate physics analyses.

  7. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  8. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source cloud platform intended to: a) help NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; and c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  9. A class of parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms, with topological variation, onto a two-dimensional processor array with nearest-neighbor connections, and, with cardinality variation, onto a linear processor array. An efficient parallel/pipeline algorithm was also developed for the linear array, with significantly higher efficiency.

  10. Inference from Samples of DNA Sequences Using a Two-Locus Model

    PubMed Central

    Griffiths, Robert C.

    2011-01-01

    Performing inference on contemporary samples of DNA sequence data is an important and challenging task. Computationally intensive methods such as importance sampling (IS) are attractive because they make full use of the available data, but in the presence of recombination the large state space of genealogies can be prohibitive. In this article, we make progress by developing an efficient IS proposal distribution for a two-locus model of sequence data. We show that the proposal developed here leads to much greater efficiency, outperforming existing IS methods that could be adapted to this model. Among several possible applications, the algorithm can be used to find maximum likelihood estimates for mutation and crossover rates, and to perform ancestral inference. We illustrate the method on previously reported sequence data covering two loci either side of the well-studied TAP2 recombination hotspot. The two loci are themselves largely non-recombining, so we obtain a gene tree at each locus and are able to infer in detail the effect of the hotspot on their joint ancestry. We summarize this joint ancestry by introducing the gene graph, a summary of the well-known ancestral recombination graph. PMID:21210733

  11. Testing the FLI in the region of the Pallas asteroid family

    NASA Astrophysics Data System (ADS)

    Todorović, N.; Novaković, B.

    2015-08-01

    Computation of the fast Lyapunov indicator (FLI) is one of the most efficient numerical ways to characterize the dynamical nature of motion and to detect phase-space structures in a large variety of dynamical models. Despite its effectiveness, the FLI has mainly been used for symplectic maps or simple Hamiltonians and has never been applied to asteroid dynamics to any great extent. This research shows that the FLI can also be successfully applied to real (Solar system) dynamics. For this purpose, we focus on the main-belt region where the Pallas asteroid family is located. By using the full Solar system model, different sets of initial conditions and different integration times, we managed to visualize not only a large multiplet of resonances located in the region, but also their structures, chaotic boundaries, the stability islands therein, and the positions of their mutual interactions. In the end, we identified some of the most dominant resonances present in the region and established a link between these resonances and the chaotic areas visible in our maps. The FLI has once again shown its efficiency in detecting dynamical structures in the main belt, e.g. in the Pallas asteroid family, with surprisingly good clarity.
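
    Since the abstract notes that the FLI has mostly been applied to symplectic maps, a minimal sketch for the Chirikov standard map shows how the indicator is computed in practice (Python/NumPy; the map, the value of K, the initial tangent vector, and the scan of initial conditions are illustrative assumptions, not the full Solar system integration used for the Pallas region).

      import numpy as np

      def fli_standard_map(theta0, p0, K=0.9, n_iter=1000):
          """Fast Lyapunov Indicator for the Chirikov standard map
              p' = p + K sin(theta),  theta' = theta + p'   (mod 2 pi).
          The tangent vector v is propagated with the Jacobian of the map and
          FLI = max over iterations of log ||v||: large values flag chaos,
          small values regular motion."""
          theta, p = theta0, p0
          v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # arbitrary initial tangent vector
          fli = 0.0
          for _ in range(n_iter):
              jac = np.array([[1.0, K * np.cos(theta)],
                              [1.0, 1.0 + K * np.cos(theta)]])   # d(p',theta')/d(p,theta)
              v = jac @ v
              p = p + K * np.sin(theta)
              theta = (theta + p) % (2.0 * np.pi)
              fli = max(fli, np.log(np.linalg.norm(v)))
          return fli

      # scan a line of initial conditions: chaotic zones show up as large FLI values
      for p0 in np.linspace(0.0, 2.0, 9):
          print(f"p0 = {p0:4.2f}   FLI = {fli_standard_map(2.0, p0):7.2f}")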

  12. Algal culture studies for CELSS

    NASA Technical Reports Server (NTRS)

    Radmer, R.; Behrens, P.; Arnett, K.; Gladue, R.; Cox, J.; Lieberman, D.

    1987-01-01

    Microalgae are well-suited as a component of a Closed Environmental Life Support System (CELSS), since they can couple the closely related functions of food production and atmospheric regeneration. The objective was to provide a basis for predicting the response of CELSS algal cultures, and thus the food supply and air regeneration system, to changes in the culture parameters. Scenedesmus growth was measured as a function of light intensity, and the spectral dependence of light absorption by the algae as well as algal respiration in the light were determined as a function of cell concentration. These results were used to test and confirm a mathematical model that describes the productivity of an algal culture in terms of the competing processes of photosynthesis and respiration. The relationship of algal productivity to cell concentration was determined at different carbon dioxide concentrations, temperatures, and light intensities. The maximum productivity achieved by an air-grown culture was found to be within 10% of the computed maximum productivity, indicating that CO2 was very efficiently removed from the gas stream by the algal culture. Measurements of biomass productivity as a function of cell concentration at different light intensities indicated that both the productivity and efficiency of light utilization were greater at higher light intensities.

  13. Experimental investigation of low aspect ratio, large amplitude, aeroelastic energy harvesting systems

    NASA Astrophysics Data System (ADS)

    Kirschmeier, Benjamin; Summerour, Jacob; Bryant, Matthew

    2017-04-01

    Interest in clean, stable, and renewable energy harvesting devices has increased dramatically with the volatility of petroleum markets. Specifically, research in aero/hydrokinetic devices has produced numerous new horizontal- and vertical-axis wind turbines and oscillating wing turbines. Oscillating wing turbines (OWTs) differ from their wind turbine cousins by sweeping a rectangular area rather than a circular one. OWT systems also have a lower tip speed, which reduces the overall noise produced by the system. OWTs have undergone significant computational analysis to uncover the underlying flow physics that can drive the system to high efficiencies for single-wing oscillations. When two of these devices are placed in tandem configuration, i.e. one downstream of the other, they can interact either constructively or destructively. Constructive interactions enhance the system efficiency beyond that of two devices operating on their own. A new experimental design investigates the dependency of the interaction modes on the pitch stiffness of the downstream wing. The experimental results demonstrate that the interaction modes are functions of the convective time scale and the downstream wing pitch stiffness. Heterogeneous combinations of pitch stiffness exhibited both constructive and destructive lock-in phenomena, whereas the homogeneous combination exhibited only destructive interactions.

  14. Inflection point caustic problems and solutions for high-gain dual-shaped reflectors

    NASA Technical Reports Server (NTRS)

    Galindo-Israel, Victor; Veruttipong, Thavath; Imbriale, William; Rengarajan, Sembiam

    1990-01-01

    The singular nature of the uniform geometrical theory of diffraction (UTD) subreflector scattered field at the vicinity of the main reflector edge (for a high-gain antenna design) is investigated. It is shown that the singularity in the UTD edge-diffracted and slope-diffracted fields is due to the reflection distance parameter approaching infinity in the transition functions. While the geometrical optics (GO) and UTD edge-diffracted fields exhibit singularities of the same order, the edge slope-diffracted field singularity is more significant and is substantial for greater subreflector edge tapers. The diffraction analysis of such a subreflector in the vicinity of the main reflector edge has been carried out efficiently and accurately by a stationary phase evaluation of the phi-integral, whereas the theta-integral is carried out numerically. Computational results from UTD and physical optics (PO) analysis of a 34-m ground station dual-shaped reflector confirm the analytical formulations for both circularly symmetric and offset asymmetric subreflectors. It is concluded that the proposed PO(theta)GO(phi) technique can be used to study the spillover or noise temperature characteristics of a high-gain reflector antenna efficiently and accurately.

  15. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  16. Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography.

    PubMed

    Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C

    1993-05-10

    An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to their tremendous pattern-data file sizes. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
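
    The run-length stage of such a storage format is simple to illustrate (Python; the toy fringe-like row is an assumption, and the Lempel-Ziv-Welch stage applied on top of it in the paper is omitted).

      def rle_encode(data: bytes) -> list:
          """Run-length encode a byte string as (value, run_length) pairs.
          Binary hologram pattern data contains long runs of identical exposure
          values, which is why run-length encoding compresses it so well."""
          runs = []
          i = 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i]:
                  j += 1
              runs.append((data[i], j - i))
              i = j
          return runs

      def rle_decode(runs: list) -> bytes:
          """Invert rle_encode."""
          return b"".join(bytes([value]) * length for value, length in runs)

      # toy 'pattern data': a binary fringe-like row with long constant runs
      row = bytes([0] * 500 + [1] * 20 + [0] * 300 + [1] * 20 + [0] * 160)
      runs = rle_encode(row)
      assert rle_decode(runs) == row
      print(f"{len(row)} bytes -> {len(runs)} runs")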

  17. Key factors of eddy current separation for recovering aluminum from crushed e-waste.

    PubMed

    Ruan, Jujun; Dong, Lipeng; Zheng, Jie; Zhang, Tao; Huang, Mingzhi; Xu, Zhenming

    2017-02-01

    Recovery of e-waste in China has caused serious pollution. Eddy current separation is an environment-friendly technology for separating nonferrous metallic particles from crushed e-waste. However, due to complex particle characteristics, the separation efficiency of traditional eddy current separators was low. In production, the controllable operating factors of eddy current separation are feeding speed, (ωR-v), and Sp. There is little specific information about the influencing mechanisms and critical parameters of these factors in eddy current separation. This paper provides such information for the key factors in eddy current separation when recovering aluminum particles from crushed waste refrigerator cabinets. Detachment angles increased with increasing (ωR-v). Separation efficiency increased with growing detachment angles. Aluminum particles were completely separated from plastic particles at the critical parameters of feeding speed 0.5 m/s and detachment angles greater than 6.61°. Sp/Sm of the aluminum particles in crushed waste refrigerators ranged from 0.08 to 0.51. Separation efficiency increased with increasing Sp/Sm. This suggests developing a new separator to handle the smaller nonferrous metallic particles encountered in e-waste recovery. High feeding speed degraded separation efficiency, whereas greater Sp of aluminum particles had a positive impact: greater Sp could raise the critical feeding speed and thus offer greater throughput in eddy current separation. This paper will guide eddy current separation in the production-scale recovery of nonferrous metals from crushed e-waste. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include high-performance computing systems and operations.

  19. Computer use, language, and literacy in safety net clinic communication

    PubMed Central

    Barton, Jennifer L; Lyles, Courtney R; Wu, Michael; Yelin, Edward H; Martinez, Diana; Schillinger, Dean

    2017-01-01

    Objective: Patients with limited health literacy (LHL) and limited English proficiency (LEP) experience suboptimal communication and health outcomes. Electronic health record implementation in safety net clinics may affect communication with LHL and LEP patients. We investigated the associations between safety net clinician computer use and patient-provider communication for patients with LEP and LHL. Materials and Methods: We video-recorded encounters at 5 academically affiliated US public hospital clinics between English- and Spanish-speaking patients with chronic conditions and their primary and specialty care clinicians. We analyzed changes in communication behaviors (coded with the Roter Interaction Analysis System) with each additional point on a clinician computer use score, controlling for clinician type and visit length and stratified by English proficiency and health literacy status. Results: Greater clinician computer use was associated with more biomedical statements (+12.4, P = .03) and less positive affect (−0.6, P < .01) from LEP/LHL patients. In visits with patients with adequate English proficiency/health literacy, greater clinician computer use was associated with less positive patient affect (−0.9, P < .01), fewer clinician psychosocial statements (−3.5, P < .05), greater clinician verbal dominance (+0.09, P < .01), and lower ratings on quality of care and communication. Conclusion: Higher clinician computer use was associated with more biomedical focus with LEP/LHL patients, and clinician verbal dominance and lower ratings with patients with adequate English proficiency and health literacy. Discussion: Implementation research should explore interventions to enhance relationship-centered communication for diverse patient populations in the computer era. PMID:27274017

  20. Computer use, language, and literacy in safety net clinic communication.

    PubMed

    Ratanawongsa, Neda; Barton, Jennifer L; Lyles, Courtney R; Wu, Michael; Yelin, Edward H; Martinez, Diana; Schillinger, Dean

    2017-01-01

    Patients with limited health literacy (LHL) and limited English proficiency (LEP) experience suboptimal communication and health outcomes. Electronic health record implementation in safety net clinics may affect communication with LHL and LEP patients. We investigated the associations between safety net clinician computer use and patient-provider communication for patients with LEP and LHL. We video-recorded encounters at 5 academically affiliated US public hospital clinics between English- and Spanish-speaking patients with chronic conditions and their primary and specialty care clinicians. We analyzed changes in communication behaviors (coded with the Roter Interaction Analysis System) with each additional point on a clinician computer use score, controlling for clinician type and visit length and stratified by English proficiency and health literacy status. Greater clinician computer use was associated with more biomedical statements (+12.4, P = .03) and less positive affect (-0.6, P < .01) from LEP/LHL patients. In visits with patients with adequate English proficiency/health literacy, greater clinician computer use was associated with less positive patient affect (-0.9, P < .01), fewer clinician psychosocial statements (-3.5, P < .05), greater clinician verbal dominance (+0.09, P < .01), and lower ratings on quality of care and communication. Higher clinician computer use was associated with more biomedical focus with LEP/LHL patients, and clinician verbal dominance and lower ratings with patients with adequate English proficiency and health literacy. Implementation research should explore interventions to enhance relationship-centered communication for diverse patient populations in the computer era. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  1. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low-latency communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of the fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943

  2. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low-latency communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of the fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  3. Reappraisal of solid selective emitters

    NASA Technical Reports Server (NTRS)

    Chubb, Donald L.

    1990-01-01

    New rare earth oxide emitters show greater efficiency than previous emitters. Motivated by this, the efficiency of these emitters was calculated using a simple model. Results indicate that the emission band of the selective emitter must be at relatively low energy (less than or equal to 0.52 eV) to obtain maximum efficiency at moderate emitter temperatures (less than or equal to 1500 K). Thus, low-bandgap-energy PV materials are required to obtain an efficient thermophotovoltaic (TPV) system. Of the four rare earths studied (Nd, Ho, Er, Yb), Ho has the largest efficiency at moderate temperatures (72 percent at 1500 K). A comparison was made between a selective-emitter TPV system and a TPV system that uses a thermal emitter plus a band-pass filter to make the thermal emitter behave like a selective emitter. The comparison indicates that only for very optimistic filter and thermal emitter properties will the filter TPV system have greater efficiency than the selective emitter system.

  4. Synchronized Pair Configuration in Virtualization-Based Lab for Learning Computer Networks

    ERIC Educational Resources Information Center

    Kongcharoen, Chaknarin; Hwang, Wu-Yuin; Ghinea, Gheorghita

    2017-01-01

    More studies are concentrating on using virtualization-based labs to facilitate the learning of computer and network concepts. Some benefits are lower hardware costs and greater flexibility in reconfiguring computer and network environments. However, few studies have investigated effective mechanisms for using virtualization fully for collaboration.…

  5. Computer Augmented Learning; A Survey.

    ERIC Educational Resources Information Center

    Kindred, J.

    The report contains a description and summary of computer augmented learning devices and systems. The devices are of two general types: programmed instruction systems based on the teaching machines pioneered by Pressey and developed by Skinner, and the so-called "docile" systems that permit greater user-direction with the computer under student…

  6. A fast indirect method to compute functions of genomic relationships concerning genotyped and ungenotyped individuals, for diversity management.

    PubMed

    Colleau, Jean-Jacques; Palhière, Isabelle; Rodríguez-Ramilo, Silvia T; Legarra, Andres

    2017-12-01

    Pedigree-based management of genetic diversity in populations, e.g., using optimal contributions, involves computations of the Ax type, yielding elements (relationships) or functions (usually averages) of relationship matrices. For the pedigree-based relationship matrix A, a very efficient method exists. When all the individuals of interest are genotyped, genomic management can be addressed using the genomic relationship matrix G; however, to date, the computational problem of efficiently computing products with G has not been well studied. When some individuals of interest are not genotyped, genomic management should consider the relationship matrix H that combines genotyped and ungenotyped individuals; however, direct computation of H is computationally very demanding, because construction of a possibly huge matrix is required. Our work presents efficient ways of computing products with G and with H, with applications on real data from dairy sheep and dairy goat breeding schemes. For genomic relationships, an efficient indirect computation with quadratic instead of cubic cost is Gx = Z(Z'x), where Z is a matrix relating animals to genotypes. For the relationship matrix H, we propose an indirect method based on the difference between the vectors Hx and Ax, which involves computing Ax and products of the blocks of H linking genotyped and ungenotyped individuals with a working vector derived from x. The latter computation is the most demanding but can be done using sparse Cholesky decompositions, which allows handling very large genomic and pedigree data files. Studies based on simulations reported in the literature show that the trends of average relationships in A and in G differ as genomic selection proceeds. When selection is based on genomic relationships but management is based on pedigree data, the true genetic diversity is overestimated. However, our tests on real data from sheep and goats, obtained before genomic selection started, do not show this. We present efficient methods to compute elements and statistics of the genomic relationship matrix G and of the matrix H that combines ungenotyped and genotyped individuals. These methods should be useful for monitoring and managing genomic diversity.
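
    The indirect, matrix-free evaluation of functions of G can be sketched as follows (Python/NumPy). The VanRaden-style centering and scaling of the genotypes, the marker counts, and the chosen group of animals are illustrative assumptions, and the H-matrix part involving ungenotyped animals is not shown.

      import numpy as np

      def average_genomic_relationship(Z, idx, scale):
          """Average element of G = Z Z' / scale over a group of animals `idx`,
          computed indirectly as x' G x = (Z' x)' (Z' x) / scale without ever
          forming the (potentially huge) animal-by-animal matrix G.
          Z: centered genotype matrix (animals x markers)."""
          x = np.zeros(Z.shape[0])
          x[idx] = 1.0 / len(idx)            # averaging vector over the group
          t = Z.T @ x                        # one pass over the genotypes
          return float(t @ t) / scale

      # toy data: 1,000 animals, 5,000 SNP markers
      rng = np.random.default_rng(4)
      p = rng.uniform(0.05, 0.5, size=5000)                # allele frequencies
      M = rng.binomial(2, p, size=(1000, 5000)).astype(float)
      Z = M - 2.0 * p                                      # centered genotypes
      scale = 2.0 * np.sum(p * (1.0 - p))

      group = np.arange(0, 200)                            # first 200 animals
      print(average_genomic_relationship(Z, group, scale))

      # check against the direct construction of G on this small example
      G = Z @ Z.T / scale
      print(G[np.ix_(group, group)].mean())

    Averaging x'Gx this way costs one pass over the genotype matrix per group instead of first building the full animal-by-animal matrix G.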

  7. The Canadian Hydrological Model (CHM): A multi-scale, variable-complexity hydrological model for cold regions

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2016-12-01

    There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multiple objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold-regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions at low spatial variability and fine resolutions as required. Model uncertainty is reduced by lessening the necessary computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.

  8. The Impact of Time Delay on the Content of Discussions at a Computer-Mediated Conference

    NASA Astrophysics Data System (ADS)

    Huntley, Byron C.; Thatcher, Andrew

    2008-11-01

    This study investigates the relationship between the content of computer-mediated discussions and the time delay between online postings. The study aims to broaden understanding of the dynamics of computer-mediated discussion with regard to time delay and the actual content of the discussions (knowledge construction, social aspects, number of words, and number of postings), an area that has barely been researched. The computer-mediated discussions of the CybErg 2005 virtual conference served as the sample for this study. The Interaction Analysis Model [1] was utilized to analyze the level of knowledge construction in the content of the computer-mediated discussions. Correlations were computed for all combinations of the variables. The results demonstrate that knowledge construction, social aspects, and the number of words within postings were independent of, and not affected by, the time delay between a posting and the posting from which the reply was formulated. Greater numbers of words within postings were typically associated with a greater level of knowledge construction. Social aspects in the discussion were found to neither advantage nor disadvantage the overall effectiveness of the computer-mediated discussion.

  9. An efficient venturi scrubber system to remove submicron particles in exhaust gas.

    PubMed

    Tsai, Chuen-Jinn; Lin, Chia-Hung; Wang, Yu-Min; Hunag, Cheng-Hsiung; Li, Shou-Nan; Wu, Zong-Xue; Wang, Feng-Cai

    2005-03-01

    An efficient venturi scrubber system making use of heterogeneous nucleation and condensational growth of particles was designed and tested to remove fine particles from the exhaust of a local scrubber in which residual SiH4 gas was abated and large quantities of fine SiO2 particles were generated. In front of the venturi scrubber, normal-temperature fine water mist mixes with the high-temperature exhaust gas to cool it to the saturation temperature, allowing submicron particles to grow to micron sizes. The grown particles are then scrubbed efficiently in the venturi scrubber. Test results show that the present venturi scrubber system is effective for removing submicron particles. For SiO2 particles greater than 0.1 μm, the removal efficiency is greater than 80-90%, depending on particle concentration. The corresponding pressure drop is relatively low: for example, the pressure drop of the venturi scrubber is approximately 15.4 ± 2.4 cm H2O when the liquid-to-gas ratio is 1.50 L/m3. A theoretical calculation was conducted to simulate the particle growth process and the removal efficiency of the venturi scrubber. The theoretical results agree reasonably well with the experimental data when the SiO2 particle diameter is greater than 0.1 μm.

  10. Excitation of Earth-ionosphere waveguide in the ELF and lower VLF bands by modulated ionospheric current

    NASA Astrophysics Data System (ADS)

    Field, E. C.; Bloom, R. M.

    1993-05-01

    In this report, the principle of reciprocity is used in conjunction with a full-wave propagation code to calculate ground-level fields excited by ionospheric currents modulated at frequencies between 50 and 100 Hz with HF heaters. Results show the dependence on source orientation, altitude, and dimension and therefore pertain to experiments using the HIPAS or HAARP ionospheric heaters. In the end-fire mode, the waveguide excitation efficiency of an ELF HED in the ionosphere is up to 20 dB greater than for a ground-based antenna, provided its altitude does not exceed 80 to 90 km. The highest efficiency occurs for a source altitude of around 70 km; if that altitude is raised to 100 km, the efficiency drops by about 20 dB in the day and 10 dB at night. That efficiency does not account for the greater conductivity modulation that might be achieved at altitudes greater than 70 km, however. The trade-off between the altitude dependencies of the excitation efficiency and the maximum achievable modulation depends on the ERP of the HF heater, with the optimum altitude increasing with increasing ERP. For HIPAS the best modulation altitude is around 70 km, whereas for HAARP there might be marginal value in modulating at altitudes as high as 100 km.

  11. A method to efficiently apply a biogeochemical model to a landscape.

    Treesearch

    Robert E. Kennedy; David P. Turner; Warren B. Cohen; Michael Guzy

    2006-01-01

    Biogeochemical models offer an important means of understanding carbon dynamics, but the computational complexity of many models means that modeling all grid cells on a large landscape is computationally burdensome. Because most biogeochemical models ignore adjacency effects between cells, however, a more efficient approach is possible. Recognizing that spatial...

  12. Solving wood chip transport problems with computer simulation.

    Treesearch

    Dennis P. Bradley; Sharon A. Winsauer

    1976-01-01

    Efficient chip transport operations are difficult to achieve due to frequent and often unpredictable changes in distance to market, chipping rate, time spent at the mill, and equipment costs. This paper describes a computer simulation model that allows a logger to design an efficient transport system in response to these changing factors.

  13. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse of dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
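
    The coupling of a kernel-PCA feature space with posterior sampling can be sketched with scikit-learn (Python). The sketch replaces the paper's gradient-based Langevin MCMC with a plain random-walk Metropolis chain to stay short, and the one-dimensional toy random field, its covariance, the synthetic observations, and the Gaussian feature-space prior are all illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(5)

      # prior ensemble of 1-D 'random field' realizations (200 samples, 100 cells)
      n_samples, n_cells = 200, 100
      x = np.linspace(0.0, 1.0, n_cells)
      cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance
      L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))
      prior_fields = (L @ rng.standard_normal((n_cells, n_samples))).T

      # KPCA: nonlinear projection to a handful of features, with an inverse map back
      kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.01,
                       fit_inverse_transform=True)
      features = kpca.fit_transform(prior_fields)

      def log_post(z):
          """Toy posterior in feature space: Gaussian prior on the features plus a
          misfit of the reconstructed field against synthetic point observations."""
          field = kpca.inverse_transform(z.reshape(1, -1))[0]
          obs_idx = [10, 50, 90]
          obs = np.array([0.5, -0.3, 0.8])                   # synthetic data
          misfit = np.sum((field[obs_idx] - obs) ** 2) / (2 * 0.05 ** 2)
          return -0.5 * np.sum(z ** 2) - misfit

      # random-walk Metropolis in the 5-dimensional feature space
      z = features[0].copy()
      lp = log_post(z)
      accepted = 0
      for it in range(2000):
          z_new = z + 0.1 * rng.standard_normal(z.size)
          lp_new = log_post(z_new)
          if np.log(rng.random()) < lp_new - lp:
              z, lp = z_new, lp_new
              accepted += 1
      print("acceptance rate:", accepted / 2000)

    The design point illustrated here is that the chain explores only five feature-space coordinates, while each proposal is mapped back to a full field for the likelihood evaluation.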

  14. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  15. Terascale direct numerical simulations of turbulent combustion using S3D

    NASA Astrophysics Data System (ADS)

    Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.

    2009-01-01

    Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.

  16. Mathematical computer programs: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Computer programs, routines, and subroutines for aiding engineers, scientists, and mathematicians in direct problem solving are presented. Also included is a group of items that affords the same users greater flexibility in the use of software.

  17. TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.

    PubMed

    Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan

    2017-01-01

    Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully explore the capability for data transmission in computers. We applied TDat in large-volume data rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
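
    A minimal sketch of the cuboid-block idea described above, assuming a raw uint16 volume on disk; the file name, volume shape, block size and per-block statistic are hypothetical placeholders rather than TDat's actual format or API.

    ```python
    # Sketch: process a large volumetric image by cuboid blocks in parallel.
    # File name, volume shape and block size are hypothetical placeholders.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    SHAPE = (1024, 1024, 1024)          # (z, y, x) voxels, hypothetical
    BLOCK = (128, 128, 128)             # cuboid block size

    def block_origins(shape, block):
        ranges = [range(0, s, b) for s, b in zip(shape, block)]
        return list(product(*ranges))

    def process_block(origin):
        # Each worker memory-maps the volume and touches only its own cuboid.
        vol = np.memmap("whole_brain_uint16.raw", dtype=np.uint16, mode="r", shape=SHAPE)
        z, y, x = origin
        cube = vol[z:z + BLOCK[0], y:y + BLOCK[1], x:x + BLOCK[2]]
        return origin, float(cube.mean())   # placeholder per-block statistic

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            for origin, mean_val in pool.map(process_block, block_origins(SHAPE, BLOCK)):
                print(origin, mean_val)
    ```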

  18. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes are assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes are also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data, however, the NSS code was found to be significantly more efficient than the F3D code.

  19. Efficient quantum walk on a quantum processor

    PubMed Central

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
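
    Classically, the structure this record exploits is easy to see: a circulant adjacency matrix is diagonalised by the discrete Fourier transform, so the walk amplitudes exp(-iAt) applied to an initial state can be computed with FFTs. The sketch below simulates such a walk classically (the paper's contribution is the corresponding quantum circuit); the example graph (a cycle) and evolution time are arbitrary choices.

    ```python
    # Sketch: continuous-time quantum walk on a circulant graph, using the fact that
    # circulant matrices are diagonalised by the discrete Fourier transform.
    import numpy as np

    def ctqw_circulant(symbol, psi0, t):
        """Return exp(-i A t) @ psi0 for the circulant A with first column `symbol`
        (equal to the first row for undirected circulant graphs)."""
        lam = np.fft.fft(symbol)                # eigenvalues of the circulant matrix
        return np.fft.ifft(np.exp(-1j * lam * t) * np.fft.fft(psi0))

    N = 8
    symbol = np.zeros(N); symbol[1] = symbol[-1] = 1.0            # cycle graph C_8
    psi0 = np.zeros(N, dtype=complex); psi0[0] = 1.0              # walker starts at vertex 0

    psi_t = ctqw_circulant(symbol, psi0, t=1.0)
    probs = np.abs(psi_t) ** 2
    print(probs, probs.sum())                                     # sums to 1 (unitary evolution)
    ```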

  20. Efficient Computation of Difference Vibrational Spectra in Isothermal-Isobaric Ensemble.

    PubMed

    Joutsuka, Tatsuya; Morita, Akihiro

    2016-11-03

    Difference spectroscopy between two closely related systems is widely used to augment selectivity to particular parts of the observed system, though computing tiny difference spectra by molecular dynamics via subtraction of two full spectra would be extraordinarily demanding. Therefore, we have proposed an efficient computational algorithm for difference spectra that does not resort to subtraction. The present paper reports our extension of the theoretical method to the isothermal-isobaric (NPT) ensemble. The present theory extends our analysis to include the pressure dependence of the spectra. We verified that the present theory yields accurate difference spectra in the NPT condition as well, with remarkable computational efficiency over straightforward subtraction by several orders of magnitude. This method is further applied to vibrational spectra of liquid water under varying pressure and succeeded in reproducing tiny difference spectra induced by pressure change. The anomalous pressure dependence is elucidated in relation to other properties of liquid water.

  1. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples, and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  2. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
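
    A minimal sketch of the underlying representation, assuming synthetic Gaussian PSFs whose width grows with depth in place of real depth-variant PSFs: stack the PSFs, run PCA via the SVD, and reconstruct each PSF from a small number of components.

    ```python
    # Sketch: represent a family of depth-variant PSFs with a few principal components.
    # Synthetic Gaussian PSFs (width growing with depth) stand in for measured PSFs.
    import numpy as np

    n_depth, size = 40, 33
    yy, xx = np.mgrid[-size // 2 + 1: size // 2 + 1, -size // 2 + 1: size // 2 + 1]
    sigmas = np.linspace(1.5, 4.0, n_depth)                    # depth-dependent blur (assumption)
    psfs = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in sigmas])
    psfs /= psfs.sum(axis=(1, 2), keepdims=True)               # normalise each PSF
    flat = psfs.reshape(n_depth, -1)

    mean = flat.mean(axis=0)
    U, S, Vt = np.linalg.svd(flat - mean, full_matrices=False)  # PCA via the SVD
    k = 3                                                       # small number of components
    coeffs = U[:, :k] * S[:k]                                   # per-depth coefficients
    approx = (coeffs @ Vt[:k] + mean).reshape(psfs.shape)

    err = np.linalg.norm(approx - psfs) / np.linalg.norm(psfs)
    print(f"relative reconstruction error with {k} components: {err:.2e}")
    ```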

  3. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  4. Efficient quantum circuits for one-way quantum computing.

    PubMed

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
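
    The claim that the iSWAP gate comes naturally from the XY model can be checked numerically: evolving under the XX + YY interaction for the right duration yields exactly the iSWAP unitary. The sketch below verifies that identity; it does not reproduce the paper's cluster-state generation sequence.

    ```python
    # Sketch: the iSWAP gate arises from free evolution under an XY (flip-flop) interaction.
    # Numerically verify that exp(i*pi/4*(X⊗X + Y⊗Y)) equals the iSWAP matrix.
    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    XX_YY = np.kron(X, X) + np.kron(Y, Y)

    U = expm(1j * np.pi / 4 * XX_YY)

    iSWAP = np.array([[1, 0,  0, 0],
                      [0, 0, 1j, 0],
                      [0, 1j, 0, 0],
                      [0, 0,  0, 1]], dtype=complex)

    print(np.allclose(U, iSWAP))   # True
    ```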

  5. Printed Multi-Turn Loop Antenna for RF Bio-Telemetry

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Hall, David G.; Miranda, Felix A.

    2004-01-01

    In this paper, a novel printed multi-turn loop antenna for contact-less powering and RF telemetry from implantable bio-MEMS sensors at a design frequency of 300 MHz is demonstrated. In addition, computed values of input reactance, radiation resistance, skin effect resistance, and radiation efficiency for the printed multi-turn loop antenna are presented. The computed input reactance is compared with the measured values and shown to be in fair agreement. The computed radiation efficiency at the design frequency is about 24 percent.
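
    For orientation, the quantities listed in this record can be estimated from textbook small-loop formulas, as sketched below. The turn count, loop radius and conductor radius are assumed values, and these simple formulas (which ignore proximity effects) will not reproduce the 24 percent figure obtained from the authors' geometry and full-wave model.

    ```python
    # Sketch: textbook small-loop estimates (not the paper's computation) for the radiation
    # resistance, conductor loss and radiation efficiency of an N-turn loop antenna.
    import math

    f      = 300e6                # design frequency (Hz), per the abstract
    N      = 4                    # number of turns (assumption)
    a      = 5e-3                 # loop radius (m), assumption
    b      = 0.25e-3              # equivalent conductor radius (m), assumption
    sigma  = 5.8e7                # copper conductivity (S/m)
    mu0    = 4e-7 * math.pi
    c0     = 2.998e8

    lam   = c0 / f
    C     = 2 * math.pi * a                                  # loop circumference
    Rr    = 20 * math.pi**2 * N**2 * (C / lam) ** 4          # radiation resistance (small loop)
    Rs    = math.sqrt(math.pi * f * mu0 / sigma)             # surface resistance (skin effect)
    Rloss = N * C / (2 * math.pi * b) * Rs                   # ohmic loss, proximity effect ignored

    eta = Rr / (Rr + Rloss)
    print(f"Rr = {Rr*1e3:.3f} mOhm, Rloss = {Rloss*1e3:.1f} mOhm, efficiency = {100*eta:.1f} %")
    ```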

  6. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  7. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have progressed toward efficient use of memory and processing power. Both factors can be exploited by feasible image-storage techniques that compute only the minimum information needed to describe an image, which also speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be used to represent and retrieve an image. In computer vision, SIFT can recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the number of keypoints by matching their orientations and aggregating them across different image windows [1]. Another key property of this approach is that it works more efficiently on high-contrast images, because its design is based on collecting keypoints from regions of contrast.
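
    A brief sketch of the SIFT workflow this record describes, using OpenCV; the image file names are placeholders, and opencv-python 4.4 or later is assumed (where SIFT is available in the main module).

    ```python
    # Sketch: SIFT keypoint extraction and descriptor matching with OpenCV.
    # Image paths are placeholders.
    import cv2

    img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # placeholder files
    img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep only distinctive matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
    print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} good matches")
    ```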

  8. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is, however, a computationally intensive technique, as it involves summations over occupied and empty electronic states to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.

  9. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and these resource limitations vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  10. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the integration method in use operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
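
    A deliberately simplified sketch of the h-adaptive part of this idea only: a fixed-order implicit BDF1 (backward Euler) step, a local error estimate by step doubling, and a standard error controller. The authors' scheme additionally adapts the order (BDF1-5) via a stability test, which is not reproduced here; the test equation, tolerance and controller constants are arbitrary choices.

    ```python
    # Sketch: step-size (h-) adaptivity for an implicit BDF1 (backward Euler) integrator,
    # with the local error estimated by step doubling and a standard error controller.
    import math

    lam, y0, t_end, tol = -50.0, 1.0, 1.0, 1e-5   # test problem y' = lam*y and tolerance

    def bdf1_step(y, h):
        # Backward Euler for y' = lam*y solved exactly (a Newton solve in the general case).
        return y / (1.0 - h * lam)

    t, y, h, steps = 0.0, y0, 1e-3, 0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = bdf1_step(y, h)                          # one step of size h
        y_small = bdf1_step(bdf1_step(y, h / 2), h / 2)  # two steps of size h/2
        err = abs(y_big - y_small)                       # local error estimate
        if err <= tol:                                   # accept the step
            t, y = t + h, y_small
            steps += 1
        # Controller: grow/shrink h with exponent 1/(p+1), p = 1 for BDF1, safety factor 0.9.
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))

    print(f"{steps} accepted steps, y({t_end:.1f}) = {y:.3e}, exact = {math.exp(lam*t_end):.3e}")
    ```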

  11. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only information about the nodes and the boundary of the region of interest is required). However, because of the inefficient computation and storage of some large sparse matrices, such as the mass and stiffness matrices, the EFM is difficult to apply to seismic modelling and RTM for large velocity models. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and simplify the operations by solving the linear equations with the CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
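
    A small sketch of the compressed sparse row (CSR) storage idea mentioned above, with a 1-D Laplacian standing in for an assembled stiffness matrix; it shows the memory saving relative to dense storage and a sparse solve on the compressed matrix (SciPy here, rather than the GPU/CULA pipeline in the paper).

    ```python
    # Sketch: compress a sparse "stiffness-like" matrix to CSR and solve with it.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n = 2000
    # 1-D Laplacian stands in for an assembled stiffness matrix (assumption).
    K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    f = np.ones(n)

    csr_bytes = K.data.nbytes + K.indices.nbytes + K.indptr.nbytes
    print(f"dense storage: {n*n*8/1e6:.1f} MB, CSR storage: {csr_bytes/1e6:.2f} MB")

    u = spsolve(K, f)                 # direct sparse solve; an iterative solver would also work
    print(u[:3])
    ```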

  12. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and parallelization approaches used in the modules. Our approach includes the analysis of the module's functionality, identification of source code segments suitable for parallelization and proper application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using the airborne laser scanning data representing land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison to the output from original modules. The presented parallelization approach showed the simplicity and efficiency of the parallelization of open-source GRASS GIS modules using OpenMP, leading to an increased performance of this geospatial software on standard multi-core computers.
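
    The GRASS modules themselves are parallelised in C with OpenMP; the sketch below illustrates the same row-block domain decomposition in Python with a process pool, using a placeholder per-cell computation (a slope/gradient magnitude) on a synthetic DEM.

    ```python
    # Sketch: split a raster into overlapping row blocks and process the blocks on separate
    # cores. Python stand-in for the OpenMP work-sharing used in the GRASS modules.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def slope_block(block):
        # Placeholder per-cell computation on one row block of a DEM (gradient magnitude).
        gy, gx = np.gradient(block)
        return np.hypot(gx, gy)

    def process_raster(dem, n_workers=4):
        # Overlap blocks by one row so np.gradient sees its neighbours at block edges.
        bounds = np.linspace(0, dem.shape[0], n_workers + 1, dtype=int)
        pairs = list(zip(bounds[:-1], bounds[1:]))
        blocks = [dem[max(a - 1, 0): min(b + 1, dem.shape[0])] for a, b in pairs]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(slope_block, blocks))
        # Trim the overlap rows back off before stitching the blocks together.
        trimmed = [r[(1 if a > 0 else 0): r.shape[0] - (1 if b < dem.shape[0] else 0)]
                   for r, (a, b) in zip(results, pairs)]
        return np.vstack(trimmed)

    if __name__ == "__main__":
        dem = np.random.default_rng(1).random((4000, 4000)).astype(np.float32)
        print(process_raster(dem).shape)
    ```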

  13. Convergence Acceleration of a Navier-Stokes Solver for Efficient Static Aeroelastic Computations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Guruswamy, Guru P.

    1995-01-01

    New capabilities have been developed for a Navier-Stokes solver to perform steady-state simulations more efficiently. The flow solver for solving the Navier-Stokes equations is based on a combination of the lower-upper factored symmetric Gauss-Seidel implicit method and the modified Harten-Lax-van Leer-Einfeldt upwind scheme. A numerically stable and efficient pseudo-time-marching method is also developed for computing steady flows over flexible wings. Results are demonstrated for transonic flows over rigid and flexible wings.

  14. High-efficiency AlGaAs-GaAs Cassegrainian concentrator cells

    NASA Technical Reports Server (NTRS)

    Werthen, J. G.; Hamaker, H. C.; Virshup, G. F.; Lewis, C. R.; Ford, C. W.

    1985-01-01

    AlGaAs-GaAs heteroface space concentrator solar cells have been fabricated by metalorganic chemical vapor deposition. AMO efficiencies as high as 21.1% have been observed for both p-n and n-p structures under concentration (90 to 100X) at 25 C. Both cell structures are characterized by high quantum efficiencies, and their performances are close to those predicted by a realistic computer model. In agreement with the computer model, the n-p cell exhibits a higher short-circuit current density.

  15. Computational and experimental analysis of the flow in an annular centrifugal contactor

    NASA Astrophysics Data System (ADS)

    Wardle, Kent E.

    The annular centrifugal contactor has been developed for solvent extraction processes for recycling used nuclear fuel. The compact size and high efficiency of these contactors have made them the choice for advanced reprocessing schemes and a key piece of equipment for a proposed future advanced fuel cycle facility. While a sufficient base of experience exists to facilitate successful operation of current contactor technology, a more complete understanding of the fluid flow within the contactor would enable further advancements in design and operation of future units and greater confidence for use of such contactors in a variety of other solvent extraction applications. This research effort has coupled computational fluid dynamics modeling with a variety of experimental measurements and observations to provide a valid detailed analysis of the flow within the centrifugal contactor. CFD modeling of the free surface flow in the annular mixing zone using the Volume of Fluid (VOF) volume tracking method combined with Large Eddy Simulation (LES) of turbulence was found to have very good agreement with the experimental measurements and observations. A detailed study of the flow and mixing for different housing vane geometries was performed, and it was found that the four straight mixing vane geometry had greater mixing for the flow rate simulated and more predictable operation over a range of low to moderate flow rates. The separation zone was also modeled, providing a useful description of the flow in this region and identifying critical design features. It is anticipated that this work will form a foundation for additional efforts at improving the design and operation of centrifugal contactors and provide a framework for progress towards simulation of solvent extraction processes.

  16. Solving groundwater flow problems by conjugate-gradient methods and the strongly implicit procedure

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    The performance of the preconditioned conjugate-gradient method with three preconditioners is compared with the strongly implicit procedure (SIP) using a scalar computer. The preconditioners considered are the incomplete Cholesky (ICCG) and the modified incomplete Cholesky (MICCG), which require the same computer storage as SIP as programmed for a problem with a symmetric matrix, and a polynomial preconditioner (POLCG), which requires less computer storage than SIP. Although POLCG is usually used on vector computers, it is included here because of its small storage requirements. In this paper, published comparisons of the solvers are evaluated, all four solvers are compared for the first time, and new test cases are presented to provide a more complete basis by which the solvers can be judged for typical groundwater flow problems. Based on nine test cases, the following conclusions are reached: (1) SIP is actually as efficient as ICCG for some of the published, linear, two-dimensional test cases that were reportedly solved much more efficiently by ICCG; (2) SIP is more efficient than other published comparisons would indicate when common convergence criteria are used; and (3) for problems that are three-dimensional, nonlinear, or both, and for which common convergence criteria are used, SIP is often more efficient than ICCG, and is sometimes more efficient than MICCG.
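
    A minimal sketch of a preconditioned conjugate-gradient solve on a 2-D Poisson-type system of the kind that arises in groundwater flow; SciPy's incomplete LU stands in for the incomplete-Cholesky, MICCG and polynomial preconditioners compared in the report, and the grid size is an arbitrary choice.

    ```python
    # Sketch: CG with and without an incomplete-factorisation preconditioner on a
    # 2-D five-point Laplacian (a stand-in for a groundwater-flow system matrix).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, spilu, LinearOperator

    n = 100                                           # 100 x 100 grid
    I = sp.identity(n)
    T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n)), I)).tocsc()
    b = np.ones(n * n)

    ilu = spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete factorisation as preconditioner
    M = LinearOperator(A.shape, matvec=ilu.solve)

    iters = {"none": 0, "ilu": 0}
    def counter(key):
        def cb(xk):
            iters[key] += 1
        return cb

    x0, _ = cg(A, b, callback=counter("none"))
    x1, _ = cg(A, b, M=M, callback=counter("ilu"))
    print(f"CG iterations without / with preconditioning: {iters['none']} / {iters['ilu']}")
    print(f"residual norms: {np.linalg.norm(A @ x0 - b):.2e}, {np.linalg.norm(A @ x1 - b):.2e}")
    ```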

  17. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
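
    A small sketch of the measure-under-a-cap idea, assuming a Linux machine exposing the RAPL powercap sysfs interface; the sysfs paths vary by platform, writing the cap requires root, and this is not the authors' measurement toolchain.

    ```python
    # Sketch: measure package energy around a kernel via the Linux powercap (RAPL) sysfs
    # interface. Paths are platform-dependent assumptions; counter wrap-around is ignored.
    import time
    import numpy as np

    RAPL = "/sys/class/powercap/intel-rapl:0"        # package 0; adjust per machine

    def read_energy_uj():
        with open(f"{RAPL}/energy_uj") as f:
            return int(f.read())

    # To apply a 100 W cap (root required), one would write the limit in microwatts, e.g.:
    #   echo 100000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw

    a = np.random.rand(4000, 4000)
    e0, t0 = read_energy_uj(), time.time()
    a @ a                                            # compute-bound kernel (matrix multiply)
    e1, t1 = read_energy_uj(), time.time()

    joules = (e1 - e0) / 1e6
    print(f"{joules:.1f} J in {t1 - t0:.2f} s  ->  {joules / (t1 - t0):.1f} W average")
    ```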

  18. Investigating power capping toward energy-efficient scientific applications: Investigating Power Capping toward Energy-Efficient Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  19. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
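
    A worked example of the quantitative relationship quoted above (a 6-8% fuel-economy improvement per 10% weight reduction), applied with simple linear scaling to an assumed 1,800 kg, 30 mpg baseline vehicle.

    ```python
    # Worked example: fuel-economy gain from mass reduction, using the 6-8% per 10% figure.
    mass0_kg, mpg0 = 1800.0, 30.0          # baseline vehicle (assumed figures)
    mass_reduction = 0.20                  # 20% lighter, e.g. via Al/Mg/composite substitution
    gain_per_10pct = (0.06, 0.08)          # sensitivity range quoted in the report

    for g in gain_per_10pct:
        mpg = mpg0 * (1 + g * mass_reduction / 0.10)   # simple linear scaling (no compounding)
        print(f"{mass_reduction:.0%} mass reduction at {g:.0%}/10% sensitivity -> {mpg:.1f} mpg")
    ```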

  20. Efficiency of Energy Use in the United States

    ERIC Educational Resources Information Center

    Hirst, Eric; Moyers, John C.

    1973-01-01

    Describes ways to reduce energy consumption in transportation, space heating, and air conditioning. Greater efficiency for energy use from an engineering point of view is possible in present circumstances. (PS)
