Sample records for extremely computationally expensive

  1. Towards Wearable Cognitive Assistance

    DTIC Science & Technology

    2013-12-01

    Keywords: mobile computing, cloud... It presents a multi-tiered mobile system architecture that offers tight end-to-end latency bounds on compute-intensive cognitive assistance... to an entire neighborhood or an entire city is extremely expensive and time-consuming. Physical infrastructure in public spaces tends to evolve very

  2. Data Structures for Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahan, Simon

    As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations once thought to require expensive hardware designs and/or complex, special-purpose programming may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose “latency-tolerant” programming framework. One important developed application of the ideas underlying this framework is graph database technology supporting social network pattern matching used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy Laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
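
    The engineering application mentioned last reduces to solving a large unstructured sparse linear system. As a hedged illustration only (the grant's latency-tolerant framework is not shown), the sketch below assembles a small sparse system with SciPy and solves it both with a direct factorization and with a Krylov iteration; the matrix, sizes, and names are made up for the example.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Hypothetical example: assemble a small unstructured sparse system A x = b,
      # the kind that arises from nodal analysis of an electronic circuit.
      n = 1000
      A = sp.random(n, n, density=0.005, random_state=0, format="csr")
      A = A + A.T + sp.identity(n) * 10.0      # symmetric, strongly diagonal
      b = np.random.default_rng(0).standard_normal(n)

      # Direct factorization works for modest sizes; iterative Krylov solvers
      # (here conjugate gradient) scale better on large distributed systems.
      x_direct = spla.spsolve(A.tocsc(), b)
      x_iter, info = spla.cg(A, b)

      print("direct vs iterative max difference:", np.max(np.abs(x_direct - x_iter)))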

  3. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.

  4. Numerical Experiments with a Turbulent Single-Mode Rayleigh-Taylor Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloutman, L.D.

    2000-04-01

    Direct numerical simulation is a powerful tool for studying turbulent flows. Unfortunately, it is also computationally expensive and often beyond the reach of the largest, fastest computers. Consequently, a variety of turbulence models have been devised to allow tractable and affordable simulations of averaged flow fields. Unfortunately, these present a variety of practical difficulties, including the incorporation of varying degrees of empiricism and phenomenology, which leads to a lack of universality. This unsatisfactory state of affairs has led to the speculation that one can avoid the expense and bother of using a turbulence model by relying on the grid and numerical diffusion of the computational fluid dynamics algorithm to introduce a spectral cutoff on the flow field and to provide dissipation at the grid scale, thereby mimicking two main effects of a large eddy simulation model. This paper shows numerical examples of a single-mode Rayleigh-Taylor instability in which this procedure produces questionable results. We then show a dramatic improvement when two simple subgrid-scale models are employed. This study also illustrates the extreme sensitivity to initial conditions that is a common feature of turbulent flows.
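
    The abstract credits the improvement to two simple subgrid-scale models without naming them here. As one representative example of such a closure, the sketch below shows the classic Smagorinsky eddy viscosity on a 2D grid; it is an assumption for illustration, not necessarily a model used in the study.

      import numpy as np

      def smagorinsky_viscosity(u, v, dx, dy, cs=0.17):
          """Classic Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a 2D grid.

          u, v are velocity components sampled on a uniform grid; this is an
          illustrative closure, not the specific models used in the cited study.
          """
          dudx, dudy = np.gradient(u, dx, dy)
          dvdx, dvdy = np.gradient(v, dx, dy)
          # Strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
          s11, s22 = dudx, dvdy
          s12 = 0.5 * (dudy + dvdx)
          strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
          delta = np.sqrt(dx * dy)          # filter width tied to the grid scale
          return (cs * delta) ** 2 * strain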

  5. Extreme-Scale Stochastic Particle Tracing for Uncertain Unsteady Flow Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hanqi; He, Wenbin; Seo, Sangmin

    2016-11-13

    We present an efficient and scalable solution to estimate uncertain transport behaviors using stochastic flow maps (SFMs) for visualizing and analyzing uncertain unsteady flows. SFM computation is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We alleviate the computational cost by decoupling the time dependencies in SFMs so that we can process adjacent time steps independently and then compose them together for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We then parallelize over tasks—packets of particles in our design—to achieve high efficiency in MPI/thread hybrid programming. Such a task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 1M Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), tracing billions of particles in seconds.
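
    A minimal sketch of the core idea, Monte Carlo estimation of a stochastic flow map over one time interval with adjacent intervals handled independently, is given below. The noise model, the velocity field, and the crude mean-based composition of intervals are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def trace_interval(seeds, velocity, t0, t1, dt=0.01, sigma=0.05, runs=100, rng=None):
          """Monte Carlo estimate of a stochastic flow map over one time interval.

          Each seed is advected `runs` times through a velocity field perturbed by
          Gaussian noise (sigma); the per-seed ensemble of end positions samples
          the stochastic flow map for [t0, t1]. Names and noise model are illustrative.
          """
          rng = rng or np.random.default_rng()
          ends = np.empty((runs,) + seeds.shape)
          for r in range(runs):
              pos = seeds.copy()
              t = t0
              while t < t1:
                  pos += dt * (velocity(pos, t) + sigma * rng.standard_normal(pos.shape))
                  t += dt
              ends[r] = pos
          return ends

      # Decoupling in time: estimate the maps for [t0, t1] and [t1, t2] independently,
      # then compose by seeding the second interval with end points of the first.
      velocity = lambda p, t: np.stack([-p[:, 1], p[:, 0]], axis=1)   # solid-body rotation
      seeds = np.array([[1.0, 0.0], [0.5, 0.5]])
      first = trace_interval(seeds, velocity, 0.0, 1.0)
      second = trace_interval(first.mean(axis=0), velocity, 1.0, 2.0)  # crude composition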

  6. Approximating the Generalized Voronoi Diagram of Closely Spaced Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, John; Daniel, Eric; Pascucci, Valerio

    2015-06-22

    We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.
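
    The sketch below captures the labeling idea behind a GVD approximation on a uniform 2D grid: every grid vertex is assigned its nearest object, and label changes between neighboring vertices mark the approximate diagram. The paper's adaptive octree and distance transform are replaced here by brute force for clarity; all names are illustrative.

      import numpy as np

      def approximate_gvd(objects, resolution=256, extent=(0.0, 1.0)):
          """Brute-force grid approximation of a generalized Voronoi diagram in 2D.

          `objects` is a list of point sets (each an (m, 2) array) describing the
          geometric objects. Every grid vertex is labeled with its nearest object;
          vertices whose neighbors carry a different label approximate the GVD.
          This uniform grid stands in for the paper's adaptive octree.
          """
          lo, hi = extent
          xs = np.linspace(lo, hi, resolution)
          gx, gy = np.meshgrid(xs, xs, indexing="ij")
          grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

          # Distance from every grid vertex to every object (min over the object's points).
          dists = np.stack(
              [np.min(np.linalg.norm(grid[:, None, :] - obj[None, :, :], axis=2), axis=1)
               for obj in objects], axis=1)
          labels = np.argmin(dists, axis=1).reshape(resolution, resolution)

          # GVD vertices: label changes across a horizontal or vertical neighbor.
          boundary = np.zeros_like(labels, dtype=bool)
          boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
          boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
          return labels, boundary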

  7. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pair-wise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
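
    The following sketch illustrates one standard way to cut down on-line distance computations in metric space: pivot-based filtering with the triangle inequality. It conveys the flavor of the problem; the paper's own randomized indexing schemes differ in their details, and the helper names are illustrative.

      import random

      def nearest_neighbor(query, database, dist, pivots=None, num_pivots=8):
          """Find the database object closest to `query` while computing as few
          on-line distances as possible.

          Precomputed pivot distances give the triangle-inequality lower bound
          |d(q, p) - d(x, p)| <= d(q, x), used to skip objects that cannot beat
          the current best. This pivot scheme illustrates the idea; the paper's
          randomized index differs in detail.
          """
          pivots = pivots or random.sample(database, min(num_pivots, len(database)))
          pivot_table = {id(x): [dist(x, p) for p in pivots] for x in database}  # off-line in practice
          q_to_pivots = [dist(query, p) for p in pivots]                          # on-line

          best, best_d = None, float("inf")
          for x in database:
              lower = max(abs(qp - xp) for qp, xp in zip(q_to_pivots, pivot_table[id(x)]))
              if lower >= best_d:
                  continue                     # cannot be closer than the current best
              d = dist(query, x)               # expensive on-line distance
              if d < best_d:
                  best, best_d = x, d
          return best, best_d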

  8. Wedge sampling for computing clustering coefficients and triangle counts on large graphs

    DOE PAGES

    Seshadhri, C.; Pinar, Ali; Kolda, Tamara G.

    2014-05-08

    Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Despite the importance of these triadic measures, algorithms to compute them can be extremely expensive. We discuss the method of wedge sampling. This versatile technique allows for the fast and accurate approximation of various types of clustering coefficients and triangle counts. Furthermore, these techniques are extensible to counting directed triangles in digraphs. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state of the art, while providing nearly the accuracy of full enumeration.
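
    A compact sketch of wedge sampling for the global clustering coefficient and triangle count is given below, assuming an undirected simple graph stored as an adjacency dictionary; the data structure and sample size are illustrative choices.

      import random

      def wedge_sample_clustering(adj, samples=10000, rng=None):
          """Estimate the global clustering coefficient and triangle count by wedge sampling.

          adj: dict mapping vertex -> set of neighbors (undirected, simple graph).
          A wedge is a path u-v-w centered at v. Wedges are sampled uniformly by
          picking the center proportionally to d(d-1)/2, then two distinct neighbors.
          Fraction of closed wedges ~= global clustering coefficient;
          triangles ~= closed_fraction * total_wedges / 3.
          """
          rng = rng or random.Random()
          wedge_counts = {v: len(nbrs) * (len(nbrs) - 1) // 2 for v, nbrs in adj.items()}
          total_wedges = sum(wedge_counts.values())
          centers = list(wedge_counts)
          weights = [wedge_counts[v] for v in centers]

          closed = 0
          for _ in range(samples):
              v = rng.choices(centers, weights=weights, k=1)[0]
              u, w = rng.sample(list(adj[v]), 2)
              if w in adj[u]:
                  closed += 1
          frac = closed / samples
          return frac, frac * total_wedges / 3.0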

  9. ELOPTA: a novel microcontroller-based operant device.

    PubMed

    Hoffman, Adam M; Song, Jianjian; Tuttle, Elaina M

    2007-11-01

    Operant devices have been used for many years in animal behavior research, yet such devices are generally highly specialized and quite expensive. Although commercial models are somewhat adaptable and resilient, they are also extremely expensive and are controlled by difficult-to-learn proprietary software. As an alternative to commercial devices, we have designed and produced a fully functional, programmable operant device, using a PICmicro microcontroller (Microchip Technology, Inc.). The electronic operant testing apparatus (ELOPTA) is designed to deliver food when a study animal, in this case a bird, successfully depresses the correct sequence of illuminated keys. The device logs each keypress and can detect and log whenever a test animal is positioned at the device. Data can be easily transferred to a computer and imported into any statistical analysis software. At about 3% of the cost of a commercial device, ELOPTA will advance behavioral sciences, including behavioral ecology, animal learning and cognition, and ethology.

  10. Efficient Semiparametric Inference Under Two-Phase Sampling, With Applications to Genetic Association Studies.

    PubMed

    Tao, Ran; Zeng, Donglin; Lin, Dan-Yu

    2017-01-01

    In modern epidemiological and clinical studies, the covariates of interest may involve genome sequencing, biomarker assay, or medical imaging and thus are prohibitively expensive to measure on a large number of subjects. A cost-effective solution is the two-phase design, under which the outcome and inexpensive covariates are observed for all subjects during the first phase and that information is used to select subjects for measurements of expensive covariates during the second phase. For example, subjects with extreme values of quantitative traits were selected for whole-exome sequencing in the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP). Herein, we consider general two-phase designs, where the outcome can be continuous or discrete, and inexpensive covariates can be continuous and correlated with expensive covariates. We propose a semiparametric approach to regression analysis by approximating the conditional density functions of expensive covariates given inexpensive covariates with B-spline sieves. We devise a computationally efficient and numerically stable EM-algorithm to maximize the sieve likelihood. In addition, we establish the consistency, asymptotic normality, and asymptotic efficiency of the estimators. Furthermore, we demonstrate the superiority of the proposed methods over existing ones through extensive simulation studies. Finally, we present applications to the aforementioned NHLBI ESP.

  11. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also delivers this performance on quite modest numbers of standard cluster nodes.

  12. Soapy: an adaptive optics simulation written purely in Python for rapid concept development

    NASA Astrophysics Data System (ADS)

    Reeves, Andrew

    2016-07-01

    Soapy is a newly developed Adaptive Optics (AO) simulation which aims to be a flexible and fast-to-use tool-kit for many applications in the field of AO. It is written purely in the Python language, adding to and taking advantage of the already rich ecosystem of scientific libraries and programs. The simulation has been designed to be extremely modular, such that each component can be used stand-alone for projects which do not require a full end-to-end simulation. Ease of use, modularity and code clarity have been prioritised at the expense of computational performance. Though this means the code is not yet suitable for large studies of Extremely Large Telescope AO systems, it is well suited to education, exploration of new AO concepts and investigations of current generation telescopes.

  13. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much additional computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.

  14. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ*, which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
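
    The sketch below illustrates, under stated assumptions, how a narrow-banded approximate Λ-operator enters an accelerated Λ-iteration of the standard two-level form S = (1 - ε)J + εB, J = ΛS; the dense operator, bandwidth, and source term are placeholders, not PHOENIX/3D internals.

      import numpy as np

      # Minimal sketch of an accelerated Lambda iteration with a narrow-banded
      # approximate operator. Lambda here is a random, row-normalized stand-in
      # for the true operator; Lambda_star keeps only a few diagonals.
      rng = np.random.default_rng(1)
      n, eps, bandwidth = 200, 1e-3, 2
      Lambda = rng.random((n, n))
      Lambda /= Lambda.sum(axis=1, keepdims=True)          # crude normalization
      mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= bandwidth
      Lambda_star = np.where(mask, Lambda, 0.0)

      B = np.ones(n)                                       # Planck-like source term
      S = B.copy()
      M = np.eye(n) - (1.0 - eps) * Lambda_star            # small banded system per step
      for _ in range(200):
          J = Lambda @ S
          rhs = eps * B + (1.0 - eps) * (J - Lambda_star @ S)
          S_new = np.linalg.solve(M, rhs)
          if np.max(np.abs(S_new - S)) < 1e-10:
              break
          S = S_new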

  15. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  16. 48 CFR 9904.410-60 - Illustrations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... budgets for the other segment should be removed from B's G&A expense pool and transferred to the other...; all home office expenses allocated to Segment H are included in Segment H's G&A expense pool. (2) This... cost of scientific computer operations in its G&A expense pool. The scientific computer is used...

  17. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  18. Adaptive statistical iterative reconstruction use for radiation dose reduction in pediatric lower-extremity CT: impact on diagnostic image quality.

    PubMed

    Shah, Amisha; Rees, Mitchell; Kar, Erica; Bolton, Kimberly; Lee, Vincent; Panigrahy, Ashok

    2018-06-01

    For the past several years, increased levels of imaging radiation and cumulative radiation doses to children have been a significant concern. Although several measures have been taken to reduce radiation dose during computed tomography (CT) scans, the newer dose reduction software adaptive statistical iterative reconstruction (ASIR) has been an effective technique in reducing radiation dose. To our knowledge, no studies are published that assess the effect of ASIR on extremity CT scans in children. To compare radiation dose, image noise, and subjective image quality in pediatric lower extremity CT scans acquired with and without ASIR. The study group consisted of 53 patients imaged on a CT scanner equipped with ASIR software. The control group consisted of 37 patients whose CT images were acquired without ASIR. Image noise, Computed Tomography Dose Index (CTDI) and dose length product (DLP) were measured. Two pediatric radiologists rated the studies in subjective categories: image sharpness, noise, diagnostic acceptability, and artifacts. The CTDI (p value = 0.0184) and DLP (p value <0.0002) were significantly decreased with the use of ASIR compared with non-ASIR studies. However, the subjective ratings for sharpness (p < 0.0001) and diagnostic acceptability of the ASIR images (p < 0.0128) were decreased compared with standard, non-ASIR CT studies. Adaptive statistical iterative reconstruction reduces radiation dose for lower extremity CTs in children, but at the expense of diagnostic imaging quality. Further studies are warranted to determine the specific utility of ASIR for pediatric musculoskeletal CT imaging.

  19. 24 CFR 990.170 - Computation of utilities expense level (UEL): Overview.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... level (UEL): Overview. 990.170 Section 990.170 Housing and Urban Development Regulations Relating to... Expenses § 990.170 Computation of utilities expense level (UEL): Overview. (a) General. The UEL for each... by the payable consumption level multiplied by the inflation factor. The UEL is expressed in terms of...

  20. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which yields reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method of the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.

  1. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  2. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  3. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  4. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  5. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  6. 47 CFR 32.6124 - General purpose computers expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...

  7. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that the average error always drops to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend, not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method in solving complex multi-scale problems on Exascale machines.
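
    To make the idea of relaxed synchronization concrete, the sketch below applies an explicit finite-difference step for the 1D heat equation in which the halo values exchanged between processing elements may be one step stale. It illustrates the effect being analyzed, not the paper's asynchrony-tolerant schemes; names and the staleness model are assumptions.

      import numpy as np

      def async_heat_step(u, u_halo_old, alpha, use_stale_halo):
          """One explicit finite-difference step of u_t = u_xx on a PE's local block.

          u           : local values; u[0] and u[-1] hold the latest halo received
          u_halo_old  : (left, right) halo values from the previous step, i.e. what
                        is on hand if the neighbor's newest message has not arrived
          When use_stale_halo is True, the edge stencils consume the delayed halo,
          which is the relaxation of synchronization described above.
          """
          halo_left = u_halo_old[0] if use_stale_halo else u[0]
          halo_right = u_halo_old[1] if use_stale_halo else u[-1]
          padded = np.concatenate([[halo_left], u[1:-1], [halo_right]])
          new = u.copy()
          new[1:-1] = u[1:-1] + alpha * (padded[2:] - 2.0 * u[1:-1] + padded[:-2])
          return new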

  8. 41 CFR 301-10.451 - May I be reimbursed for the cost of collision damage waiver (CDW) or theft insurance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ALLOWABLE TRAVEL EXPENSES 10-TRANSPORTATION EXPENSES Special Conveyances Rental Automobiles § 301-10.451 May... Government includes full coverage insurance for damages resulting from an accident while performing official... cause extreme difficulty for an employee involved in an accident. ...

  9. 41 CFR 301-10.451 - May I be reimbursed for the cost of collision damage waiver (CDW) or theft insurance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ALLOWABLE TRAVEL EXPENSES 10-TRANSPORTATION EXPENSES Special Conveyances Rental Automobiles § 301-10.451 May... Government includes full coverage insurance for damages resulting from an accident while performing official... cause extreme difficulty for an employee involved in an accident. ...

  10. 41 CFR 301-10.451 - May I be reimbursed for the cost of collision damage waiver (CDW) or theft insurance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ALLOWABLE TRAVEL EXPENSES 10-TRANSPORTATION EXPENSES Special Conveyances Rental Automobiles § 301-10.451 May... Government includes full coverage insurance for damages resulting from an accident while performing official... cause extreme difficulty for an employee involved in an accident. ...

  11. 41 CFR 301-10.451 - May I be reimbursed for the cost of collision damage waiver (CDW) or theft insurance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ALLOWABLE TRAVEL EXPENSES 10-TRANSPORTATION EXPENSES Special Conveyances Rental Automobiles § 301-10.451 May... Government includes full coverage insurance for damages resulting from an accident while performing official... cause extreme difficulty for an employee involved in an accident. ...

  12. 41 CFR 301-10.451 - May I be reimbursed for the cost of collision damage waiver (CDW) or theft insurance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ALLOWABLE TRAVEL EXPENSES 10-TRANSPORTATION EXPENSES Special Conveyances Rental Automobiles § 301-10.451 May... Government includes full coverage insurance for damages resulting from an accident while performing official... cause extreme difficulty for an employee involved in an accident. ...

  13. Methodology and Estimates of Scour at Selected Bridge Sites in Alaska

    USGS Publications Warehouse

    Heinrichs, Thomas A.; Kennedy, Ben W.; Langley, Dustin E.; Burrows, Robert L.

    2001-01-01

    The U.S. Geological Survey estimated scour depths at 325 bridges in Alaska as part of a cooperative agreement with the Alaska Department of Transportation and Public Facilities. The department selected these sites from approximately 806 State-owned bridges as potentially susceptible to scour during extreme floods. Pier scour and contraction scour were computed for the selected bridges by using methods recommended by the Federal Highway Administration. The U.S. Geological Survey used a four-step procedure to estimate scour: (1) Compute magnitudes of the 100- and 500-year floods. (2) Determine cross-section geometry and hydraulic properties for each bridge site. (3) Compute the water-surface profile for the 100- and 500-year floods. (4) Compute contraction and pier scour. This procedure is unique because the cross sections were developed from existing data on file to make a quantitative estimate of scour. This screening method has the advantage of providing scour depths and bed elevations for comparison with bridge-foundation elevations without the time and expense of a field survey. Four examples of bridge-scour analyses are summarized in the appendix.

  14. Computational reduction strategies for the detection of steady bifurcations in incompressible fluid-dynamics: Applications to Coanda effect in cardiology

    NASA Astrophysics Data System (ADS)

    Pitton, Giuseppe; Quaini, Annalisa; Rozza, Gianluigi

    2017-09-01

    We focus on reducing the computational costs associated with the hydrodynamic stability of solutions of the incompressible Navier-Stokes equations for a Newtonian and viscous fluid in contraction-expansion channels. In particular, we are interested in studying steady bifurcations, occurring when non-unique stable solutions appear as physical and/or geometric control parameters are varied. The formulation of the stability problem requires solving an eigenvalue problem for a partial differential operator. An alternative to this approach is the direct simulation of the flow to characterize the asymptotic behavior of the solution. Both approaches can be extremely expensive in terms of computational time. We propose to apply Reduced Order Modeling (ROM) techniques to reduce the demanding computational costs associated with the detection of a type of steady bifurcations in fluid dynamics. The application that motivated the present study is the onset of asymmetries (i.e., symmetry breaking bifurcation) in blood flow through a regurgitant mitral valve, depending on the Reynolds number and the regurgitant mitral valve orifice shape.

  15. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
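
    A hedged sketch of the surrogate idea follows: fit a Gaussian process to a handful of expensive model evaluations, then estimate the failure probability by cheap Monte Carlo on the surrogate. The limit-state function, kernel, and sample sizes are made up, and the paper's Bayesian experimental design for choosing the training points is not shown.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_limit_state(x):
          """Stand-in for a costly computer model; failure when the output exceeds 0."""
          return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] - 0.8

      rng = np.random.default_rng(2)
      X_train = rng.uniform(-1.0, 1.0, size=(30, 2))       # few expensive evaluations
      y_train = expensive_limit_state(X_train)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
      gp.fit(X_train, y_train)

      # Failure probability from cheap Monte Carlo on the surrogate instead of the model.
      X_mc = rng.uniform(-1.0, 1.0, size=(100_000, 2))
      p_fail = np.mean(gp.predict(X_mc) > 0.0)
      print("estimated failure probability:", p_fail)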

  16. VAX CLuster upgrade: Report of a CPC task force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, J.; Berry, H.; Kessler, P.

    The CSCF VAX cluster provides interactive computing for 100 users during prime time, plus a considerable amount of daytime and overnight batch processing. While this cluster represents less than 10% of the VAX computing power at BNL (6 MIPS out of 70), it has served as an important center for this larger network, supporting special hardware and software too expensive to maintain on every machine. In addition, it is the only unrestricted facility available to VAX/VMS users (other machines are typically dedicated to special projects). This committee's analysis shows that the cpu's on the CSCF cluster are currently badly oversaturated, frequently giving extremely poor interactive response. Short batch jobs (a necessary part of interactive work) typically take 3 to 4 times as long to execute as they would on an idle machine. There is also an immediate need for more scratch disk space and user permanent file space.

  17. Proceeding On : Parallelisation Of Critical Code Passages In PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Arkenberg, Mario; Wichert, Viktoria; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach here, by introducing especially adapted, parallel numerical methods and correspondingly parallelising time critical code passages. In the following, we present our work on PHOENIX/3D. While parallelisation is generally worthwhile, it requires revision of time-consuming subroutines with respect to separability of localised data and variables in order to determine the optimal approach. Of course, the same applies to the code structure. The importance of this ongoing work can be showcased by recently derived benchmark results, which were generated utilising MPI and OpenMP. Furthermore, the need for a careful and thorough choice of an adequate, machine dependent setup is discussed.

  18. Application of pulsed multi-ion irradiations in radiation damage research: A stochastic cluster dynamics simulation study

    NASA Astrophysics Data System (ADS)

    Hoang, Tuan L.; Nazarov, Roman; Kang, Changwoo; Fan, Jiangyuan

    2018-07-01

    Under the multi-ion irradiation conditions present in accelerated material-testing facilities or fission/fusion nuclear reactors, the combined effects of atomic displacements with radiation products may induce complex synergies in the structural materials. However, limited access to multi-ion irradiation facilities and the lack of computational models capable of simulating the evolution of complex defects and their synergies make it difficult to understand the actual physical processes taking place in the materials under these extreme conditions. In this paper, we propose the application of pulsed single/dual-beam irradiation as a replacement for the expensive steady triple-beam irradiation to study radiation damage in materials under multi-ion irradiation.

  19. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  20. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  1. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  2. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  3. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  4. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701

  5. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
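
    The Differential Evolution Monte Carlo building block can be sketched in a few lines. The update below is the textbook DE-MC move (each chain proposes along the difference of two other chains and accepts with the Metropolis rule); the adaptive Metropolis phase, asynchrony, and fault-tolerance machinery of SAChES are not represented, and all names are illustrative.

      import numpy as np

      def demc_step(chains, log_post, gamma=None, noise=1e-6, rng=None):
          """One Differential Evolution Monte Carlo update of an ensemble of chains.

          chains   : (n_chains, n_params) current states
          log_post : callable returning the log posterior density of one state
          """
          rng = rng or np.random.default_rng()
          n_chains, n_params = chains.shape
          gamma = gamma if gamma is not None else 2.38 / np.sqrt(2 * n_params)
          new = chains.copy()
          for i in range(n_chains):
              # Pick two other chains and propose along their difference.
              a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
              proposal = chains[i] + gamma * (chains[a] - chains[b]) \
                         + noise * rng.standard_normal(n_params)
              # Metropolis accept/reject.
              if np.log(rng.random()) < log_post(proposal) - log_post(chains[i]):
                  new[i] = proposal
          return new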

  6. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    PubMed

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
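
    The per-pair MDR kernel that such a GPU implementation evaluates millions of times can be sketched as follows; the genotype coding, the use of balanced accuracy, and the omission of cross-validation are simplifying assumptions for illustration.

      import numpy as np

      def mdr_pair_accuracy(geno_a, geno_b, status):
          """Core of Multifactor Dimensionality Reduction for one pair of SNPs.

          geno_a, geno_b : integer genotype codes (0/1/2) per individual
          status         : 1 for cases, 0 for controls
          Each two-locus genotype cell is labeled high-risk when its case/control
          ratio exceeds the overall ratio; the returned balanced accuracy measures
          how well that labeling separates cases from controls. A GPU version
          evaluates many such pairs in parallel; this is the per-pair CPU kernel.
          """
          cases = status == 1
          overall_ratio = cases.sum() / max((~cases).sum(), 1)
          cells = geno_a * 3 + geno_b                       # assumes 3 genotype codes
          case_counts = np.bincount(cells[cases], minlength=9)
          ctrl_counts = np.bincount(cells[~cases], minlength=9)
          high_risk = case_counts > overall_ratio * ctrl_counts
          predicted = high_risk[cells]
          tp = np.sum(predicted & cases)
          tn = np.sum(~predicted & ~cases)
          sens = tp / max(cases.sum(), 1)
          spec = tn / max((~cases).sum(), 1)
          return 0.5 * (sens + spec)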

  7. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
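
    For context, the brute-force baseline mentioned above is just a central-difference loop over design parameters; the sketch below shows why its cost grows as two expensive analyses per parameter. The function f and vector x are hypothetical placeholders for the coupled aero-structural analysis and its design variables.

      import numpy as np

      def finite_difference_gradient(f, x, h=1e-4):
          """Brute-force central-difference gradient used as the comparison baseline.

          Every component costs two evaluations of the (expensive) coupled
          aero-structural analysis f, which is exactly why the GSE/modal approach
          pays off as the number of design parameters grows.
          """
          x = np.asarray(x, dtype=float)
          grad = np.empty_like(x)
          for i in range(x.size):
              step = np.zeros_like(x)
              step[i] = h
              grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
          return grad    # 2 * n expensive analyses for n design parameters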

  8. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  9. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  10. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  11. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  12. 47 CFR 69.156 - Marketing expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Marketing expenses. 69.156 Section 69.156... Computation of Charges for Price Cap Local Exchange Carriers § 69.156 Marketing expenses. Effective July 1, 2000, the marketing expenses formerly allocated to the common line and traffic sensitive baskets, and...

  13. Developing a healthcare law library.

    PubMed

    Sconyers, J M

    1998-01-01

    Legal materials are expensive, bulky, and extremely time sensitive. Selecting the appropriate means of ensuring easy access to easily-retrievable, timely legal materials is of extreme importance to any lawyer. The author gives an overview of the various means of retrieving necessary research, including the strengths and weaknesses of each of the various options.

  14. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    NASA Astrophysics Data System (ADS)

    Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.

    2017-07-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.

  15. Family Child Care Calendar-Keeper[TM] 2001: A Record Keeping System Including Nutrition Information for Child Care Providers. Twenty-Fourth Edition.

    ERIC Educational Resources Information Center

    Beuch, Beth, Ed.; Beuch, Ethel, Ed.; Schloff, Pam, Ed.

    Noting that accurate recordkeeping for tax purposes is extremely important for family child care providers, this calendar provides a format for recording typical family child care expenses and other information. Included are the following: (1) monthly expense charts with categories matching Schedule C; (2) attendance and payment log; (3) payment…

  16. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  17. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... private expense). Do not use the clause when the only deliverable items are computer software or computer software documentation (see 227.72), commercial items developed exclusively at private expense (see 227... the clause in architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013...

  18. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945

  19. Unsteady Three-Dimensional Simulation of a Shear Coaxial GO2/GH2 Rocket Injector with RANS and Hybrid-RANS-LES/DES Using Flamelet Models

    NASA Technical Reports Server (NTRS)

    Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.

    2015-01-01

    Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive and the one-dimensional tools are not designed to capture the highly complex, and multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted, to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable validation effort was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single element configuration by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, from both steady and unsteady Reynolds Averaged Navier-Stokes (U/RANS) calculations, large-eddy simulations (LES), detached eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and Hybrid RANS-LES/Detached Eddy Simulations (DES). Computation costs will be reported, along with comparison of accuracy and cost to much less expensive two-dimensional RANS simulations of the same geometry.

  20. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
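    The exponential-average form of the interaction entropy can be evaluated directly from the interaction energies an MD run already produces. A minimal numpy sketch, assuming a 1-D array of gas-phase protein-ligand interaction energies in kcal/mol sampled along the trajectory:

```python
import numpy as np

def interaction_entropy(e_int, temperature=300.0):
    """Estimate -T*dS from a series of protein-ligand interaction energies
    (kcal/mol) sampled along an MD trajectory.

    Implements the exponential-average form of the interaction entropy:
        -T*dS = kT * ln < exp(beta * dE) >,  with dE = E_int - <E_int>.
    """
    k_b = 0.0019872041            # Boltzmann constant, kcal/(mol*K)
    beta = 1.0 / (k_b * temperature)
    e_int = np.asarray(e_int, dtype=float)
    de = e_int - e_int.mean()     # fluctuation of the interaction energy
    return (1.0 / beta) * np.log(np.mean(np.exp(beta * de)))

# Example with synthetic interaction energies (kcal/mol).
rng = np.random.default_rng(0)
energies = -45.0 + 2.0 * rng.normal(size=5000)
print(interaction_entropy(energies))
```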

  1. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Computation of project expense level (PEL). 990.165 Section 990.165 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF...

  2. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  3. Advanced 3D Characterization and Reconstruction of Reactor Materials FY16 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fromm, Bradley; Hauch, Benjamin; Sridharan, Kumar

    2016-12-01

    A coordinated effort to link advanced materials characterization methods and computational modeling approaches is critical to future success for understanding and predicting the behavior of reactor materials that operate at extreme conditions. The difficulty and expense of working with nuclear materials have inhibited the use of modern characterization techniques on this class of materials. Likewise, mesoscale simulation efforts have been impeded due to insufficient experimental data necessary for initialization and validation of the computer models. The objective of this research is to develop methods to integrate advanced materials characterization techniques developed for reactor materials with state-of-the-art mesoscale modeling and simulation tools. Research to develop broad-ion beam sample preparation, high-resolution electron backscatter diffraction, and digital microstructure reconstruction techniques, and methods for integration of these techniques into mesoscale modeling tools, is detailed. Results for both irradiated and un-irradiated reactor materials are presented for FY14 - FY16 and final remarks are provided.

  4. A Computationally Efficient Hypothesis Testing Method for Epistasis Analysis using Multifactor Dimensionality Reduction

    PubMed Central

    Pattin, Kristine A.; White, Bill C.; Barney, Nate; Gui, Jiang; Nelson, Heather H.; Kelsey, Karl R.; Andrew, Angeline S.; Karagas, Margaret R.; Moore, Jason H.

    2008-01-01

    Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free data mining method for detecting, characterizing, and interpreting epistasis in the absence of significant main effects in genetic and epidemiologic studies of complex traits such as disease susceptibility. The goal of MDR is to change the representation of the data using a constructive induction algorithm to make nonadditive interactions easier to detect using any classification method such as naïve Bayes or logistic regression. Traditionally, MDR-constructed variables have been evaluated with a naïve Bayes classifier that is combined with 10-fold cross validation to obtain an estimate of predictive accuracy or generalizability of epistasis models. Traditionally, we have used permutation testing to statistically evaluate the significance of models obtained through MDR. The advantage of permutation testing is that it controls for false-positives due to multiple testing. The disadvantage is that permutation testing is computationally expensive. This is an important issue that arises in the context of detecting epistasis on a genome-wide scale. The goal of the present study was to develop and evaluate several alternatives to large-scale permutation testing for assessing the statistical significance of MDR models. Using data simulated from 70 different epistasis models, we compared the power and type I error rate of MDR using a 1000-fold permutation test with hypothesis testing using an extreme value distribution (EVD). We find that this new hypothesis testing method provides a reasonable alternative to the computationally expensive 1000-fold permutation test and is 50 times faster. We then demonstrate this new method by applying it to a genetic epidemiology study of bladder cancer susceptibility that was previously analyzed using MDR and assessed using a 1000-fold permutation test. PMID:18671250
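    The gist of the alternative is to fit an extreme value distribution to a modest number of permutation results and read the p-value from its tail, instead of running the full 1000 permutations. A hedged sketch using SciPy's generalized extreme value distribution; the exact EVD family and permutation count used in the study are not restated in the abstract, so both are assumptions here.

```python
import numpy as np
from scipy.stats import genextreme

def evd_p_value(observed_accuracy, null_accuracies):
    """Fit a generalized extreme value distribution to a small set of
    permutation (null) testing accuracies and return the upper-tail
    probability of the observed MDR model accuracy."""
    shape, loc, scale = genextreme.fit(null_accuracies)
    return genextreme.sf(observed_accuracy, shape, loc=loc, scale=scale)

# Example with 20 simulated null accuracies instead of 1000 permutations.
rng = np.random.default_rng(0)
nulls = rng.normal(0.55, 0.02, size=20)   # illustrative null accuracies only
print(evd_p_value(0.65, nulls))
```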

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Juliane

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
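    The loop such a framework runs can be sketched generically: evaluate an initial design, fit a surrogate, and repeatedly spend one expensive evaluation on the most promising candidate produced by perturbing the current best point. This is a plain single-objective sketch, not the MISO code, and it omits the mixed-integer handling and the selectable design, surrogate, and sampling strategies mentioned above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_minimize(f, lb, ub, n_init=10, n_iter=40, n_cand=200, seed=0):
    """Minimal surrogate-assisted minimization of an expensive function f
    over box bounds [lb, ub] (1-D numpy arrays of equal length)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_init, dim))     # initial design
    y = np.array([f(x) for x in X])                 # expensive evaluations
    for _ in range(n_iter):
        model = RBFInterpolator(X, y)               # fit the surrogate
        best = X[np.argmin(y)]
        # Candidates: random perturbations of the current best point.
        cand = best + rng.normal(scale=0.1 * (ub - lb), size=(n_cand, dim))
        cand = np.clip(cand, lb, ub)
        x_new = cand[np.argmin(model(cand))]        # cheap surrogate screening
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))                  # one expensive call per iteration
    i = np.argmin(y)
    return X[i], y[i]

# Example on a cheap stand-in for an expensive black box.
xb, fb = surrogate_minimize(lambda x: np.sum((x - 0.3) ** 2),
                            lb=np.zeros(3), ub=np.ones(3))
print(xb, fb)
```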

  6. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  7. 26 CFR 1.50B-1 - Definitions of WIN expenses and WIN employees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... employee. (c) Trade or business expenses. The term “WIN expenses” includes only salaries and wages which... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Definitions of WIN expenses and WIN employees. 1... INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-1 Definitions of...

  8. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
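    A linear version of the idea, compressing the high-dimensional output with PCA and fitting an inexpensive interpolant to the retained coefficients, can be sketched as follows; the kernel-PCA variant and the actual aerodynamic data sets are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

class LinearROSM:
    """Reduced order surrogate: design variables -> PCA coefficients -> field."""

    def __init__(self, n_components=5):
        self.pca = PCA(n_components=n_components)

    def fit(self, X, fields):
        # X: (n_samples, n_design_vars); fields: (n_samples, n_grid_points)
        coeffs = self.pca.fit_transform(fields)      # compress the outputs
        self.rbf = RBFInterpolator(X, coeffs)        # interpolate coefficients
        return self

    def predict(self, X_new):
        return self.pca.inverse_transform(self.rbf(X_new))

# Example with synthetic "simulation" snapshots.
rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 2))
grid = np.linspace(0, 1, 200)
fields = np.sin(4 * np.outer(X[:, 0], grid)) + X[:, 1:2] * grid
model = LinearROSM(n_components=4).fit(X, fields)
print(model.predict(X[:2]).shape)   # (2, 200)
```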

  9. 47 CFR 32.6112 - Motor vehicle expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Motor vehicle expense. 32.6112 Section 32.6112 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS.../or to other Plant Specific Operations Expense accounts. These amounts shall be computed on the basis...

  10. Blind detection of isolated astrophysical pulses in the spatial Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Schmid, Natalia A.; Prestage, Richard M.

    2018-07-01

    We present a novel approach for the detection of isolated transients in pulsar surveys and fast radio transient observations. Rather than the conventional approach of performing a computationally expensive blind dispersion measure search, we take the spatial Fourier transform (SFT) of short (˜ few seconds) sections of data. A transient will have a characteristic signature in the SFT domain, and we present a blind statistic which may be used to detect this signature at an empirical zero false alarm rate. The method has been evaluated using simulations, and also applied to two fast radio burst observations. In addition to its use for current observations, we expect this method will be extremely beneficial for future multibeam observations made by telescopes equipped with phased array feeds.

  11. Blind detection of isolated astrophysical pulses in the spatial Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Schmid, Natalia A.; Prestage, Richard M.

    2018-04-01

    We present a novel approach for the detection of isolated transients in pulsar surveys and fast radio transient observations. Rather than the conventional approach of performing a computationally expensive blind DM search, we take the spatial Fourier transform (SFT) of short (˜ few seconds) sections of data. A transient will have a characteristic signature in the SFT domain, and we present a blind statistic which may be used to detect this signature at an empirical zero False Alarm Rate (FAR). The method has been evaluated using simulations, and also applied to two fast radio burst observations. In addition to its use for current observations, we expect this method will be extremely beneficial for future multi-beam observations made by telescopes equipped with phased array feeds.

  12. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  13. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  14. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  15. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  16. 47 CFR 32.6121 - Land and building expense.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operate the telecommunications network shall be charged to Account 6531, Power Expense, and the cost of separately metered electricity used for operating specific types of equipment, such as computers, shall be... SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6121 Land and...

  17. 7 CFR 1484.53 - What are the requirements for documenting and reporting contributions?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contribution must be documented by the Cooperator, showing the method of computing non-cash contributions, salaries, and travel expenses. (b) Each Cooperator must keep records of the methods used to compute the value of non-cash contributions, and (1) Copies of invoices or receipts for expenses paid by the U.S...

  18. 26 CFR 1.213-1 - Medical, dental, etc., expenses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... medical care includes the diagnosis, cure, mitigation, treatment, or prevention of disease. Expenses paid... taxable year for insurance that constitute expenses paid for medical care shall, for purposes of computing... care of the taxpayer, his spouse, or a dependent of the taxpayer and not be compensated for by...

  19. 26 CFR 1.556-2 - Adjustments to taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of deductions for trade or business expenses and depreciation which are allocable to the operation... computed without the deduction of the amount disallowed under section 556(b)(5), relating to expenses and... disallowed under section 556(b)(5), relating to expenses and depreciation applicable to property of the...

  20. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  1. Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Gair, Jonathan R.

    2014-12-01

    Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over, using a prior distribution constructed with Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
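    The core ingredient, interpolating the model error over parameter space with Gaussian process regression and carrying its predictive uncertainty forward, can be illustrated with a toy one-dimensional sketch; the training "waveform differences" below are synthetic, and the full marginalized likelihood is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy setting: at a few parameter values we know the difference between an
# accurate (expensive) waveform and the cheap approximate model, summarized
# here as a single number per parameter value.
theta_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
diff_train = 0.05 * np.sin(6 * theta_train).ravel()    # synthetic differences

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-6),
                              normalize_y=True)
gp.fit(theta_train, diff_train)

# At a new parameter value the GP supplies both a correction to the cheap
# model and a variance that can be folded into the likelihood, which is how
# the waveform uncertainty enters the inference.
theta_new = np.array([[0.37]])
mean_corr, std_corr = gp.predict(theta_new, return_std=True)
print(mean_corr, std_corr)
```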

  2. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural network based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each `neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.

  3. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.

  4. Computer simulations of space-borne meteorological systems on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Halem, M.

    1984-01-01

    Because of the extreme expense involved in developing and flight testing meteorological instruments, an extensive series of numerical modeling experiments to simulate the performance of meteorological observing systems was performed on the CYBER 205. The studies compare the relative importance of different global measurements of individual and composite systems of the meteorological variables needed to determine the state of the atmosphere. The assessments are made in terms of each system's ability to improve 12-hour global forecasts. Each experiment involves the daily assimilation of simulated data that is obtained from a data set called "nature". This data comes from two sources: first, a long two-month general circulation integration with the GLAS 4th Order Forecast Model and, second, global analyses prepared by the National Meteorological Center, NOAA, from the current observing systems twice daily.

  5. As-built data capture of complex piping using photogrammetry technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morray, J.P.; Ziu, C.G.

    1995-11-01

    Plant owners face an increasingly difficult and expensive task of updating drawings, regarding both the plant logic and the physical layout. Through the use of photogrammetry technology, H-H spectrum has created a complete operating plant data capture service, with the result that the task of recording accurate plant configurations has become assured and economical. The technology has proven to be extremely valuable for the capture of complex piping configurations, as well as entire plant facilities, and yields accuracy within 1/4 inch. The method uses photographs and workstation technology to quickly document and compute the plant layout, with all components, regardless of size, included in the resulting model. The system has the capability to compute actual 3-D coordinates of any point based on previous triangulations, allowing for an immediate assessment of accuracy. This ensures a consistent level of accuracy, which is impossible to achieve in a manual approach. Due to the speed of the process, the approach is very important in hazardous/difficult environments such as nuclear power facilities or offshore platforms.

  6. Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.

    PubMed

    Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel

    2010-01-01

    Stroke is the main cause of motor and cognitive disabilities requiring therapy in the world. Therefore, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training, with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to include hand and finger rehabilitation; and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy System and summarize the results of the clinical trial.

  7. Learning a force field for the martensitic phase transformation in Zr

    NASA Astrophysics Data System (ADS)

    Zong, Hongxiang; Pilania, Ghanshyam; Ramprasad, Rampi; Lookman, Turab

    Atomic simulations provide an effective means to understand the underlying physics of martensitic transformations under extreme conditions. However, this is still a challenge for certain phase transforming metals due to the lack of an accurate classical force field. Quantum molecular dynamics (QMD) simulations are accurate but expensive. During the course of QMD simulations, similar configurations are constantly visited and revisited. Machine Learning can effectively learn from past visits and, therefore, eliminate such redundancies. In this talk, we will discuss the development of a hybrid ML-QMD method in which on-demand, on-the-fly quantum mechanical (QM) calculations are performed to accelerate calculations of interatomic forces at much lower computational costs. Using zirconium as a model system, for which accurate atomistic potentials are currently unavailable, we will demonstrate the feasibility and effectiveness of our approach. Specifically, the computed structural phase transformation behavior within the ML-QMD approach will be compared with available experimental results. Furthermore, results on phonons, stacking fault energies, and activation barriers for the homogeneous martensitic transformation in Zr will be presented.

  8. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    Illinois at Urbana-Champaign Abstract Hypersonic vehicles are subjected to extreme acoustic, thermal and mechanical loading with strong spatial and temporal...07/15/2012 Reporting Period End Date 07/14/2015 Abstract Hypersonic vehicles are subjected to extreme acoustic, thermal and mechanical loading with...gradients and for extended periods of time. Long duration, 3-D simulations of non-linear response of these vehicles , is prohibitively expensive using

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
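    To make the comparison concrete, here is a hedged sketch of the two routes to a multiresolution fingerprint: convolution with first-derivative-of-Gaussian filters at several scales versus a discrete wavelet decomposition. The PyWavelets package, the Daubechies wavelet, and the scales are assumptions for illustration; the filters actually used in the study are not restated in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
import pywt  # PyWavelets, assumed available

def fingerprint_gaussian(spectrum, sigmas=(2, 4, 8, 16)):
    """Traditional fingerprint: first-derivative-of-Gaussian filtering at
    several scales, stacked into one matrix."""
    return np.vstack([gaussian_filter1d(spectrum, s, order=1) for s in sigmas])

def fingerprint_wavelet(spectrum, wavelet="db2", level=4):
    """Wavelet-based fingerprint: detail coefficients of a discrete wavelet
    decomposition, one row per scale (zero-padded to equal length)."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)[1:]  # drop approximation
    width = max(len(c) for c in coeffs)
    return np.vstack([np.pad(c, (0, width - len(c))) for c in coeffs])

spectrum = (np.sin(np.linspace(0, 20, 512))
            + 0.05 * np.random.default_rng(2).normal(size=512))
print(fingerprint_gaussian(spectrum).shape, fingerprint_wavelet(spectrum).shape)
```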

  10. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of todays engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.

  11. Fractures from trampolines: results from a national database, 2002 to 2011.

    PubMed

    Loder, Randall T; Schultz, William; Sabatino, Meagan

    2014-01-01

    No study specifically analyzes trampoline fracture patterns across a large population. The purpose of this study was to determine such patterns. We queried the National Electronic Injury Surveillance System database for trampoline injuries between 2002 and 2011, and the patients were analyzed by age, sex, race, anatomic location of the injury, geographical location of the injury, and disposition from the emergency department (ED). Statistical analyses were performed with SUDAAN 10 software. Estimated expenses were determined using 2010 data. There were an estimated 1,002,735 ED visits for trampoline-related injuries; 288,876 (29.0%) sustained fractures. The average age for those with fractures was 9.5 years; 92.7% were aged 16 years or younger; 51.7% were male, 95.1% occurred at home, and 9.9% were admitted. The fractures were located in the upper extremity (59.9%), lower extremity (35.7%), and axial skeleton (spine, skull/face, rib/sternum) (4.4%-spine 1.0%, skull/face 2.9%, rib/sternum 0.5%). Those in the axial skeleton were older (16.5 y) than the upper extremity (8.7 y) or lower extremity (10.0 y) (P<0.0001) and more frequently male (67.9%). Lower extremity fractures were more frequently female (54.0%) (P<0.0001). The forearm (37%) and elbow (19%) were most common in the upper extremity; elbow fractures were most frequently admitted (20.0%). The tibia/fibula (39.5%) and ankle (31.5%) were most common in the lower extremity; femur fractures were most frequently admitted (57.9%). Cervical (36.4%) and lumbar (24.7%) were most common locations in the spine; cervical fractures were the most frequently admitted (75.6%). The total ED expense for all trampoline injuries over this 10-year period was $1.002 billion and $408 million for fractures. Trampoline fractures most frequently involve the upper extremity followed by the lower extremity, >90% occur in children. The financial burden to society is large. Further efforts for prevention are needed.

  12. Using quantum chemistry muscle to flex massive systems: How to respond to something perturbing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen

    Computational chemistry uses the theoretical advances of quantum mechanics and the algorithmic and hardware advances of computer science to give insight into chemical problems. It is currently possible to do highly accurate quantum chemistry calculations, but the most accurate methods are very computationally expensive. Thus it is only feasible to do highly accurate calculations on small molecules, since typically more computationally efficient methods are also less accurate. The overall goal of my dissertation work has been to try to decrease the computational expense of calculations without decreasing the accuracy. In particular, my dissertation work focuses on fragmentation methods, intermolecular interaction methods, analytic gradients, and taking advantage of new hardware.

  13. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  14. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  15. 47 CFR 36.311 - Network Support/General Support Expenses-Accounts 6110 and 6120 (Class B Telephone Companies...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Class A Telephone Companies). 36.311 Section 36.311 Telecommunication FEDERAL COMMUNICATIONS COMMISSION..., office equipment, and general purpose computers. (b) The expenses in these account are apportioned among...

  16. Assessing Domestic Right-Wing Extremism Using the Theory of Collective Behavior

    DTIC Science & Technology

    2009-12-01

    DOMESTIC RIGHT-WING EXTREMISM USING THE THEORY OF COLLECTIVE BEHAVIOR Major Arnold C. Baldoza, United States Air Force B.S., Ateneo de Manila...emerged, but these [were] expensive avocations. … Public transportation became more expensive…. Income increased for those who continued to work...September 14, 2001, http://www.washingtonpost.com/ac2/wp-dyn/A28620-2001Sep14 (accessed October 17, 2009). 56 roots are attributed to Alexis de

  17. Evaluation of COTS Electronic Parts for Extreme Temperature Use in NASA Missions

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik

    2008-01-01

    Electronic systems capable of extreme temperature operation are required for many future NASA space exploration missions where it is desirable to have smaller, lighter, and less expensive spacecraft and probes. Presently, spacecraft on-board electronics are maintained at about room temperature by use of thermal control systems. An Extreme Temperature Electronics Program at the NASA Glenn Research Center focuses on development of electronics suitable for space exploration missions. The effects of exposure to extreme temperatures and thermal cycling are being investigated for commercial-off-the-shelf components as well as for components specially developed for harsh environments. An overview of this program along with selected data is presented.

  18. Weathering a Perfect Storm from Space

    USGS Publications Warehouse

    Love, Jeffrey J.

    2016-01-01

    Extreme space-weather events — intense solar and geomagnetic storms — have occurred in the past: most recently in 1859, 1921 and 1989. So scientists expect that, sooner or later, another extremely intense space-weather event will strike Earth again. Such storms have the potential to cause widespread interference with and damage to technological systems. A National Academy of Sciences study projects that an extreme space-weather event could end up costing the American economy more than $1 trillion. The question now is whether or not we will take the actions needed to avoid such expensive consequences. Let’s assume that we do. Below is an imagined scenario of how, sometime in the future, an extreme space-weather event might play out.

  19. Implementation of a low-cost Interim 21CFR11 compliance solution for laboratory environments.

    PubMed

    Greene, Jack E

    2003-01-01

    In the recent past, compliance with 21CFR11 has become a major buzzword within the pharmaceutical and biotechnology industries. While commercial solutions exist, implementation and validation are expensive and cumbersome. Frequent implementation of new features via point releases further complicates purchasing decisions by making it difficult to weigh the risk of non-compliance against the costs of too frequent upgrades. This presentation discusses a low-cost interim solution to the problem. While this solution does not address 100% of the issues raised by 21CFR11, it does implement and validate: (1) computer system security; (2) backup and restore ability on the electronic records store; and (3) an automated audit trail mechanism that captures the date, time and user identification whenever electronic records are created, modified or deleted. When coupled with enhanced procedural controls, this solution provides an acceptable level of compliance at extremely low cost.

  20. Formulation of a drinkable peanut-based therapeutic food for malnourished children using plant sources.

    PubMed

    Nabuuma, Deborah; Nakimbugwe, Dorothy; Byaruhanga, Yusuf B; Saalia, Firibu Kwesi; Phillips, Robert Dixon; Chen, Jinru

    2013-06-01

    High ingredient costs continue to hamper local production of therapeutic foods (TFs). Development of formulations without milk, the most expensive ingredient, is one way of reducing cost. This study formulated a ready-to-drink peanut-based TF that matched the nutrient composition of F100 using plant sources. Three least-cost formulations, namely A, B, and C, were designed using computer formulation software, with peanuts, beans, sesame, cowpeas, and grain amaranth as ingredients. A 100 g portion of the TF provided 101-111 kcal, 5 g protein and 5.3-6.5 g fat. Consumer acceptability hedonic tests showed that the products were liked (extremely and moderately) by 62-65% of mothers. These results suggest that nutrient-dense TFs formulated from only plant sources have the potential to be used in the rehabilitation phase of the management of malnourished children after clinical testing.
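    Least-cost formulation of this kind is a small linear program: minimize ingredient cost subject to minimum nutrient targets per 100 g. The sketch below uses made-up ingredient prices and nutrient contents purely to show the shape of the computation; it is not the study's formulation software or composition data.

```python
import numpy as np
from scipy.optimize import linprog

# Columns: peanut, bean, sesame, cowpea, amaranth (all values per gram;
# illustrative numbers only, not the study's composition data).
cost    = np.array([0.004, 0.002, 0.006, 0.002, 0.003])   # $ per gram
energy  = np.array([5.7,   3.4,   5.9,   3.4,   3.7])     # kcal per gram
protein = np.array([0.26,  0.21,  0.18,  0.24,  0.14])    # g protein per gram
fat     = np.array([0.49,  0.01,  0.50,  0.01,  0.07])    # g fat per gram

# Minimize cost of a 100 g blend subject to minimum nutrient targets.
# linprog uses <= constraints, so the minimum targets are written as negatives.
A_ub = -np.vstack([energy, protein, fat])
b_ub = -np.array([101.0, 5.0, 5.3])          # per-100 g targets from the abstract
A_eq = np.ones((1, 5))
b_eq = np.array([100.0])                      # total mass of the blend

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5)
print(res.x, res.fun)                         # grams of each ingredient, total cost
```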

  1. Ceramic Adhesive for High Temperatures

    NASA Technical Reports Server (NTRS)

    Stevens, Everett G.

    1987-01-01

    Fused-silica/magnesium-phosphate adhesive resists high temperatures and vibrations. New adhesive unaffected by extreme temperatures and vibrations. Assuring direct bonding of gap fillers to tile sidewalls, adhesive obviates expensive and time-consuming task of removal, treatment, and replacement of tiles.

  2. High-resolution downscaling for hydrological management

    NASA Astrophysics Data System (ADS)

    Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos

    2017-04-01

    Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre-scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.

  3. 76 FR 9349 - Jim Woodruff Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-17

    ... month. Southeastern would compute its purchased power obligation for each delivery point monthly... rates to include a pass-through of purchased power expenses. The capacity and energy charges to preference customers can be reduced because purchased power expenses will be recovered in a separate, pass...

  4. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  5. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis.

    PubMed

    You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing

    2013-01-01

    Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, current PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and further, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only protein sequence information. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of the results on initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. In addition, PCA-EELM is faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful new tool for predicting PPIs with excellent performance in less time.
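    The classifier half of the pipeline, an ensemble of extreme learning machines combined by majority vote on PCA-reduced features, is straightforward to sketch; the protein-sequence feature encodings themselves are not reproduced, and the hidden-layer size and ensemble size below are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer, least-squares
    read-out (binary labels 0/1)."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # random hidden features
        self.beta = np.linalg.pinv(H) @ y             # output weights in one shot
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

def pca_eelm_predict(X_train, y_train, X_test, n_components=50, n_models=7):
    """PCA for dimension reduction, then a majority vote over several ELMs
    initialized with different random hidden layers."""
    pca = PCA(n_components=min(n_components, X_train.shape[1])).fit(X_train)
    Xtr, Xte = pca.transform(X_train), pca.transform(X_test)
    votes = np.array([ELMClassifier(seed=s).fit(Xtr, y_train).predict(Xte)
                      for s in range(n_models)])
    return (votes.mean(axis=0) > 0.5).astype(int)     # majority vote
```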

  6. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
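    The center-selection step rests on a two-objective non-dominated sort of the points already evaluated: minimize the expensive function value while maximizing the distance to the nearest other evaluated point. A small sketch of that sort follows; the tabu tenure, the surrogate candidate search, and the parallel evaluation are omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist

def select_centers(X, f, n_centers):
    """Pick up to n_centers evaluated points that are Pareto-optimal for
    (minimize f, maximize distance to the nearest other evaluated point)."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    min_dist = D.min(axis=1)                    # exploration objective
    obj = np.column_stack([f, -min_dist])       # both objectives to be minimized
    nondominated = []
    for i in range(len(f)):
        dominated = np.any(np.all(obj <= obj[i], axis=1) &
                           np.any(obj < obj[i], axis=1))
        if not dominated:
            nondominated.append(i)
    # Prefer the best function values among the non-dominated points.
    nondominated = sorted(nondominated, key=lambda i: f[i])[:n_centers]
    return X[nondominated]

rng = np.random.default_rng(3)
X = rng.uniform(size=(40, 2))
f = np.sum((X - 0.5) ** 2, axis=1)
print(select_centers(X, f, n_centers=4))
```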

  7. A Case against Computer Symbolic Manipulation in School Mathematics Today.

    ERIC Educational Resources Information Center

    Waits, Bert K.; Demana, Franklin

    1992-01-01

    Presented are two reasons discouraging the use of computer symbolic manipulation systems in school mathematics at present: the cost of computer laboratories or expensive pocket computers, and the impracticality of exact solution representations. Although development with this technology in mathematics education advances, graphing calculators are recommended to…

  8. Low-cost method for producing extreme ultraviolet lithography optics

    DOEpatents

    Folta, James A [Livermore, CA; Montcalm, Claude [Fort Collins, CO; Taylor, John S [Livermore, CA; Spiller, Eberhard A [Mt. Kisco, NY

    2003-11-21

    Spherical and non-spherical optical elements produced by standard optical figuring and polishing techniques are extremely expensive. Such surfaces can be cheaply produced by diamond turning; however, the roughness of the diamond-turned surface prevents its use for EUV lithography. These ripples are smoothed with a coating of polyimide before applying a 60-period Mo/Si multilayer to reflect a wavelength of 134 Å; peak reflectivities close to 63% have been obtained. The savings in cost are about a factor of 100.

  9. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    NASA Astrophysics Data System (ADS)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

    Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
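    The two ingredients named here, a hash table and a Z-order key, fit in a few lines: interleave the bits of the integer cell coordinates to form a Morton key and store each node in a dictionary keyed by (level, key). A minimal sketch, not tied to any particular AMR code:

```python
def morton3d(x, y, z, bits=21):
    """Interleave the bits of three integer coordinates into a Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Pointerless octree: nodes live in a hash table keyed by (level, Morton key).
octree = {}

def insert(level, x, y, z, payload=None):
    octree[(level, morton3d(x, y, z))] = payload

def parent(level, x, y, z):
    """Parent lookup needs no stored pointers: halve the cell coordinates."""
    return octree.get((level - 1, morton3d(x >> 1, y >> 1, z >> 1)))

insert(1, 0, 0, 0, "root child")
insert(2, 1, 0, 1, "leaf")
print(parent(2, 1, 0, 1))   # -> "root child"
```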

  10. Design by Prototype: Examples from the National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    Mulenburg, Gerald M.; Gundo, Daniel P.

    2002-01-01

    This paper describes and provides examples of a technique called Design-by-Prototype used in the development of research hardware at the National Aeronautics and Space Administration's (NASA) Ames Research Center. This is not a new idea. Artisans and great masters have used prototyping as a design technique for centuries. They created prototypes to try out their ideas before making the primary artifact they were planning. This abstract is itself a prototype for others to use in determining the value of the paper it describes. At the Ames Research Center, Design-by-Prototype is used for developing unique, one-of-a-kind hardware for small, high-risk projects. The need for this new/old process is the proliferation of computer "design tools" that can result in both excessive time expended in design, and a lack of embedded reality in the final product. Despite creating beautiful three-dimensional models and detailed computer drawings that can consume hundreds of engineering hours, the resulting designs can be extremely difficult to make, requiring many changes that add to the cost and schedule. Much design time can be saved and expensive rework eliminated using Design-by-Prototype.

  11. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.

  12. Usage of Parameterized Fatigue Spectra and Physics-Based Systems Engineering Models for Wind Turbine Component Sizing: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Taylor; Guo, Yi; Veers, Paul

    Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.

  13. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
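    For readers unfamiliar with the primitive, rotation-mode CORDIC rotates a vector toward a target angle using only shifts and adds per iteration, which is why it maps so well onto compact systolic hardware. A plain-Python sketch of the algorithm (floating point here; a hardware element would use fixed point):

```python
import math

def cordic_rotate(x, y, angle, n_iter=32):
    """Rotate (x, y) by `angle` radians using rotation-mode CORDIC:
    only additions and halvings (shifts in hardware) per iteration."""
    # Pre-computed elementary angles atan(2^-i) and the gain correction K.
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0           # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return K * x, K * y

print(cordic_rotate(1.0, 0.0, math.pi / 3))   # ~ (cos 60 deg, sin 60 deg)
```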

  14. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of the effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing accurate approximation of the solution and accurate prediction of the average behavior of the system.
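    The "short bursts plus extrapolation" closure described here follows the pattern of coarse projective integration: run the fine-scale model briefly, estimate the time derivative of the averaged variables, then take a large extrapolation step. A generic, hedged sketch of that pattern, with a placeholder fine-scale stepper standing in for the SPH right-hand side:

```python
import numpy as np

def projective_integration(fine_step, state, dt_fine, n_burst, dt_coarse, n_outer,
                           average=np.mean):
    """Coarse time integration by short fine-scale bursts.

    fine_step(state, dt) advances the full (high-dimensional) state; `average`
    maps it to the coarse observable whose slope is extrapolated. This is a
    generic sketch of the closure idea, not the paper's SPH code.
    """
    history = []
    for _ in range(n_outer):
        # 1) short burst of the expensive fine-scale model
        a0 = average(state)
        for _ in range(n_burst):
            state = fine_step(state, dt_fine)
        a1 = average(state)
        # 2) estimate the coarse time derivative and take a big step
        slope = (a1 - a0) / (n_burst * dt_fine)
        a_projected = a1 + slope * dt_coarse
        # 3) re-initialize the fine state consistently with the projected average
        state = state + (a_projected - a1)
        history.append(a_projected)
    return np.array(history)

# Toy fine model: many particles relaxing toward zero.
fine = lambda s, dt: s - dt * s
print(projective_integration(fine, np.random.default_rng(4).normal(size=1000),
                             dt_fine=1e-3, n_burst=20, dt_coarse=0.2, n_outer=5))
```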

  15. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  16. Noise in Neuronal and Electronic Circuits: A General Modeling Framework and Non-Monte Carlo Simulation Techniques.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2017-08-01

    The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits both in time and frequency domain. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
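    As a concrete, much-simplified instance of the coarse-graining the paper automates, the sketch below replaces a population of two-state ion channels (a discrete-state, continuous-time Markov chain) with its diffusion-approximation SDE for the open fraction, and checks the stationary mean and fluctuation size against the exact Markov-chain values. The rates, channel count, and integration settings are illustrative assumptions; the paper's framework handles far more general channel and synapse models and performs non-Monte Carlo analysis on top of the resulting SDEs.

    ```python
    import numpy as np

    # Two-state channel: closed -> open at rate alpha, open -> closed at rate beta.
    alpha, beta, N = 40.0, 60.0, 500         # rates (1/s) and number of channels (illustrative)
    dt, T = 1e-4, 20.0
    rng = np.random.default_rng(2)

    x = alpha / (alpha + beta)               # open fraction, started at steady state
    xs = []
    for _ in range(int(T / dt)):
        drift = alpha * (1.0 - x) - beta * x
        diff = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
        x += drift * dt + diff * np.sqrt(dt) * rng.normal()   # Euler-Maruyama step
        x = min(max(x, 0.0), 1.0)            # keep the fraction physical
        xs.append(x)

    xs = np.array(xs)
    p = alpha / (alpha + beta)
    print("SDE mean/std :", xs.mean(), xs.std())
    print("CTMC theory  :", p, np.sqrt(p * (1.0 - p) / N))
    ```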

  17. 78 FR 50374 - Proposed Information Collection; Comment Request; Information and Communication Technology Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... expenses (purchases; and operating leases and rental payments) for four types of information and communication technology equipment and software (computers and peripheral equipment; ICT equipment, excluding computers and peripherals; electromedical and electrotherapeutic apparatus; and computer software, including...

  18. Study of Volumetrically Heated Ultra-High Energy Density Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocca, Jorge J.

    2016-10-27

    Heating dense matter to millions of degrees is important for applications, but requires complex and expensive methods. The major goal of the project was to demonstrate, using a compact laser, the creation of a new ultra-high energy density plasma regime characterized by simultaneously extremely high temperature and high density, and to study it by combining experimental measurements and advanced simulations. We have demonstrated that trapping of intense femtosecond laser pulses deep within ordered nanowire arrays can heat near-solid-density matter into a new ultra hot plasma regime. Extreme electron densities and temperatures of several tens of million degrees were achieved using laser pulses of only 0.5 J energy from a compact laser. Our x-ray spectra and simulations showed that extremely highly ionized plasma volumes several micrometers in depth are generated by irradiation of gold and nickel nanowire arrays with femtosecond laser pulses of relativistic intensities. We obtained extraordinarily high degrees of ionization (e.g., we peeled 52 electrons from gold atoms, and up to 26 electrons from nickel atoms). In the process we generated Gigabar pressures only exceeded in the central hot spot of highly compressed thermonuclear fusion plasmas. The plasma created after the dissolved wires expand, collide, and thermalize is computed to have a thermal energy density of 0.3 GJ cm^-3 and a pressure of 1-2 Gigabar. These are pressures only exceeded in highly compressed thermonuclear fusion plasmas. Scaling these results to higher laser intensities promises to create plasmas with temperatures and pressures exceeding those in the center of the sun.

  19. Low-Cost Terminal Alternative for Learning Center Managers. Final Report.

    ERIC Educational Resources Information Center

    Nix, C. Jerome; And Others

    This study established the feasibility of replacing high performance and relatively expensive computer terminals with less expensive ones adequate for supporting specific tasks of Advanced Instructional System (AIS) at Lowry AFB, Colorado. Surveys of user requirements and available devices were conducted and the results used in a system analysis.…

  20. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2-to-10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
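    The paper uses a modified false-nearest-neighbors test adapted to forced structural responses; the sketch below implements only the standard version (a Kennel-style ratio test on a delay-embedded scalar signal) to show the basic mechanics. The threshold, delay, and test signal are illustrative assumptions.

    ```python
    import numpy as np

    def false_nearest_neighbors(x, max_dim=10, tau=1, rtol=15.0):
        """Fraction of false nearest neighbors versus embedding dimension."""
        x = np.asarray(x, dtype=float)
        fractions = []
        for d in range(1, max_dim + 1):
            n = len(x) - (d + 1) * tau          # leave room for the (d+1)-th coordinate
            emb = np.column_stack([x[i * tau: i * tau + n] for i in range(d)])
            false = 0
            for i in range(n):
                dist = np.linalg.norm(emb - emb[i], axis=1)
                dist[i] = np.inf
                j = np.argmin(dist)             # nearest neighbor in dimension d
                extra = abs(x[i + d * tau] - x[j + d * tau])
                if dist[j] > 0 and extra / dist[j] > rtol:
                    false += 1                   # neighbor separates when a coordinate is added
            fractions.append(false / n)
        return fractions

    # Example: a noisy sine should need only a low embedding dimension
    t = np.linspace(0, 40 * np.pi, 2000)
    signal = np.sin(t) + 0.01 * np.random.default_rng(3).normal(size=t.size)
    print(false_nearest_neighbors(signal, max_dim=6, tau=10))
    ```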

  1. 24 CFR 5.603 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (including a substance abuse or mental health treatment program), or other work activities. Extremely low... shall have the meanings set forth below: Adjusted income. See § 5.611. Annual income. See § 5.609. Child care expenses. Amounts anticipated to be paid by the family for the care of children under 13 years of...

  2. Financing Facilities 101

    ERIC Educational Resources Information Center

    Sawyer, Thomas H.

    2006-01-01

    Successful sport entities have a variety of stable revenue sources to meet increasing expenses. There are three major categories of revenue sources: public, private, and a combination of public and private sources. It is extremely important for the sport manager to ensure that all varieties of revenue streams are maintained. The public sources of…

  3. Wrongful Adoption: Law, Policy and Practice.

    ERIC Educational Resources Information Center

    Freundlich, Madelyn; Peterson, Lisa

    The past decade has seen an increase in cases where adoptive parents fail to receive accurate or complete information about a child's physical, emotional, or developmental problems or about the child's birth family and history. In these cases adoptive parents are confronted with extremely expensive medical care or mental health care. This…

  4. EVALUATING THE POTENTIAL EFFICACY OF THREE ANTIFUNGAL SEALANTS OF DUCT LINER AND GALVANIZED STEEL AS USED IN HVAC SYSTEMS

    EPA Science Inventory

    Current recommendations for remediation of fiberglass duct materials contaminated with fungi specify complete removal, which can be extremely expensive, but in-place duct cleaning may not provide adequate protection from regrowth of fungal contamination. Therefore, a common pract...

  5. Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program

    ERIC Educational Resources Information Center

    Perri, M. J.; Weber, S. H.

    2014-01-01

    A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.

  6. Computing Systems | High-Performance Computing | NREL

    Science.gov Websites

    investigate, build, and test models of complex phenomena or entire integrated systems that cannot be directly observed or manipulated in the lab, or that would be too expensive or time-consuming. Models and visualizations

  7. Final Report of the Project "From the finite element method to the virtual element method"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Gyrya, Vitaliy

    The Finite Element Method (FEM) is a powerful numerical tool that is being used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable for polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.

  8. Ultrahigh-order Maxwell solver with extreme scalability for electromagnetic PIC simulations of plasmas

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri; Vay, Jean-Luc

    2018-07-01

    The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent modeling at sufficient accuracy on present supercomputers. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.

  9. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and automatically computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  10. Efficient bootstrap estimates for tail statistics

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
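    A minimal numerical check of the central claim, with illustrative sample sizes and a synthetic Weibull sample standing in for a long model record: bootstrapping a high quantile from only the largest order statistics gives essentially the same confidence interval as resampling the full record, at a fraction of the cost.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sample = rng.weibull(1.4, size=50_000)        # stand-in for a long model time series
    q, B = 0.999, 500                             # target quantile and bootstrap replicates
    k = int(np.ceil((1.0 - q) * len(sample)))     # the q-quantile is the k-th largest value

    def bootstrap_quantile_ci(data):
        """95% CI for the q-quantile; `data` is either the full sample or only its top entries,
        so the order statistic is always counted from the largest value."""
        stats = np.empty(B)
        for b in range(B):
            resample = rng.choice(data, size=len(data), replace=True)
            stats[b] = np.sort(resample)[-k]
        return np.percentile(stats, [2.5, 97.5])

    full_ci = bootstrap_quantile_ci(sample)
    subset_ci = bootstrap_quantile_ci(np.sort(sample)[-1000:])   # keep only the top 2% of entries

    print("full-sample CI:", full_ci)
    print("subset CI     :", subset_ci)
    ```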

  11. TinyOS-based quality of service management in wireless sensor networks

    USGS Publications Warehouse

    Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.

    2009-01-01

    Previously the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows for dynamic QoS for prioritized communications by continually adjusting the treatment of communication packages according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.

  12. Incorporation of a two metre long PET scanner in STIR

    NASA Astrophysics Data System (ADS)

    Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.

    2015-09-01

    The Explorer project aims to investigate the potential benefits of a total-body, 2-metre-long PET scanner. The following investigation incorporates this scanner into the STIR library and demonstrates the capabilities and weaknesses of existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed but at the expense of high memory and processing time demands. FBP requires 4 hours on a single core; OSEM requires 2 hours per iteration if run in parallel on 15 cores of a high-performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary, if this scatter correction is the method of choice.

  13. Hydrocode and Molecular Dynamics modelling of uniaxial shock wave experiments on Silicon

    NASA Astrophysics Data System (ADS)

    Stubley, Paul; McGonegle, David; Patel, Shamim; Suggit, Matthew; Wark, Justin; Higginbotham, Andrew; Comley, Andrew; Foster, John; Rothman, Steve; Eggert, Jon; Kalantar, Dan; Smith, Ray

    2015-06-01

    Recent experiments have provided further evidence that the response of silicon to shock compression has anomalous properties, not described by the usual two-wave elastic-plastic response. A recent experimental campaign on the Orion laser in particular has indicated a complex multi-wave response. While Molecular Dynamics (MD) simulations can offer a detailed insight into the response of crystals to uniaxial compression, they are extremely computationally expensive. For this reason, we are adapting a simple quasi-2D hydrodynamics code to capture phase change under uniaxial compression, and the intervening mixed phase region, keeping track of the stresses and strains in each of the phases. This strain information is of such importance because a large number of shock experiments use diffraction as a key diagnostic, and these diffraction patterns depend solely on the elastic strains in the sample. We present here a comparison of the new hydrodynamics code with MD simulations, and show that the simulated diffraction taken from the code agrees qualitatively with measured diffraction from our recent Orion campaign.

  14. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James W.

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e., moving data either between levels of memory or between processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g., A(i), B(i, j+k, k+3*m-7, …), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
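    The 6-word reproducible accumulator itself is not reproduced here; the sketch below only illustrates the underlying problem, that floating-point addition is not associative so summation order changes the result, together with one standard mitigation, Kahan compensated summation (which, unlike the accumulator described above, reduces but does not fully eliminate order dependence). The data and sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = (rng.normal(size=100_000) * 1e6).astype(np.float32)

    def naive_sum(values):
        s = np.float32(0.0)
        for v in values:
            s = np.float32(s + v)          # each addition rounds to float32
        return s

    def kahan_sum(values):
        """Compensated summation: carry the rounding error of each addition forward."""
        s = np.float32(0.0)
        c = np.float32(0.0)                # running compensation for lost low-order bits
        for v in values:
            y = np.float32(v - c)
            t = np.float32(s + y)
            c = np.float32(np.float32(t - s) - y)
            s = t
        return s

    print("forward order :", naive_sum(x))         # these two generally differ:
    print("reverse order :", naive_sum(x[::-1]))   # float32 addition is not associative
    print("kahan         :", kahan_sum(x))         # much closer to the float64 reference
    print("float64 ref   :", np.sum(x, dtype=np.float64))
    ```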

  15. Upper extremity pain and computer use among engineering graduate students.

    PubMed

    Schlossberg, Eric B; Morrow, Sandra; Llosa, Augusto E; Mamary, Edward; Dietrich, Peter; Rempel, David M

    2004-09-01

    The objective of this study was to investigate risk factors associated with persistent or recurrent upper extremity and neck pain among engineering graduate students. A random sample of 206 Electrical Engineering and Computer Science (EECS) graduate students at a large public university completed an online questionnaire. Approximately 60% of respondents reported upper extremity or neck pain attributed to computer use and reported a mean pain severity score of 4.5 (+/-2.2; scale 0-10). In a final logistic regression model, female gender, years of computer use, and hours of computer use per week were significantly associated with pain. The high prevalence of upper extremity pain reported by graduate students suggests a public health need to identify interventions that will reduce symptom severity and prevent impairment.

  16. 41 CFR 301-10.301 - How do I compute my mileage reimbursement?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false How do I compute my...-TRANSPORTATION EXPENSES Privately Owned Vehicle (POV) § 301-10.301 How do I compute my mileage reimbursement? You compute mileage reimbursement by multiplying the distance traveled, determined under § 301-10.302 of this...

  17. Is Your School Y2K-OK?

    ERIC Educational Resources Information Center

    Bates, Martine G.

    1999-01-01

    The most vulnerable Y2K areas for schools are networked computers, free-standing personal computers, software, and embedded chips in utilities such as telephones and fire alarms. Expensive, time-consuming procedures and software have been developed for testing and bringing most computers into compliance. Districts need a triage prioritization…

  18. Understanding the Internet.

    ERIC Educational Resources Information Center

    Oblinger, Diana

    The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…

  19. "Mini", "Midi" and the Student.

    ERIC Educational Resources Information Center

    Edwards, Perry; Broadwell, Bruce

    Mini- and midi-computers have been introduced into the computer science program at Sierra College to afford students more direct contact with computers. The college's administration combined with the Science and Business departments to share the expense and utilization of the program. The National Cash Register Century 100 and the Data General…

  20. 48 CFR 970.5227-1 - Rights in data-facilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... software. (2) Computer software, as used in this clause, means (i) computer programs which are data... software. The term “data” does not include data incidental to the administration of this contract, such as... this clause, means data, other than computer software, developed at private expense that embody trade...

  1. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  2. Extending Strong Scaling of Quantum Monte Carlo to the Exascale

    NASA Astrophysics Data System (ADS)

    Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul

    Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac functions. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac functions and diagnosis of heart diseases. Conventional methods are dependent on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching on learning based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the accessional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible to be used for volume estimation of either joint bi-ventricles (LV and RV) or individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enables diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.

  4. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003

  5. Project APhiD: A Lorenz-gauged A-Φ decomposition for parallelized computation of ultra-broadband electromagnetic induction in a fully heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Weiss, Chester J.

    2013-08-01

    An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and, by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and over length scales from 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
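    The solver described above is a custom parallel implementation for the coupled vector/scalar-potential system; as a small illustration of the matrix-free idea itself, the sketch below feeds SciPy's QMR a LinearOperator whose matrix-vector product is computed on the fly for a 1D finite-volume Poisson problem, so no matrix entries are ever stored. The problem size and boundary conditions are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, qmr

    n = 200
    h = 1.0 / (n + 1)

    def apply_laplacian(u):
        """Apply the 1D finite-volume Laplacian 'on the fly'; no matrix is ever assembled."""
        out = np.empty_like(u)
        out[1:-1] = (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
        out[0] = (2.0 * u[0] - u[1]) / h**2            # homogeneous Dirichlet ends
        out[-1] = (2.0 * u[-1] - u[-2]) / h**2
        return out

    # The operator is symmetric, so the transpose product QMR needs is the same routine.
    A = LinearOperator((n, n), matvec=apply_laplacian, rmatvec=apply_laplacian, dtype=float)

    x_grid = np.linspace(h, 1.0 - h, n)
    b = np.pi**2 * np.sin(np.pi * x_grid)              # forcing whose exact solution is sin(pi x)

    x, info = qmr(A, b)
    print("converged:", info == 0, " max error:", np.abs(x - np.sin(np.pi * x_grid)).max())
    ```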

  6. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost with the use of multifidelity approximations. As the data-worth analysis involves a large number of expectation estimates, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimates obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational costs in the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.
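    The reservoir application is not reproduced here; the sketch below shows the core MLMC estimator on a toy problem, estimating the expected terminal value of a geometric Brownian motion with Euler discretizations of increasing resolution, where coupled fine/coarse paths give the level corrections and most samples are taken on the cheap coarse levels. All parameters and sample counts are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    X0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

    def euler_pair(n_samples, level):
        """Return coupled (fine, coarse) Euler estimates of X_T using shared Brownian increments."""
        nf = 2 ** level
        dt_f = T / nf
        dW = rng.normal(scale=np.sqrt(dt_f), size=(n_samples, nf))
        Xf = np.full(n_samples, X0)
        for i in range(nf):
            Xf = Xf + r * Xf * dt_f + sigma * Xf * dW[:, i]
        if level == 0:
            return Xf, np.zeros(n_samples)
        nc, dt_c = nf // 2, 2 * T / nf
        dWc = dW[:, ::2] + dW[:, 1::2]              # the coarse path sees the summed increments
        Xc = np.full(n_samples, X0)
        for i in range(nc):
            Xc = Xc + r * Xc * dt_c + sigma * Xc * dWc[:, i]
        return Xf, Xc

    # MLMC: many cheap coarse samples, few expensive fine ones
    levels, samples = [0, 1, 2, 3, 4, 5], [200_000, 50_000, 12_000, 3_000, 800, 200]
    estimate = 0.0
    for level, n in zip(levels, samples):
        fine, coarse = euler_pair(n, level)
        estimate += np.mean(fine - coarse)          # telescoping sum of level corrections

    print("MLMC estimate :", estimate)
    print("exact value   :", X0 * np.exp(r * T))
    ```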

  7. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    NASA Astrophysics Data System (ADS)

    Ru, Pengkai

    Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. Being one of the most important optimization based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC, would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. We tried to strike a balance in this dissertation by employing a state representation technique, namely, the state dependent coefficient (SDC) representation. This new technique would render an improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded only by a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.

  8. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
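    The analog ensemble evaluated above follows the usual construction: find the past coarse-model states most similar to the current one and use their co-located observations as the ensemble. The sketch below is a deliberately minimal, single-predictor version with synthetic data standing in for MERRA and the site observations, so the similarity metric, training record, and all parameters are illustrative assumptions rather than the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic training record: coarse-model wind speed and co-located "observed" wind speed.
    n_train = 20_000
    coarse_train = rng.gamma(shape=2.0, scale=4.0, size=n_train)
    obs_train = 1.1 * coarse_train + 0.8 * rng.normal(size=n_train)   # unknown true relation

    def analog_ensemble(coarse_value, k=25):
        """Find the k historical coarse values closest to the current one and return the
        corresponding observations as an ensemble (deterministic estimate = ensemble mean)."""
        idx = np.argsort(np.abs(coarse_train - coarse_value))[:k]
        return obs_train[idx]

    # New coarse-model estimates to be downscaled
    for c in [2.0, 6.5, 12.0]:
        ens = analog_ensemble(c)
        lo, hi = np.quantile(ens, [0.05, 0.95])
        print(f"coarse={c:5.1f}  ensemble mean={ens.mean():5.2f}  5-95% range=({lo:.2f}, {hi:.2f})")
    ```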

  9. The Balancing of Parental Support and Pressure in Fostering Collegiate Athletes

    ERIC Educational Resources Information Center

    Coles, Jon

    2017-01-01

    Enrolling children in youth sport is an American tradition to keep children active while learning life lessons such as teamwork, commitment, and work ethic. However, youth sport is becoming extremely expensive and demanding, resulting in high parental involvement. Research shows over-involved parents cause stress, anxiety, and even burnout. The…

  10. A Robust Epoxy Resins @ Stearic Acid-Mg(OH)2 Micronanosheet Superhydrophobic Omnipotent Protective Coating for Real-Life Applications.

    PubMed

    Si, Yifan; Guo, Zhiguang; Liu, Weimin

    2016-06-29

    Superhydrophobic coatings have extremely high application value and practicability. However, some difficult problems such as weak mechanical strength, the need for expensive toxic reagents, and a complex preparation process are all hard to avoid, and these problems have impeded the superhydrophobic coating's real-life application for a long time. Here, we demonstrate one kind of omnipotent epoxy resins @ stearic acid-Mg(OH)2 superhydrophobic coating via a simple antideposition route and one-step superhydrophobization process. The whole preparation process is facile, and no expensive toxic reagents are needed. This omnipotent coating can be applied on any solid substrate with great waterproof ability, excellent mechanical stability, and chemical durability, and it can be stored in a realistic environment for more than 1 month. More significantly, this superhydrophobic coating also has four protective abilities (antifouling, anticorrosion, anti-icing, and flame retardancy) to cope with a variety of possible extreme natural environments. Therefore, this omnipotent epoxy resins @ stearic acid-Mg(OH)2 superhydrophobic coating not only satisfies real-life needs but also has great application potential in many respects.

  11. De Novo Protein Structure Prediction

    NASA Astrophysics Data System (ADS)

    Hung, Ling-Hong; Ngan, Shing-Chung; Samudrala, Ram

    An unparalleled amount of sequence data is being made available from large-scale genome sequencing efforts. The data provide a shortcut to the determination of the function of a gene of interest, as long as there is an existing sequenced gene with similar sequence and of known function. This has spurred structural genomic initiatives with the goal of determining as many protein folds as possible (Brenner and Levitt, 2000; Burley, 2000; Brenner, 2001; Heinemann et al., 2001). The purpose of this is twofold: First, the structure of a gene product can often lead to direct inference of its function. Second, since the function of a protein is dependent on its structure, direct comparison of the structures of gene products can be more sensitive than the comparison of sequences of genes for detecting homology. Presently, structural determination by crystallography and NMR techniques is still slow and expensive in terms of manpower and resources, despite attempts to automate the processes. Computer structure prediction algorithms, while not providing the accuracy of the traditional techniques, are extremely quick and inexpensive and can provide useful low-resolution data for structure comparisons (Bonneau and Baker, 2001). Given the immense number of structures which the structural genomic projects are attempting to solve, there would be a considerable gain even if the computer structure prediction approach were applicable to a subset of proteins.

  12. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step to diagnose some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases that are usually done by pathologists using an optical microscope. This process is time-consuming, extremely tedious, and expensive and needs experienced experts in this field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be so effective. Segmentation of WBCs is usually a first step in developing a computer-aided diagnosis system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages including (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from cell's image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that similarity measures, precision, and sensitivity respectively were 92.07, 96.07, and 94.30% for nucleus segmentation and 92.93, 97.41, and 93.78% for cell segmentation. In addition, statistical analysis presents high similarity between manual segmentation and the results obtained by the proposed method.
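    The sketch below follows the same three-stage outline (intensity clustering to find cell pixels, then a distance-transform watershed to split touching objects) on a synthetic image, since the paper's data are not available; the synthetic blobs, the two-cluster k-means, and the marker threshold are illustrative assumptions rather than the paper's exact procedure.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.cluster import KMeans
    from skimage.segmentation import watershed

    # Synthetic stand-in for a stained microscopy image: dark "cells" on a bright background,
    # two of them overlapping so the separation step has something to do.
    img = np.full((200, 200), 220.0)
    yy, xx = np.mgrid[:200, :200]
    for cy, cx, rad in [(70, 70, 30), (70, 115, 30), (140, 150, 28)]:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 < rad ** 2] = 90.0
    img += np.random.default_rng(8).normal(scale=5.0, size=img.shape)

    # Stages 1-2: k-means on intensity separates cell pixels from background.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
    labels = labels.reshape(img.shape)
    dark_cluster = np.argmin([img[labels == i].mean() for i in range(2)])
    mask = labels == dark_cluster

    # Stage 3: distance transform + watershed to split touching objects.
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.8 * distance.max())
    segmented = watershed(-distance, markers, mask=mask)
    print("objects found:", segmented.max())
    ```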

  13. Architecture for WSN Nodes Integration in Context Aware Systems Using Semantic Messages

    NASA Astrophysics Data System (ADS)

    Larizgoitia, Iker; Muguira, Leire; Vazquez, Juan Ignacio

    Wireless sensor networks (WSN) are becoming extremely popular in the development of context aware systems. Traditionally, WSNs have focused on capturing data, which was later analyzed and interpreted in a server with more computational power. In this kind of scenario the problem of representing the sensor information needs to be addressed. Every node in the network might have different sensors attached; therefore their corresponding packet structures will be different. The server has to be aware of the meaning of every single structure and data in order to be able to interpret them. Multiple sensors, multiple nodes, multiple packet structures (none following a standard format) are neither scalable nor interoperable. Context aware systems have solved this problem with the use of semantic technologies. They provide a common framework to achieve a standard definition of any domain. Nevertheless, these representations are computationally expensive, so a WSN cannot afford them. The work presented in this paper tries to bridge the gap between the sensor information and its semantic representation, by defining a simple architecture that enables the definition of this information natively in a semantic way, achieving the integration of the semantic information in the network packets. This will have several benefits, the most important being the possibility of promoting every WSN node to a real semantic information source.

  14. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, the high-throughput experimental methods for identifying PPIs are both time-consuming and expensive. On the other hand, high-throughput PPI data are often associated with high false-positive and high false-negative rates. To address these problems, we propose a method for PPI detection by integrating biosensor-based PPI data with a novel computational model. This method was developed based on the extreme learning machine algorithm combined with a novel protein sequence descriptor representation. When performed on the large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at the specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The achieved results demonstrate that our approach is very promising for detecting new PPIs, and it can be a helpful supplement for biosensor-based PPI data detection.

  15. [Diagnostic possibilities of digital volume tomography].

    PubMed

    Lemkamp, Michael; Filippi, Andreas; Berndt, Dorothea; Lambrecht, J Thomas

    2006-01-01

    Cone beam computed tomography provides high-quality 3D images of cranio-facial structures. Although detail resolution is increased, x-ray exposure is reduced compared to conventional computed tomography. The volume is analysed in three orthogonal planes, which can be rotated independently without quality loss. Cone beam computed tomography appears to be an alternative to conventional computed tomography that is less expensive and involves lower x-ray exposure.

  16. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    NASA Astrophysics Data System (ADS)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost, token-based e-banking protection scheme for the case in which the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN depends on neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  17. The AAHA Computer Program. American Animal Hospital Association.

    PubMed

    Albers, J W

    1986-07-01

    The American Animal Hospital Association Computer Program should benefit all small animal practitioners. Through the availability of well-researched and well-developed certified software, veterinarians will have increased confidence in their purchase decisions. With the expansion of computer applications to improve practice management efficiency, veterinary computer systems will further justify their initial expense. The development of the Association's veterinary computer network will provide a variety of important services to the profession.

  18. Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    For optimization problems associated with engineering design, parameter estimation, image reconstruction, and other optimization/simulation applications, low accuracy function and gradient values are frequently much less expensive to obtain than high accuracy values. Here, researchers investigate the computational performance of trust region methods for nonlinear optimization when high accuracy evaluations are unavailable or prohibitively expensive, and confirm earlier theoretical predictions that the algorithm remains convergent even with relative gradient errors of 0.5 or more. The proper choice of the amount of accuracy to use in function and gradient evaluations can result in orders-of-magnitude savings in computational cost.

  19. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce computational expense, the algorithm is coupled with generalized response surface meta-models based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
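    The Pareto ranking and the neural-network response-surface coupling are not reproduced here; the sketch below shows only the single-objective DE/rand/1/bin loop (mutation, binomial crossover, greedy selection) that the multiobjective algorithm builds on, with illustrative control parameters and a Rosenbrock test function.

    ```python
    import numpy as np

    def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
        """Minimal DE/rand/1/bin minimizer."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = len(bounds)
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)
        cost = np.array([f(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)   # mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                            # keep at least one gene
                trial = np.where(cross, mutant, pop[i])                    # binomial crossover
                trial_cost = f(trial)
                if trial_cost <= cost[i]:                                  # greedy selection
                    pop[i], cost[i] = trial, trial_cost
        best = np.argmin(cost)
        return pop[best], cost[best]

    # Example: Rosenbrock function in 5 dimensions
    rosenbrock = lambda x: np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)
    x_best, f_best = differential_evolution(rosenbrock, [(-2.0, 2.0)] * 5)
    print(x_best, f_best)
    ```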

  20. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis

    PubMed Central

    2013-01-01

    Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, current PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and further, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions only using the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. Conclusions When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at the precision of 87.59%. Extensive experiments are performed to compare our method with a state-of-the-art technique, the Support Vector Machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. PCA-EELM also runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance in less time. PMID:23815620
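    The protein-sequence encoding is not reproduced here; the sketch below shows the rest of the pipeline (PCA for dimension reduction, several extreme learning machines that differ only in their random hidden-layer weights, and a majority vote over their predictions) on a synthetic binary classification problem standing in for the encoded protein pairs. Feature sizes, hidden-layer width, and ensemble size are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(9)

    class ELM:
        """Single-hidden-layer extreme learning machine: random input weights and biases,
        output weights obtained in closed form by least squares (no iterative training)."""
        def __init__(self, n_hidden=200):
            self.n_hidden = n_hidden
        def fit(self, X, y):
            d = X.shape[1]
            self.W = rng.normal(size=(d, self.n_hidden)) / np.sqrt(d)
            self.b = rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            self.beta = np.linalg.pinv(H) @ np.where(y > 0, 1.0, -1.0)
            return self
        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta > 0).astype(int)

    # Synthetic stand-in for the encoded protein-pair feature vectors
    X, y = make_classification(n_samples=4000, n_features=400, n_informative=40, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Step 1: PCA reduces the descriptor dimension
    pca = PCA(n_components=50, whiten=True).fit(X_train)
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

    # Steps 2-3: several ELMs (different random hidden weights), combined by majority vote
    votes = np.array([ELM().fit(Z_train, y_train).predict(Z_test) for _ in range(9)])
    y_pred = (votes.mean(axis=0) > 0.5).astype(int)
    print("ensemble accuracy:", (y_pred == y_test).mean())
    ```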

  1. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    PubMed

    Xin Yang; Kwang-Ting Cheng

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
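    The learned bit selection (the AdaBoost step) is omitted; the sketch below only shows the underlying ingredients, per-cell mean intensity and mean gradients compared pairwise to form a bit string, and Hamming-distance matching via XOR and popcount. The patch size, grid size, and synthetic patches are illustrative assumptions, not the LLDB reference implementation.

    ```python
    import numpy as np

    def ldb_descriptor(patch, grid=4):
        """Binary descriptor from pairwise comparisons of per-cell mean intensity and
        mean x/y gradients (no learned bit selection here)."""
        patch = patch.astype(float)
        gy, gx = np.gradient(patch)                  # gradients along rows (y) and columns (x)
        h, w = patch.shape[0] // grid, patch.shape[1] // grid
        feats = []
        for i in range(grid):
            for j in range(grid):
                cell = (slice(i * h, (i + 1) * h), slice(j * w, (j + 1) * w))
                feats.append([patch[cell].mean(), gx[cell].mean(), gy[cell].mean()])
        feats = np.array(feats)                      # one row of 3 features per grid cell
        bits = []
        for a in range(len(feats)):
            for b in range(a + 1, len(feats)):       # every cell pair, every feature -> one bit
                bits.extend(feats[a] > feats[b])
        return np.packbits(np.array(bits, dtype=np.uint8))

    def hamming(d1, d2):
        """Hamming distance between packed descriptors: XOR then popcount."""
        return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

    rng = np.random.default_rng(10)
    yy, xx = np.mgrid[:32, :32]
    patch = 128 + 60 * np.sin(xx / 4.0) * np.cos(yy / 6.0) + 3 * rng.normal(size=(32, 32))
    shifted = np.roll(patch, 1, axis=1) + 3 * rng.normal(size=(32, 32))   # near-duplicate view
    other = rng.uniform(0, 255, size=(32, 32))                            # unrelated patch

    d0, d1, d2 = ldb_descriptor(patch), ldb_descriptor(shifted), ldb_descriptor(other)
    print("match distance   :", hamming(d0, d1))   # should be much smaller than the mismatch
    print("mismatch distance:", hamming(d0, d2))
    ```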

  2. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  3. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for hypersonic CFD. This research was supported by the Department of Energy Computational Science Graduate Fellowship and by the Air Force Office of Scientific Research Multidisciplinary University Research Initiative.
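
    For readers unfamiliar with the QCT bookkeeping, a textbook-style Monte Carlo estimate of a thermal rate constant averages a reaction indicator over sampled impact parameters and flux-weighted collision energies. This is a toy sketch under assumed values (the reaction test, reduced mass, and threshold are placeholders), not the authors' parallel trajectory code.

    ```python
    import numpy as np

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def qct_rate_constant(is_reactive, T, mu, b_max, n_traj=100_000, seed=0):
        """k(T) ~ <v>(T) * pi * b_max^2 * P_react, with b^2 uniform on [0, b_max^2]
        and collision energies drawn from the flux-weighted distribution ~ E exp(-E/kT)."""
        rng = np.random.default_rng(seed)
        b = b_max * np.sqrt(rng.random(n_traj))
        e_col = -KB * T * np.log(rng.random(n_traj) * rng.random(n_traj))  # Gamma(2, kT)
        p_react = np.mean([is_reactive(e, bi) for e, bi in zip(e_col, b)])
        v_mean = np.sqrt(8.0 * KB * T / (np.pi * mu))        # mean relative speed
        return v_mean * np.pi * b_max**2 * p_react           # m^3 per molecule pair per s

    # Toy "trajectory": reaction above a 5 eV threshold and only at small impact parameter.
    toy = lambda e, b: (e > 5 * 1.602e-19) and (b < 2e-10)
    print(qct_rate_constant(toy, T=10000.0, mu=2.3e-26, b_max=4e-10))
    ```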

  4. 26 CFR 1.50B-3 - Estates and trusts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Estates and trusts. 1.50B-3 Section 1.50B-3... Computing Credit for Expenses of Work Incentive Programs § 1.50B-3 Estates and trusts. (a) General rule—(1) In general. In the case of an estate or trust, WIN expenses (as defined in paragraph (a) of § 1.50B-1...

  5. Solving Campus Parking Shortages: New Solutions for an Old Problem

    ERIC Educational Resources Information Center

    Millard-Ball, Adam; Siegman, Patrick; Tumlin, Jeffrey

    2004-01-01

    Universities and colleges across the country are faced with growth in the campus population and the loss of surface parking lots for new buildings. The response of many institutions is to build new garages with the assumption that parking demand ratios will remain the same. Such an approach, however, can be extremely expensive--upwards of …

  6. External insulation systems for cryogenic storage systems. Volume 1: Optical properties of Kapton and report of process variable study

    NASA Technical Reports Server (NTRS)

    Frank, A. M.

    1974-01-01

    Investigations were conducted into the optical properties of the glass and Kapton substrate materials, and three process variables were chosen: deposition rate, sputter gas pressure, and film contamination time. Substrate tests showed that fabrication of a dielectric broadband reflector would require an extremely complex and expensive filter design.

  7. Small-diameter success stories III.

    Treesearch

    Jean Livingston

    2008-01-01

    More than 73 million acres of our national forests and millions more in public and private forestlands are in need of some form of restoration. Our forests are declining in health because of major changes over the years in forest structure and composition. However, restoration of these overstocked stands is extremely expensive. If new, economical, and value-added uses...

  8. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  9. Computational Modeling in Concert with Laboratory Studies: Application to B Cell Differentiation

    EPA Science Inventory

    Remediation is expensive, so accurate prediction of dose-response is important to help control costs. Dose response is a function of biological mechanisms. Computational models of these mechanisms improve the efficiency of research and provide the capability for prediction.

  10. A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Visek & Maggs, Urbana, IL.

    This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…

  11. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  12. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed as extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  13. [Cost analysis for navigation in knee endoprosthetics].

    PubMed

    Cerha, O; Kirschner, S; Günther, K-P; Lützner, J

    2009-12-01

    Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates, as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 mins, and a 10-year depreciation of the investment costs, the incremental expenses amount to 300-395 depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view, an annual volume of more than 50 procedures appears favourable. The cost-effectiveness could be estimated if long-term results show a reduction of revisions or a better clinical outcome.
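
    The cost structure described above reduces to simple arithmetic: fixed costs (depreciation plus annual service) are spread over the yearly case volume and added to per-case consumables and extra operating-room time. The sketch below uses made-up numbers, not the study's figures, to show why the per-case increment drops steeply between roughly 50 and 100 procedures per year.

    ```python
    def incremental_cost_per_case(acquisition, depreciation_years, annual_service,
                                  cases_per_year, consumables_per_case,
                                  extra_or_minutes, or_cost_per_minute):
        """Illustrative per-case extra cost of computer-assisted TKA."""
        fixed_per_case = (acquisition / depreciation_years + annual_service) / cases_per_year
        variable_per_case = consumables_per_case + extra_or_minutes * or_cost_per_minute
        return fixed_per_case + variable_per_case

    # Hypothetical inputs: 60,000 system cost, 10-year depreciation, 3,000/year service,
    # 150 consumables per case, 14 extra minutes at 8 per minute.
    for n in (25, 50, 100, 200, 500):
        print(n, round(incremental_cost_per_case(60000, 10, 3000, n, 150, 14, 8), 2))
    ```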

  14. Content Range and Precision of a Computer Adaptive Test of Upper Extremity Function for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.

    2011-01-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…

  15. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey.

    PubMed

    Li, Qiang; Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-10-01

    The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit.

  16. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  17. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  18. 49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...

  19. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: Transparency of coding. Productivity (generally by 20 to 25 percent for inpatient claims). Accuracy (by improving specificity of documentation). Cost containment (by reducing overtime expenses, audit fees, and denials). Compliance. Efficiency. Consistency.

  20. Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations

    NASA Technical Reports Server (NTRS)

    Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.

    2006-01-01

    Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
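
    The ordering step can be illustrated with a small sketch: if the coupling among QSS species is written as a directed graph, a topological sort gives a sequence in which each QSS concentration can be solved explicitly from previously solved ones, and any species left over belong to coupled groups whose algebraic relations are candidates for truncation. The species names and coupling map below are hypothetical, not taken from the reduced mechanisms in the paper.

    ```python
    from collections import defaultdict, deque

    def explicit_solution_order(coupling):
        """coupling[s] = set of other QSS species appearing in the algebraic relation for s.
        Returns (explicitly solvable order, species stuck in coupled groups)."""
        indeg = {s: len(deps) for s, deps in coupling.items()}
        dependents = defaultdict(set)
        for s, deps in coupling.items():
            for d in deps:
                dependents[d].add(s)
        ready = deque(s for s, k in indeg.items() if k == 0)
        order = []
        while ready:
            s = ready.popleft()
            order.append(s)
            for t in dependents[s]:
                indeg[t] -= 1
                if indeg[t] == 0:
                    ready.append(t)
        cyclic = [s for s in coupling if s not in order]
        return order, cyclic

    # Toy example: a chain that can be solved explicitly plus one mutually coupled pair.
    coupling = {"CH3": set(), "HCO": {"CH3"}, "CH2OH": {"HCO"}, "C2H3": {"C2H5"}, "C2H5": {"C2H3"}}
    print(explicit_solution_order(coupling))
    ```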

  1. An Automated System for Generating Situation-Specific Decision Support in Clinical Order Entry from Local Empirical Data

    ERIC Educational Resources Information Center

    Klann, Jeffrey G.

    2011-01-01

    Clinical Decision Support is one of the only aspects of health information technology that has demonstrated decreased costs and increased quality in healthcare delivery, yet it is extremely expensive and time-consuming to create, maintain, and localize. Consequently, a majority of health care systems do not utilize it, and even when it is…

  2. Investigation of “benign” ionic content in epoxy that induces microelectronic device failure

    Treesearch

    Gregory T. Schueneman; Jeffery Kingsbury; Edmund Klinkerch

    2011-01-01

    Microelectronics and the devices dependent upon them have the extremely challenging requirements of becoming more capable and less expensive every year. This drives the industry to pack more functions into an ever smaller footprint until the next technological revolution. Adding to this situation is the removal of lead from the bill of materials followed closely by...

  3. Students as Library Detectives and Books as Clues: An Application of ISD in K-12 Schools

    ERIC Educational Resources Information Center

    Moore, Kimberly J.; Knowlton, Dave S.

    2006-01-01

    Instructional Systems Design (ISD) is the systematic development of instructional strategies that are grounded in both learning and instructional theory. ISD is usually overlooked as a useful methodology in K-12 schools. ISD is extremely expensive in terms of the time required to develop instruction, and it often seems more appropriate as an…

  4. Extreme-Scale Computing Project Aims to Advance Precision Oncology | FNLCR Staging

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  5. Accuracy of a disability instrument to identify workers likely to develop upper extremity musculoskeletal disorders.

    PubMed

    Stover, Bert; Silverstein, Barbara; Wickizer, Thomas; Martin, Diane P; Kaufman, Joel

    2007-06-01

    Work related upper extremity musculoskeletal disorders (MSD) result in substantial disability, and expense. Identifying workers or jobs with high risk can trigger intervention before workers are injured or the condition worsens. We investigated a disability instrument, the QuickDASH, as a workplace screening tool to identify workers at high risk of developing upper extremity MSDs. Subjects included workers reporting recurring upper extremity MSD symptoms in the past 7 days (n = 559). The QuickDASH was reasonably accurate at baseline with sensitivity of 73% for MSD diagnosis, and 96% for symptom severity. Specificity was 56% for diagnosis, and 53% for symptom severity. At 1-year follow-up sensitivity and specificity for MSD diagnosis was 72% and 54%, respectively, as predicted by the baseline QuickDASH score. For symptom severity, sensitivity and specificity were 86% and 52%. An a priori target sensitivity of 70% and specificity of 50% was met by symptom severity, work pace and quality, and MSD diagnosis. The QuickDASH may be useful for identifying jobs or workers with increased risk for upper extremity MSDs. It may provide an efficient health surveillance screening tool useful for targeting early workplace intervention for prevention of upper extremity MSD problems.

  6. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  7. 48 CFR 227.7103-6 - Contract clauses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expense). Do not use the clause when the only deliverable items are computer software or computer software... architect-engineer and construction contracts. (b)(1) Use the clause at 252.227-7013 with its Alternate I in... Software Previously Delivered to the Government, in solicitations when the resulting contract will require...

  8. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  9. Looking At Display Technologies

    ERIC Educational Resources Information Center

    Bull, Glen; Bull, Gina

    2005-01-01

    A projection system in a classroom with an Internet connection provides a window on the world. Until recently, projectors were expensive and difficult to maintain. Technological advances have resulted in solid-state projectors that require little maintenance and cost no more than a computer. Adding a second or third computer to a classroom…

  10. Predicting protein amidation sites by orchestrating amino acid sequence features

    NASA Astrophysics Data System (ADS)

    Zhao, Shuqiu; Yu, Hua; Gong, Xiujun

    2017-08-01

    Amidation is the fourth major category of post-translational modifications, which plays an important role in physiological and pathological processes. Identifying amidation sites can help us understand amidation and recognize the underlying causes of many kinds of diseases. However, traditional experimental methods for identifying amidation sites are often time-consuming and expensive. In this study, we propose a computational method for predicting amidation sites by orchestrating amino acid sequence features. Three kinds of feature extraction methods are used to build a feature vector able to capture not only the physicochemical properties but also position-related information of the amino acids. An extremely randomized trees algorithm is applied to choose the optimal features, removing redundancy and dependence among components of the feature vector in a supervised fashion. Finally, a support vector machine classifier is used to label the amidation sites. When tested on an independent data set, the proposed method performs better than all previous ones, with a prediction accuracy of 0.962, a Matthews correlation coefficient of 0.89, and an area under the curve of 0.964.
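
    A minimal sketch of the pipeline just described, using scikit-learn components as stand-ins for the authors' implementation: extremely randomized trees perform supervised feature selection and an SVM labels the candidate sites. The synthetic feature matrix and all parameter values are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Hypothetical data: rows are peptide windows encoded as numeric sequence features
    # (physicochemical properties plus position-related scores); labels mark amidation sites.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 200))
    y = rng.integers(0, 2, size=500)

    model = Pipeline([
        # supervised feature selection with extremely randomized trees
        ("select", SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0))),
        # final classification of candidate sites with a support vector machine
        ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
    ])
    print(cross_val_score(model, X, y, cv=5).mean())
    ```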

  11. Study of hypervelocity projectile impact on thick metal plates

    DOE PAGES

    Roy, Shawoon K.; Trabia, Mohamed; O’Toole, Brendan; ...

    2016-01-01

    Hypervelocity impacts generate extreme pressure and shock waves in impacted targets that undergo severe localized deformation within a few microseconds. These impact experiments pose unique challenges in terms of obtaining accurate measurements. Similarly, simulating these experiments is not straightforward. This paper proposed an approach to experimentally measure the velocity of the back surface of an A36 steel plate impacted by a projectile. All experiments used a combination of a two-stage light-gas gun and the photonic Doppler velocimetry (PDV) technique. The experimental data were used to benchmark and verify computational studies. Two different finite-element methods were used to simulate the experiments: Lagrangian-based smooth particle hydrodynamics (SPH) and Eulerian-based hydrocode. Both codes used the Johnson-Cook material model and the Mie-Grüneisen equation of state. Experiments and simulations were compared based on the physical damage area and the back surface velocity. Finally, the results of this study showed that the proposed simulation approaches could be used to reduce the need for expensive experiments.

  12. The advance of non-invasive detection methods in osteoarthritis

    NASA Astrophysics Data System (ADS)

    Dai, Jiao; Chen, Yanping

    2011-06-01

    Osteoarthritis (OA) is one of the most prevalent chronic diseases and severely affects patients' quality of life and finances. Detection and evaluation technology can provide basic information for early treatment. A variety of imaging methods for OA were reviewed, such as conventional X-ray, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI) and near-infrared spectroscopy (NIRS). Among the existing imaging modalities, the spatial resolution of X-ray is extremely high; CT is a three-dimensional method with high density resolution; US, as an evaluation method for knee OA, sensitively discriminates between normal and degenerative cartilage; MRI is a sensitive and nonionizing method suitable for the detection of early OA, but it is too expensive for routine use; NIRS is a safe, low-cost modality that is also good at detecting early-stage OA. In short, each method has its own advantages, but NIRS offers the broadest application prospects and is likely to be used in daily clinical routine and to become the gold standard for diagnostic detection.

  13. A virtual climate library of surface temperature over North America for 1979-2015

    NASA Astrophysics Data System (ADS)

    Kravtsov, Sergey; Roebber, Paul; Brazauskas, Vytaras

    2017-10-01

    The most comprehensive continuous-coverage modern climatic data sets, known as reanalyses, come from combining state-of-the-art numerical weather prediction (NWP) models with diverse available observations. These reanalysis products estimate the path of climate evolution that actually happened, and their use in a probabilistic context—for example, to document trends in extreme events in response to climate change—is, therefore, limited. Free runs of NWP models without data assimilation can in principle be used for the latter purpose, but such simulations are computationally expensive and are prone to systematic biases. Here we produce a high-resolution, 100-member ensemble simulation of surface atmospheric temperature over North America for the 1979-2015 period using a comprehensive spatially extended non-stationary statistical model derived from the data based on the North American Regional Reanalysis. The surrogate climate realizations generated by this model are independent from, yet nearly statistically congruent with reality. This data set provides unique opportunities for the analysis of weather-related risk, with applications in agriculture, energy development, and protection of human life.

  14. A virtual climate library of surface temperature over North America for 1979–2015

    PubMed Central

    Kravtsov, Sergey; Roebber, Paul; Brazauskas, Vytaras

    2017-01-01

    The most comprehensive continuous-coverage modern climatic data sets, known as reanalyses, come from combining state-of-the-art numerical weather prediction (NWP) models with diverse available observations. These reanalysis products estimate the path of climate evolution that actually happened, and their use in a probabilistic context—for example, to document trends in extreme events in response to climate change—is, therefore, limited. Free runs of NWP models without data assimilation can in principle be used for the latter purpose, but such simulations are computationally expensive and are prone to systematic biases. Here we produce a high-resolution, 100-member ensemble simulation of surface atmospheric temperature over North America for the 1979–2015 period using a comprehensive spatially extended non-stationary statistical model derived from the data based on the North American Regional Reanalysis. The surrogate climate realizations generated by this model are independent from, yet nearly statistically congruent with reality. This data set provides unique opportunities for the analysis of weather-related risk, with applications in agriculture, energy development, and protection of human life. PMID:29039842

  15. A virtual climate library of surface temperature over North America for 1979-2015.

    PubMed

    Kravtsov, Sergey; Roebber, Paul; Brazauskas, Vytaras

    2017-10-17

    The most comprehensive continuous-coverage modern climatic data sets, known as reanalyses, come from combining state-of-the-art numerical weather prediction (NWP) models with diverse available observations. These reanalysis products estimate the path of climate evolution that actually happened, and their use in a probabilistic context-for example, to document trends in extreme events in response to climate change-is, therefore, limited. Free runs of NWP models without data assimilation can in principle be used for the latter purpose, but such simulations are computationally expensive and are prone to systematic biases. Here we produce a high-resolution, 100-member ensemble simulation of surface atmospheric temperature over North America for the 1979-2015 period using a comprehensive spatially extended non-stationary statistical model derived from the data based on the North American Regional Reanalysis. The surrogate climate realizations generated by this model are independent from, yet nearly statistically congruent with reality. This data set provides unique opportunities for the analysis of weather-related risk, with applications in agriculture, energy development, and protection of human life.

  16. Verification and Improvement of Flamelet Approach for Non-Premixed Flames

    NASA Technical Reports Server (NTRS)

    Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.

    1997-01-01

    Studies in the mathematical modeling of high-speed turbulent combustion have received renewed attention in recent years. A review of the fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libby and Williams. In order to obtain accurate predictions for turbulent combusting flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. The averaging of chemical source terms requires a probability density function (PDF) model. Two main approaches are dominant in high-speed combustion modeling. In the first approach, a PDF form is assumed based on the intuition of the modelers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second way is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for modeling of compressible flames in studies of Farschi; Hsu; Hsu, Raji, Norris; Eifer, Kollman. However, its realization in CFD is computationally extremely expensive due to the high dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).

  17. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
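
    The iterative proxy-based scheme can be sketched generically: refit an RBF surrogate to all expensive evaluations so far, score a cloud of random candidates on the surrogate, and spend the next expensive simulation on the most promising candidate. This is an illustrative loop under assumed settings, not the paper's adaptive method; it minimizes, so maximizing NPV would flip the sign of the objective.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def rbf_surrogate_minimize(expensive_f, bounds, n_init=10, n_iter=20, n_cand=2000, seed=0):
        """Fit an RBF proxy to evaluated points and pick the next expensive evaluation from it."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = lo + (hi - lo) * rng.random((n_init, len(lo)))
        y = np.array([expensive_f(x) for x in X])
        for _ in range(n_iter):
            surrogate = RBFInterpolator(X, y, kernel="cubic", smoothing=1e-8)
            cand = lo + (hi - lo) * rng.random((n_cand, len(lo)))
            x_next = cand[np.argmin(surrogate(cand))]       # cheap surrogate search
            X = np.vstack([X, x_next])
            y = np.append(y, expensive_f(x_next))           # one expensive simulation per iteration
        return X[np.argmin(y)], y.min()

    # Toy stand-in for the expensive reservoir simulator.
    print(rbf_surrogate_minimize(lambda x: float(np.sum((x - 0.3) ** 2)), bounds=[(0, 1)] * 5))
    ```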

  18. Use of less expensive cigarettes in six cities in China: findings from the International Tobacco Control (ITC) China Survey

    PubMed Central

    Hyland, Andrew; Fong, Geoffrey T; Jiang, Yuan; Elton-Marshall, Tara

    2010-01-01

    Objective The existence of less expensive cigarettes in China may undermine public health. The aim of the current study is to examine the use of less expensive cigarettes in six cities in China. Methods Data was from the baseline wave of the International Tobacco Control (ITC) China Survey of 4815 adult urban smokers in 6 cities, conducted between April and August 2006. The percentage of smokers who reported buying less expensive cigarettes (the lowest pricing tertile within each city) at last purchase was computed. Complex sample multivariate logistic regression models were used to identify factors associated with use of less expensive cigarettes. The association between the use of less expensive cigarettes and intention to quit smoking was also examined. Results Smokers who reported buying less expensive cigarettes at last purchase tended to be older, heavier smokers, to have lower education and income, and to think more about the money spent on smoking in the last month. Smokers who bought less expensive cigarettes at the last purchase and who were less knowledgeable about the health harm of smoking were less likely to intend to quit smoking. Conclusions Measures need to be taken to minimise the price differential among cigarette brands and to increase smokers' health knowledge, which may in turn increase their intentions to quit. PMID:20935199

  19. How morphometric characteristics affect flow accumulation values

    NASA Astrophysics Data System (ADS)

    Farek, Vladimir

    2014-05-01

    Remote sensing methods (such as aerial LIDAR surveys and land-use mapping) are becoming continually more available and accurate, while in-situ surveying is still expensive. Remote sensing methods can therefore be extremely useful, above all in small, anthropogenically uninfluenced catchments with a poor or non-existent surveying network. Overland flow accumulation (FA) values are important indicators of exposure to flash floods or soil erosion. The FA value of a point is the number of cells of the Digital Elevation Model (DEM) grid that drain to that point of the catchment. This contribution deals with relations between basic geomorphological and morphometric characteristics (such as the hypsometric integral or the Melton index of a subcatchment) and FA values. These relations are studied in the rocky sandstone landscapes of the National Park Ceské Svycarsko, with its particular occurrence of broken relief. All calculations are based on the high-resolution LIDAR DEM named Genesis created by TU Dresden; the main computational platform is GRASS GIS. The goal of the conference paper is to present a quick method, or indicators, for identifying small subcatchments threatened by elevated flash flood or soil erosion risk, without the need for sophisticated rainfall-runoff models. Catchments can easily be split into small subcatchments (or an existing subdivision used), basic characteristics computed, and, with knowledge of the links between these characteristics and FA values, the subcatchments potentially threatened by flash floods or soil erosion identified.
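
    For readers unfamiliar with FA, a minimal D8 sketch shows how the value is derived from a DEM: each cell drains to its steepest-descent neighbour, and the accumulation value counts how many cells are routed through each location. This toy is for illustration only and is not the GRASS GIS module used in the study.

    ```python
    import numpy as np

    def d8_flow_accumulation(dem):
        """D8 single-flow-direction accumulation: process cells from highest to lowest,
        passing each cell's accumulated count to its steepest downslope neighbour."""
        rows, cols = dem.shape
        fa = np.ones(dem.shape, dtype=np.int64)     # every cell contributes itself
        order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))[0]
        for r, c in order:
            best, target = 0.0, None
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best:
                            best, target = drop, (rr, cc)
            if target is not None:                  # pass accumulated area downslope
                fa[target] += fa[r, c]
        return fa

    dem = np.array([[5.0, 4.0, 3.0],
                    [4.0, 3.0, 2.0],
                    [3.0, 2.0, 1.0]])
    print(d8_flow_accumulation(dem))                # outlet cell collects all 9 cells
    ```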

  20. Efficient calculation of many-body induced electrostatics in molecular systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaughlin, Keith, E-mail: kmclaugh@mail.usf.edu; Cioce, Christian R.; Pham, Tony

    Potential energy functions including many-body polarization are in widespread use in simulations of aqueous and biological systems, metal-organics, molecular clusters, and other systems where electronically induced redistribution of charge among local atomic sites is of importance. The polarization interactions, treated here via the methods of Thole and Applequist, while long-ranged, can be computed for moderate-sized periodic systems with extremely high accuracy by extending Ewald summation to the induced fields as demonstrated by Nymand, Sala, and others. These full Ewald polarization calculations, however, are expensive and often limited to very small systems, particularly in Monte Carlo simulations, which may require energy evaluation over several hundred-thousand configurations. For such situations, it shall be shown that sufficiently accurate computation of the polarization energy can be produced in a fraction of the central processing unit (CPU) time by neglecting the long-range extension to the induced fields while applying the long-range treatments of Ewald or Wolf to the static fields; these methods, denoted Ewald E-Static and Wolf E-Static (WES), respectively, provide an effective means to obtain polarization energies for intermediate and large systems including those with several thousand polarizable sites in a fraction of the CPU time. Furthermore, we shall demonstrate a means to optimize the damping for WES calculations via extrapolation from smaller trial systems.

  1. The Advantages of Using Technology in Second Language Education: Technology Integration in Foreign Language Teaching Demonstrates the Shift from a Behavioral to a Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Wang, Li

    2005-01-01

    With the advent of networked computers and Internet technology, computer-based instruction has been widely used in language classrooms throughout the United States. Computer technologies have dramatically changed the way people gather information, conduct research and communicate with others worldwide. Considering the tremendous startup expenses,…

  2. Distributed Kalman filtering compared to Fourier domain preconditioned conjugate gradient for laser guide star tomography on extremely large telescopes.

    PubMed

    Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent

    2013-05-01

    This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt.45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.

  3. On Shaft Data Acquisition System (OSDAS)

    NASA Technical Reports Server (NTRS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test article itself, packaged in a compact/rugged form factor, consuming limited power, all while rotating at high turbine speeds.

  4. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Frederick National Laboratory for Cancer Research

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  5. Extreme-Scale Computing Project Aims to Advance Precision Oncology | Poster

    Cancer.gov

    Two government agencies and five national laboratories are collaborating to develop extremely high-performance computing capabilities that will analyze mountains of research and clinical data to improve scientific understanding of cancer, predict drug response, and improve treatments for patients.

  6. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  7. Hip Implant Modified To Increase Probability Of Retention

    NASA Technical Reports Server (NTRS)

    Canabal, Francisco, III

    1995-01-01

    Modification in design of hip implant proposed to increase likelihood of retention of implant in femur after hip-repair surgery. Decreases likelihood of patient distress and expense associated with repetition of surgery after failed implant procedure. Intended to provide more favorable flow of cement used to bind implant in proximal extreme end of femur, reducing structural flaws causing early failure of implant/femur joint.

  8. Final Report Extreme Computing and U.S. Competitiveness DOE Award. DE-FG02-11ER26087/DE-SC0008764

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mustain, Christopher J.

    The Council has acted on each of the grant deliverables during the funding period. The deliverables are: (1) convening the Council’s High Performance Computing Advisory Committee (HPCAC) on a bi-annual basis; (2) broadening public awareness of high performance computing (HPC) and exascale developments; (3) assessing the industrial applications of extreme computing; and (4) establishing a policy and business case for an exascale economy.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce time required to find a good near optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
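
    A compact sketch of one SOP-style iteration (tabu bookkeeping omitted, and a serial loop standing in for the P processors): evaluated points are ranked by non-dominated sorting on (objective value, minimum distance to other evaluated points), the best P become centers, each center spawns randomly perturbed candidates, and an RBF surrogate picks one candidate per center for expensive evaluation. Parameter values and helper names are placeholders, not the authors' code.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def nondominated_rank(f, d):
        """Rank points on two objectives: minimize f, maximize distance d (fewer dominators first)."""
        n = len(f)
        dominated_by = [sum((f[j] <= f[i]) and (d[j] >= d[i]) and ((f[j] < f[i]) or (d[j] > d[i]))
                            for j in range(n)) for i in range(n)]
        return np.argsort(dominated_by)

    def sop_like_step(X, y, expensive_f, bounds, P=4, n_cand=200, radius=0.1, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        min_dist = np.array([np.sort(np.linalg.norm(X - x, axis=1))[1] for x in X])
        centers = X[nondominated_rank(y, min_dist)[:P]]
        surrogate = RBFInterpolator(X, y, kernel="cubic", smoothing=1e-8)
        new_pts = []
        for c in centers:
            cand = np.clip(c + radius * (hi - lo) * rng.normal(size=(n_cand, len(lo))), lo, hi)
            new_pts.append(cand[np.argmin(surrogate(cand))])    # best candidate per center
        new_pts = np.array(new_pts)
        new_vals = np.array([expensive_f(x) for x in new_pts])  # done on P processors in the paper
        return np.vstack([X, new_pts]), np.append(y, new_vals)

    # Toy usage on a cheap stand-in objective.
    rng = np.random.default_rng(1)
    X0 = rng.random((12, 3))
    y0 = np.array([float(np.sum((x - 0.4) ** 2)) for x in X0])
    X1, y1 = sop_like_step(X0, y0, lambda x: float(np.sum((x - 0.4) ** 2)), bounds=[(0, 1)] * 3)
    print(y1.min())
    ```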

  10. Gravitational Waves From the Kerr/CFT Correspondence

    NASA Astrophysics Data System (ADS)

    Porfyriadis, Achilleas

    Astronomical observation suggests the existence of near-extreme Kerr black holes in the sky. Properties of diffeomorphisms imply that dynamics of the near-horizon region of near-extreme Kerr are governed by an infinite-dimensional conformal symmetry. This symmetry may be exploited to analytically, rather than numerically, compute a variety of potentially observable processes. In this thesis we compute the gravitational radiation emitted by a small compact object that orbits in the near-horizon region and plunges into the horizon of a large rapidly rotating black hole. We study the holographically dual processes in the context of the Kerr/CFT correspondence and find our conformal field theory (CFT) computations in perfect agreement with the gravity results. We compute the radiation emitted by a particle on the innermost stable circular orbit (ISCO) of a rapidly spinning black hole. We confirm previous estimates of the overall scaling of the power radiated, but show that there are also small oscillations all the way to extremality. Furthermore, we reveal an intricate mode-by-mode structure in the flux to infinity, with only certain modes having the dominant scaling. The scaling of each mode is controlled by its conformal weight. Massive objects in adiabatic quasi-circular inspiral towards a near-extreme Kerr black hole quickly plunge into the horizon after passing the ISCO. The post-ISCO plunge trajectory is shown to be related by a conformal map to a circular orbit. Conformal symmetry of the near-horizon region is then used to compute analytically the gravitational radiation produced during the plunge phase. Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. We use conformal transformations to analytically solve for the radiation emitted from various fast plunges into extreme and near-extreme Kerr black holes.

  11. The Chemical Engineer's Toolbox: A Glass Box Approach to Numerical Problem Solving

    ERIC Educational Resources Information Center

    Coronell, Daniel G.; Hariri, M. Hossein

    2009-01-01

    Computer programming in undergraduate engineering education all too often begins and ends with the freshman programming course. Improvements in computer technology and curriculum revision have improved this situation, but often at the expense of the students' learning due to the use of commercial "black box" software. This paper describes the…

  12. Superintendents' Perceptions of 1:1 Initiative Implementation and Sustainability

    ERIC Educational Resources Information Center

    Cole, Bobby Virgil, Jr.; Sauers, Nicholas J.

    2018-01-01

    One of the fastest growing, most discussed, and most expensive technology initiatives over the last decade has been one-to-one (1:1) computing initiatives. The purpose of this study was to examine key factors that influenced implementing and sustaining 1:1 computing initiatives from the perspective of school superintendents. Nine superintendents…

  13. Data Bases at a State Institution--Costs, Uses and Needs. AIR Forum Paper 1978.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.

    The cost-benefit of administrative data at a state college is placed in perspective relative to the institutional involvement in computer use. The costs of computer operations, personnel, and peripheral equipment expenses related to instruction are analyzed. Data bases and systems support institutional activities, such as registration, and aid…

  14. Film Library Information Management System.

    ERIC Educational Resources Information Center

    Minnella, C. Vincent; And Others

    The computer program described not only allows the user to determine rental sources for a particular film title quickly, but also to select the least expensive of the sources. This program developed at SUNY Cortland's Sperry Learning Resources Center and Computer Center is designed to maintain accurate data on rental and purchase films in both…

  15. Computational discovery of extremal microstructure families

    PubMed Central

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  16. Computer work and musculoskeletal disorders of the neck and upper extremity: A systematic review

    PubMed Central

    2010-01-01

    Background This review examines the evidence for an association between computer work and neck and upper extremity disorders (except carpal tunnel syndrome). Methods A systematic critical review of studies of computer work and musculoskeletal disorders verified by a physical examination was performed. Results A total of 22 studies (26 articles) fulfilled the inclusion criteria. Results show limited evidence for a causal relationship between computer work per se, computer mouse and keyboard time related to a diagnosis of wrist tendonitis, and for an association between computer mouse time and forearm disorders. Limited evidence was also found for a causal relationship between computer work per se and computer mouse time related to tension neck syndrome, but the evidence for keyboard time was insufficient. Insufficient evidence was found for an association between other musculoskeletal diagnoses of the neck and upper extremities, including shoulder tendonitis and epicondylitis, and any aspect of computer work. Conclusions There is limited epidemiological evidence for an association between aspects of computer work and some of the clinical diagnoses studied. None of the evidence was considered as moderate or strong and there is a need for more and better documentation. PMID:20429925

  17. Energy and Technology Review

    NASA Astrophysics Data System (ADS)

    1994-07-01

    Satellites have shrunk the world to the size of the proverbial global village. They track weather and the traffic patterns of ships and aircraft, and monitor our environment. Defense satellites provide high-resolution images of objects on the ground to protect our troops and allies. Telecommunication satellites have interwoven business sectors, corporations, and markets into global networks. Nevertheless, orbiting satellites may not be the best choice for all applications requiring a high vantage point. Satellites and their payloads are expensive, and launching them by rocket is expensive and risky. They must operate in the extreme conditions of space, bombarded by radiation and with no airflow to cool their electronics. Only valuable, long-term missions would seem to justify the expense and risk of a satellite. Even then, satellites are not always the best choice. They typically cannot hop to a new orbit, and some uses, such as local and global communications, require a large number of satellites to ensure adequate coverage. There is clearly a large potential role for high-altitude, atmospheric vehicles that can stay aloft for very long periods (weeks or months) and can roam virtually anywhere.

  18. A unified RANS–LES model: Computational development, accuracy and cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopalan, Harish, E-mail: hgopalan@uwyo.edu; Heinz, Stefan, E-mail: heinz@uwyo.edu; Stöllinger, Michael K., E-mail: MStoell@uwyo.edu

    2013-09-15

    Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS-LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.

  19. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the Comoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, which reduces code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time-stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
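    The core of any PM N-body step is the leapfrog kick-drift update; a minimal sketch of that structure is given below in Python/NumPy for orientation only. The FASTPM-specific modified kick and drift factors that enforce 1LPT growth on large scales are not reproduced here, and the function and variable names are illustrative assumptions rather than the FASTPM API.

      import numpy as np

      # Generic kick-drift-kick (leapfrog) structure of a particle-mesh step.
      # FASTPM replaces the plain dt factors below with modified kick/drift
      # factors chosen so that large-scale growth follows the 1LPT solution.

      def kick(vel, acc, dt_kick):
          # Update velocities using the current mesh-interpolated accelerations.
          return vel + acc * dt_kick

      def drift(pos, vel, dt_drift, box_size):
          # Update positions with the kicked velocities; wrap into the periodic box.
          return (pos + vel * dt_drift) % box_size

      # One step: half kick, full drift, recompute mesh forces, half kick, e.g.
      #   vel = kick(vel, acc, 0.5 * dt)
      #   pos = drift(pos, vel, dt, box_size)
      #   acc = compute_pm_forces(pos)   # hypothetical mesh force routine
      #   vel = kick(vel, acc, 0.5 * dt)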

  20. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.
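    For orientation, the generic many-body expansion of an energy over fragments (or orbital groups) has the form sketched below in LaTeX; the CEEMBE-specific choice of virtual-orbital subsets and the extrapolated correction are not reproduced here, so this is only the textbook form of the expansion.

      E \;\approx\; \sum_i \varepsilon_i \;+\; \sum_{i<j} \Delta\varepsilon_{ij} \;+\; \sum_{i<j<k} \Delta\varepsilon_{ijk} \;+\; \cdots,
      \qquad
      \Delta\varepsilon_{ij} \;=\; \varepsilon_{ij} - \varepsilon_i - \varepsilon_j .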

  1. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.

  2. A regressive storm model for extreme space weather

    NASA Astrophysics Data System (ADS)

    Terkildsen, Michael; Steward, Graham; Neudegg, Dave; Marshall, Richard

    2012-07-01

    Extreme space weather events, while rare, pose significant risk to society in the form of impacts on critical infrastructure such as power grids, and the disruption of high-end technological systems such as satellites and precision navigation and timing systems. There has been an increased focus on modelling the effects of extreme space weather, as well as improving the ability of space weather forecast centres to identify, with sufficient lead time, solar activity with the potential to produce extreme events. This paper describes the development of a data-based model for predicting the occurrence of extreme space weather events from solar observation. The motivation for this work was to develop a tool to assist space weather forecasters in early identification of solar activity conditions with the potential to produce extreme space weather, with sufficient lead time to notify relevant customer groups. Data-based modelling techniques were used to construct the model, and an extensive archive of solar observation data was used to train, optimise and test the model. The optimisation of the base model aimed to eliminate false negatives (missed events) at the expense of a tolerable increase in false positives, under the assumption of an iterative improvement in forecast accuracy during progression of the solar disturbance, as subsequent data become available.

  3. Feasibility of Prostate Cancer Diagnosis by Transrectal Photoacoustic Imaging

    DTIC Science & Technology

    2014-05-01

    cancer imaging [1]. Currently, most PA imaging systems adopt a nanosecond pulsed laser with high pulse energy because a short light pulse can...health, including prostate cancer detection [3]. A nanosecond pulsed laser with high pulse energy is usually extremely expensive (from tens to...scattering coefficients of 0.04 cm^-1 and 8.4 cm^-1, respectively, measured with an ISS Oximeter). An optically and acoustically transparent tube was filled

  4. Development of Osseointegrated Implants for Soldier Amputees Following Orthopaedic Extremity Trauma

    DTIC Science & Technology

    2008-08-01

    specimens, histology and mechanical testing of implants. The second focus of Year 2 was human morphometric studies on variations due to ethnicity, gender...custom implants in above-knee patients with amputations would require expensive custom type implants, a morphometric study was conducted on human...male and female cadaveric femurs. Morphometric variations of the periosteal surface of long bones have been identified with changing age, gender and

  5. Ultraviolet Communication for Medical Applications

    DTIC Science & Technology

    2012-06-01

    battlefield casualty care. UVC Plasma-shells were fabricated and tested as optical emitter components in the solar blind 200-280 nm UVC region, and were... solar-blind (SB) UVC region (200–280 nm). IST's proprietary UVC-emitting Plasma-shells are successfully demonstrated in a breadboard system. At this...enclosure and removable filter. Single-crystal solar blind filters provide exceptional rejection but are extremely expensive, ruling out the Ofil filters SB

  6. 45 CFR 2507.5 - How does the Corporation process requests for records?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... compelled to create new records or do statistical computations. For example, the Corporation is not required... feasible way to respond to a request. The Corporation is not required to perform any research for the... duplicating all of them. For example, if it requires less time and expense to provide a computer record as a...

  7. 26 CFR 1.179-5 - Time and manner of making election.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...

  8. Innovative Leaders Take the Phone and Run: Profiles of Four Trailblazing Programs

    ERIC Educational Resources Information Center

    Norris, Cathleen; Soloway, Elliot; Menchhofer, Kyle; Bauman, Billie Diane; Dickerson, Mindy; Schad, Lenny; Tomko, Sue

    2010-01-01

    While the Internet changed everything, mobile will change everything squared. The Internet is just a roadway, and computers--the equivalent of cars for the Internet--have been expensive. The keepers of the information roadway--the telecommunication companies--will give one a "computer," such as cell phone, mobile learning device, or MLD,…

  9. 75 FR 25161 - Defense Federal Acquisition Regulation Supplement; Presumption of Development at Private Expense

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ... asserted restrictions on technical data and computer software. DATES: Comments on the proposed rule should... restrictions on technical data and computer software. More specifically, the proposed rule affects these...) items (as defined at 41 U.S.C. 431(c)). Since COTS items are a subtype of commercial items, this change...

  10. 17 CFR 240.17a-3 - Records to be made by certain exchange members, brokers and dealers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... records) reflecting all assets and liabilities, income and expense and capital accounts. (3) Ledger..., and a record of the computation of aggregate indebtedness and net capital, as of the trial balance...) thereof shall make a record of the computation of aggregate indebtedness and net capital as of the trial...

  11. Application of Sequence Comparison Methods to Multisensor Data Fusion and Target Recognition

    DTIC Science & Technology

    1993-06-18

    linear comparison). A particularly attractive aspect of the proposed fusion scheme is that it has the potential to work for any object with (1...radar sensing is a historical custom - however, the reader should keep in mind that the fundamental issue in this research is to explore and exploit...reduce the computationally expensive need to compute partial derivatives. In usual practice, the computationally more attractive filter design is

  12. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  13. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  14. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  15. 7 CFR 3560.102 - Housing project management.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., unless the machine becomes the property of the project after purchase. (iii) Determining if Expenses are... computer learning center activities benefiting tenants are not covered in this prohibition. (viii) It is...

  16. Designing instrumented walker to measure upper-extremity's efforts: A case study.

    PubMed

    Khodadadi, Mohammad; Baniasad, Mina Arab; Arazpour, Mokhtar; Farahmand, Farzam; Zohoor, Hassan

    2018-02-26

    Shoulder pain is highly prevalent among patients with spinal cord injury (SCI) who use walkers, and the options available to economically measure grip forces in walkers are limited, which drove the need to create one. This article describes a method to obtain upper-extremity forces and moments in a person with SCI by designing an appropriate instrumented walker. First, since commercial multidirectional loadcells are too expensive, custom loadcells were fabricated. Ultimately, a complete gait analysis using a VICON motion analysis system and the inverse dynamics method was performed to measure upper-extremity efforts. The results for a person with SCI using a two-wheel walker at low and high heights and a basic walker show higher shoulder and elbow flexion-extension moments, as well as higher shoulder forces in the superior-inferior direction and higher elbow and wrist forces in the anterior-posterior direction. The results do not differ much between the two types of walker. Using the proposed method, upper-extremity forces and moments were obtained and compared between the two walkers.

  17. Multiscale climate emulator of multimodal wave spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, Ana; Hegermiller, Christie A.; Antolinez, Jose A. A.; Camus, Paula; Vitousek, Sean; Ruggiero, Peter; Barnard, Patrick L.; Erikson, Li H.; Tomás, Antonio; Mendez, Fernando J.

    2017-02-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this problem complex, yet tractable using computationally expensive numerical models. So far, the skill of statistical-downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.
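    As a small illustration of one ingredient of such a framework, the snippet below (Python, SciPy) fits a Generalized Extreme Value distribution to a synthetic sample of significant wave heights for a single weather type. The data, variable names, and quantile choice are assumptions made for illustration and do not come from the MUSCLE-spectra implementation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic daily significant wave heights (m) for one weather type.
      hs_sample = stats.genextreme.rvs(c=-0.1, loc=2.0, scale=0.5, size=500, random_state=rng)

      # Fit GEV parameters (shape, location, scale) by maximum likelihood.
      shape, loc, scale = stats.genextreme.fit(hs_sample)

      # Example: an extreme upper quantile of the fitted distribution.
      hs_extreme = stats.genextreme.ppf(0.999, shape, loc, scale)
      print(f"GEV fit: shape={shape:.3f}, loc={loc:.2f} m, scale={scale:.2f} m; 99.9% quantile={hs_extreme:.2f} m")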

  18. Multiscale Climate Emulator of Multimodal Wave Spectra: MUSCLE-spectra

    NASA Astrophysics Data System (ADS)

    Rueda, A.; Hegermiller, C.; Alvarez Antolinez, J. A.; Camus, P.; Vitousek, S.; Ruggiero, P.; Barnard, P.; Erikson, L. H.; Tomas, A.; Mendez, F. J.

    2016-12-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this problem complex, yet tractable using computationally expensive numerical models. So far, the skill of statistical-downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical-downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the Southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.

  19. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models together with high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Specifically, we present a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for both large and infinitesimal deformations. This continuum model is developed using a least-squares procedure coupled with the finite difference method, under the assumption that the components of the strain tensor are available at all nodes of the finite element mesh. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
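    A minimal sketch of a least-squares recovery step of the kind described above is given below (Python, NumPy): a low-order polynomial is fitted to nodal strain values and then differentiated by finite differences. The one-dimensional setting, basis choice, and variable names are illustrative assumptions, not the authors' formulation.

      import numpy as np

      # Coarse-mesh node positions and a strain component sampled at the nodes.
      nodes = np.linspace(0.0, 1.0, 6)
      eps_nodal = 0.02 * nodes + 0.005 * nodes**2   # placeholder nodal strain values

      # Least-squares fit of a quadratic to the nodal strain values.
      A = np.column_stack([np.ones_like(nodes), nodes, nodes**2])
      coeffs, *_ = np.linalg.lstsq(A, eps_nodal, rcond=None)

      # Evaluate the recovered strain field on a finer grid and take finite differences.
      x_fine = np.linspace(0.0, 1.0, 51)
      eps_fine = coeffs[0] + coeffs[1] * x_fine + coeffs[2] * x_fine**2
      deps_dx = np.gradient(eps_fine, x_fine)   # central finite differences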

  20. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
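    The expensive step referred to above, drawing correlated Brownian displacements from a mobility (hydrodynamic) matrix via a per-step Cholesky factorization, can be sketched as follows in Python/NumPy. The placeholder mobility matrix below is not a physical hydrodynamic tensor such as Rotne-Prager-Yamakawa; it only illustrates the O(N^3) factorization and the correlated-noise construction.

      import numpy as np

      rng = np.random.default_rng(1)
      n_beads, dt = 50, 1e-3
      dim = 3 * n_beads

      # Placeholder symmetric positive-definite "mobility" matrix M (illustration only).
      A = 0.01 * rng.standard_normal((dim, dim))
      M = np.eye(dim) + 0.5 * (A + A.T) + dim * np.eye(dim)

      # Cholesky factorization M = L L^T, recomputed every time step in full-HI BD: O(N^3).
      L = np.linalg.cholesky(M)

      # Correlated Brownian displacement with covariance 2 * dt * M.
      xi = rng.standard_normal(dim)
      brownian_disp = np.sqrt(2.0 * dt) * (L @ xi)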

  1. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
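    The threshold-filter idea described above can be sketched as follows in Python/NumPy; the genome layout and the fitness weighting are illustrative assumptions rather than the JPL implementation.

      import numpy as np

      def apply_filter(X, left, right):
          # Keep a sounding only if every one of its D features lies inside [left_d, right_d].
          return np.all((X >= left) & (X <= right), axis=1)

      def fitness(genome, X, is_good):
          # genome holds D left thresholds followed by D right thresholds.
          D = X.shape[1]
          left, right = genome[:D], genome[D:]
          kept = apply_filter(X, left, right)
          rejected_bad = np.sum(~kept & ~is_good)   # wasted runs avoided (reward)
          lost_good = np.sum(~kept & is_good)       # useful runs discarded (penalty)
          return rejected_bad - 10.0 * lost_good    # illustrative weighting

      # A genetic algorithm would then evolve a population of genomes to maximize this fitness.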

  2. Computing LORAN time differences with an HP-25 hand calculator

    NASA Technical Reports Server (NTRS)

    Jones, E. D.

    1978-01-01

    A program for an HP-25 or HP-25C hand calculator that will calculate accurate LORAN-C time differences is described and presented. The program is most useful when checking the accuracy of a LORAN-C receiver at a known latitude and longitude without the aid of an expensive computer. It can thus be used to compute time differences for known landmarks or waypoints to predict in advance the approximate readings during a navigation mission.

  3. Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. S.; Antonov, A. A.

    2017-01-01

    It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area and power consumption. Using special SRAM cells and error-correcting codes is often too expensive relative to the performance needed. We discuss the possibility of finding optimal solutions for modern onboard computers for scientific apparatus, focusing on multi-level cache memory design.

  4. Integration of Extended MHD and Kinetic Effects in Global Magnetosphere Models

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Wang, L.; Maynard, K. R. M.; Raeder, J.; Bhattacharjee, A.

    2015-12-01

    Computational models of Earth's geospace environment are an important tool to investigate the science of the coupled solar-wind -- magnetosphere -- ionosphere system, complementing satellite and ground observations with a global perspective. They are also crucial in understanding and predicting space weather, in particular under extreme conditions. Traditionally, global models have employed the one-fluid MHD approximation, which captures large-scale dynamics quite well. However, in Earth's nearly collisionless plasma environment it breaks down on small scales, where ion and electron dynamics and kinetic effects become important, and greatly change the reconnection dynamics. A number of approaches have recently been taken to advance global modeling, e.g., including multiple ion species, adding Hall physics in a Generalized Ohm's Law, embedding local PIC simulations into a larger fluid domain, and also some work on simulating the entire system with hybrid or fully kinetic models, the latter however being too computationally expensive to be run at realistic parameters. We will present an alternate approach, i.e., a multi-fluid moment model that is derived rigorously from the Vlasov-Maxwell system. The advantage is that the computational cost remains manageable, as we are still solving fluid equations. While the evolution equation for each moment is exact, it depends on the next higher-order moment, so that truncating the hierarchy and closing the system to capture the essential kinetic physics is crucial. We implement 5-moment (density, momentum, scalar pressure) and 10-moment (includes pressure tensor) versions of the model, and use local approximations for the heat flux to close the system. We test these closures by local simulations where we can compare directly to PIC / hybrid codes, and employ them in global simulations using the next-generation OpenGGCM to contrast them to MHD / Hall-MHD results and compare with observations.

  5. Sustaining Moore's law with 3D chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  6. Sustaining Moore's law with 3D chips

    DOE PAGES

    DeBenedictis, Erik P.; Badaroglu, Mustafa; Chen, An; ...

    2017-08-01

    Here, rather than continue the expensive and time-consuming quest for transistor replacement, the authors argue that 3D chips coupled with new computer architectures can keep Moore's law on its traditional scaling path.

  7. 46 CFR 404.5 - Guidelines for the recognition of expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to the extent that they conform to depreciation plus an allowance for return on investment (computed... ratemaking purposes. The Director reviews non-pilotage activities to determine if any adversely impact the...

  8. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  9. Behavior-Based Fault Monitoring

    DTIC Science & Technology

    1990-12-03

    processor targeted for avionics and space applications. It appears that the signature monitoring technique can be extended to detect computer viruses as...most common approach is structural duplication. Although effective, duplication is too expensive for all but a few applications. Redundancy can also be...Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989. 7. K.D. Wilken and J.P. Shen

  10. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  11. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage-tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
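    A minimal sketch of such a surrogate is shown below (Python, scikit-learn): a Gaussian process maps a hypothetical two-parameter delamination descriptor to a vector of chirplet parameters. The training data are synthetic placeholders and the kernel choice is an assumption; the actual surrogate is trained on chirplet decompositions of simulated B-scans.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(2)
      X_train = rng.uniform(0.0, 1.0, size=(40, 2))    # (depth, lateral position), normalized
      Y_train = np.column_stack([                      # placeholder "chirplet parameters"
          np.sin(3.0 * X_train[:, 0]) + 0.05 * rng.standard_normal(40),
          X_train[:, 1] ** 2 + 0.05 * rng.standard_normal(40),
      ])

      kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, Y_train)

      # Predict chirplet parameters (and uncertainty) for an unseen delamination configuration.
      y_mean, y_std = gpr.predict(np.array([[0.5, 0.5]]), return_std=True)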

  12. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
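    One illustrative form of such a merit function, written here as an assumption rather than a quotation of the paper, trades off the surrogate's predicted objective against an estimate of approximation error:

      m(x) \;=\; \tilde{f}(x) \;-\; \rho\, e(x),

    where \tilde{f} is the current approximation of the objective, e(x) is a measure of approximation error (for example, the distance from x to previously evaluated sample points), and \rho \geq 0 weights improving the approximation against minimizing it.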

  13. Detection of hidden explosives in different scenarios with the use of nuclear probes

    NASA Astrophysics Data System (ADS)

    Nebbia, G.; Pesente, S.; Lunardon, M.; Moretto, S.; Viesti, G.; Cinausero, M.; Barbui, M.; Fioretto, E.; Filippini, V.; Sudac, D.; Nađ, K.; Blagus, S.; Valković, V.

    2005-04-01

    The detection of landmines by using available technologies is a time consuming, expensive and extremely dangerous job, so that there is a need for a technological breakthrough in this field. Atomic and nuclear physics based sensors might offer new possibilities in de-mining. Technology and methods derived from the studies applied to the detection of landmines can be successfully applied to the screening of cargo in customs inspections.

  14. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.

  15. Smartphones Based Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Al-Hamad, A.; El-Sheimy, N.

    2014-06-01

    The past 20 years have witnessed an explosive growth in the demand for geo-spatial data. This demand has numerous sources and takes many forms; however, the net effect is an ever-increasing thirst for data that is more accurate, has higher density, is produced more rapidly, and is acquired less expensively. For mapping and Geographic Information Systems (GIS) projects, this has been achieved through the major development of Mobile Mapping Systems (MMS). MMS integrate various navigation and remote sensing technologies which allow mapping from moving platforms (e.g. cars, airplanes, boats, etc.) to obtain the 3D coordinates of the points of interest. Such systems obtain accuracies that are suitable for all but the most demanding mapping and engineering applications. However, this accuracy doesn't come cheaply. As a consequence of the platform and navigation and mapping technologies used, even an "inexpensive" system costs well over 200 000 USD. Today's mobile phones are getting ever more sophisticated. Phone makers are determined to reduce the gap between computers and mobile phones. Smartphones, in addition to becoming status symbols, are increasingly being equipped with extended Global Positioning System (GPS) capabilities, Micro Electro Mechanical System (MEMS) inertial sensors, extremely powerful computing power and very high resolution cameras. Using all of these components, smartphones have the potential to replace the traditional land MMS and portable GPS/GIS equipment. This paper introduces an innovative application of smartphones as a very low cost portable MMS for mapping and GIS applications.

  16. Automated 3D renal segmentation based on image partitioning

    NASA Astrophysics Data System (ADS)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
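    For reference, the volume-overlap measures mentioned above have the standard definitions below (not specific to this paper), where A is the automated segmentation and B the gold standard:

      \mathrm{Dice}(A,B) \;=\; \frac{2\,|A \cap B|}{|A| + |B|},
      \qquad
      \mathrm{Jaccard}(A,B) \;=\; \frac{|A \cap B|}{|A \cup B|}.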

  17. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The paper introduces new categories of services that will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  18. How the Reinsurance Industry views the current and future risk landscape

    NASA Astrophysics Data System (ADS)

    Castaldi, A.

    2012-12-01

    The last decade has witnessed some of the most devastating and expensive natural disasters of the last century. Regardless of where the event occurred and what the hazard might have been, the global insurance and reinsurance industry was an active participant in evaluating the events' characteristics as well as providing the financial means for rebuilding the communities affected by these events. The concept of risk transfer from one to many (spreading the risk) is at the very core of the insurers' and reinsurers' business. Reinsurers focus most of their attention on extreme events, including those caused by natural disasters. As such, the reinsurance industry must be expert in understanding and quantifying the potential economic and social impact of extreme natural hazard events throughout the world, whether in the present or a possible future. To understand the global risk, the reinsurance industry has invested a substantial amount of time and resources into analyzing natural hazard events on a global scale. This requires a firm understanding of the hazard (frequency and severity) and what exposure lies in harm's way (vulnerability). Through the development and use of natural hazard models, claims loss reviews, and research and development, the reinsurance industry has an excellent view of the global risks. In this session we will describe how the reinsurance industry models natural hazard risk, how we view natural hazards in the modern world, and our concerns for the future. We will discuss some of the reasons why these events have become increasingly expensive, as well as how we can help reduce our economic susceptibility to these extreme and unpleasant events.

  19. Simulating Halos with the Caterpillar Project

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-04-01

    The Caterpillar Project is a beautiful series of high-resolution cosmological simulations. The goal of this project is to examine the evolution of dark-matter halos like the Milky Way's, to learn about how galaxies like ours formed. This immense computational project is still in progress, but the Caterpillar team is already providing a look at some of its first results. Lessons from dark-matter halos: Why simulate the dark-matter halos of galaxies? Observationally, the formation history of our galaxy is encoded in galactic fossil-record clues, like the tidal debris from disrupted satellite galaxies in the outer reaches of our galaxy, or chemical abundance patterns throughout our galactic disk and stellar halo. But to interpret this information in a way that lets us learn about our galaxy's history, we need to first test galaxy formation and evolution scenarios via cosmological simulations. Then we can compare the end result of these simulations to what we observe today. [Figure caption: the difference that mass resolution makes; the left panel has a mass resolution of 1.5*10^7 solar masses per particle, the right panel 3*10^4 solar masses per particle. Griffen et al. 2016] A computational challenge: Because such simulations are so computationally expensive, previous N-body simulations of the growth of Milky-Way-like halos have consisted of only one or a few halos each. But in order to establish a statistical understanding of how galaxy halos form, and to find out whether the Milky Way's halo is typical or unusual, it is necessary to simulate a larger number of halos. In addition, in order to accurately follow the formation and evolution of substructure within the dark-matter halos, these simulations must be able to resolve the smallest dwarf galaxies, which are around a million solar masses. This requires an extremely high mass resolution, which adds to the computational expense of the simulation. First outcomes: These are the challenges faced by the Caterpillar Project, detailed in a recently published paper led by Brendan Griffen (Massachusetts Institute of Technology). The Caterpillar Project was designed to simulate 70 Milky-Way-size halos (quadrupling the total number of halos that have been simulated in the past) at a high mass resolution (10,000 solar masses per particle) and time resolution (5 Myr per snapshot). The project is extremely computationally intense, requiring 14 million CPU hours and 700 TB of data storage. [Figure caption: mass evolution of the first 24 Caterpillar halos, selected to be Milky-Way-size at z=0; the inset panel shows the mass evolution normalized by the halo mass at z=0, demonstrating the highly varied evolution these halos undergo. Griffen et al. 2016] In this first study, Griffen and collaborators show the end states for the first 24 halos of the project, evolved from a large redshift to today (z=0). They use these initial results to demonstrate the integrity of their data and the utility of their methods, which include new halo-finding techniques that recover more substructure within each halo. The first results from the Caterpillar Project are already enough to show clear general trends, such as the highly variable paths the different halos take as they merge, accrete, and evolve, as well as how different their end states can be.
    Statistically examining the evolution of these halos is an important next step in providing insight into the origin and evolution of the Milky Way, and helping us to understand how our galaxy differs from other galaxies of similar mass. Keep an eye out for future results from this project! Bonus: Check out this video (make sure to watch in HD!) of how the first 24 Milky-Way-like halos from the Caterpillar simulations form. Seeing these halos evolve simultaneously is an awesome way to identify the similarities and differences between them. Citation: Brendan F. Griffen et al. 2016, ApJ, 818, 10. doi:10.3847/0004-637X/818/1/10

  20. Use of computer games as an intervention for stroke.

    PubMed

    Proffitt, Rachel M; Alankus, Gazihan; Kelleher, Caitlin L; Engsberg, Jack R

    2011-01-01

    Current rehabilitation for persons with hemiparesis after stroke requires high numbers of repetitions to be in accordance with contemporary motor learning principles. The motivational characteristics of computer games can be harnessed to create engaging interventions for persons with hemiparesis after stroke that incorporate this high number of repetitions. The purpose of this case report was to test the feasibility of using computer games as a 6-week home therapy intervention to improve upper extremity function for a person with stroke. One person with left upper extremity hemiparesis after stroke participated in a 6-week home therapy computer game intervention. The games were customized to her preferences and abilities and modified weekly. Her performance was tracked and analyzed. Data from pre-, mid-, and postintervention testing using standard upper extremity measures and the Reaching Performance Scale (RPS) were analyzed. After 3 weeks, the participant demonstrated increased upper extremity range of motion at the shoulder and decreased compensatory trunk movements during reaching tasks. After 6 weeks, she showed functional gains in activities of daily living (ADLs) and instrumental ADLs despite no further improvements on the RPS. Results indicate that computer games have the potential to be a useful intervention for people with stroke. Future work will add additional support to quantify the effectiveness of the games as a home therapy intervention for persons with stroke.

  1. The MPO system for automatic workflow documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abla, G.; Coviello, E. N.; Flanagan, S. M.

    Data from large-scale experiments and extreme-scale computing is expensive to produce and may be used for critical applications. However, it is not the mere existence of data that is important, but our ability to make use of it. Experience has shown that when metadata is better organized and more complete, the underlying data becomes more useful. Traditionally, capturing the steps of scientific workflows and metadata was the role of the lab notebook, but the digital era has resulted instead in the fragmentation of data, processing, and annotation. Here, this article presents the Metadata, Provenance, and Ontology (MPO) System, the software that can automate the documentation of scientific workflows and associated information. Based on recorded metadata, it provides explicit information about the relationships among the elements of workflows in notebook form augmented with directed acyclic graphs. A set of web-based graphical navigation tools and Application Programming Interface (API) have been created for searching and browsing, as well as programmatically accessing the workflows and data. We describe the MPO concepts and its software architecture. We also report the current status of the software as well as the initial deployment experience.

  2. Radiation and scattering from cylindrically conformal printed antennas. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.

    1994-01-01

    Microstrip patch antennas offer considerable advantages in terms of weight, aerodynamic drag, cost, flexibility, and observables over more conventional protruding antennas. These flat patch antennas were first proposed over thirty years ago by Deschamps in the United States and Gutton and Baisinot in France. Such antennas have been analyzed and developed for planar as well as curved platforms. However, the methods used in these designs employ gross approximations, suffer from extreme computational burden, or require expensive physical experiments. The goal of this thesis is to develop accurate and efficient numerical modeling techniques which represent actual antenna structures mounted on curved surfaces with a high degree of fidelity. In this thesis, the finite element method is extended to cavity-backed conformal antenna arrays embedded in a circular, metallic, infinite cylinder. Both the boundary integral and absorbing boundary mesh closure conditions will be used for terminating the mesh. These two approaches will be contrasted and used to study the scattering and radiation behavior of several useful antenna configurations. An important feature of this study will be to examine the effect of curvature and cavity size on the scattering and radiation properties of wraparound conformal antenna arrays.

  3. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  4. The MPO system for automatic workflow documentation

    DOE PAGES

    Abla, G.; Coviello, E. N.; Flanagan, S. M.; ...

    2016-04-18

    Data from large-scale experiments and extreme-scale computing is expensive to produce and may be used for critical applications. However, it is not the mere existence of data that is important, but our ability to make use of it. Experience has shown that when metadata is better organized and more complete, the underlying data becomes more useful. Traditionally, capturing the steps of scientific workflows and metadata was the role of the lab notebook, but the digital era has resulted instead in the fragmentation of data, processing, and annotation. Here, this article presents the Metadata, Provenance, and Ontology (MPO) System, the software that can automate the documentation of scientific workflows and associated information. Based on recorded metadata, it provides explicit information about the relationships among the elements of workflows in notebook form augmented with directed acyclic graphs. A set of web-based graphical navigation tools and Application Programming Interface (API) have been created for searching and browsing, as well as programmatically accessing the workflows and data. We describe the MPO concepts and its software architecture. We also report the current status of the software as well as the initial deployment experience.

  5. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  6. A Method for Aircraft Concept Selection Using Multicriteria Interactive Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Buonanno, Michael; Mavris, Dimitri

    2005-01-01

    The problem of aircraft concept selection has become increasingly difficult in recent years as a result of a change from performance as the primary evaluation criteria of aircraft concepts to the current situation in which environmental effects, economics, and aesthetics must also be evaluated and considered in the earliest stages of the decision-making process. This has prompted a shift from design using historical data regression techniques for metric prediction to the use of physics-based analysis tools that are capable of analyzing designs outside of the historical database. The use of optimization methods with these physics-based tools, however, has proven difficult because of the tendency of optimizers to exploit assumptions present in the models and drive the design towards a solution which, while promising to the computer, may be infeasible due to factors not considered by the computer codes. In addition to this difficulty, the number of discrete options available at this stage may be unmanageable due to the combinatorial nature of the concept selection problem, leading the analyst to arbitrarily choose a sub-optimum baseline vehicle. These concept decisions such as the type of control surface scheme to use, though extremely important, are frequently made without sufficient understanding of their impact on the important system metrics because of a lack of computational resources or analysis tools. This paper describes a hybrid subjective/quantitative optimization method and its application to the concept selection of a Small Supersonic Transport. The method uses Genetic Algorithms to operate on a population of designs and promote improvement by varying more than sixty parameters governing the vehicle geometry, mission, and requirements. In addition to using computer codes for evaluation of quantitative criteria such as gross weight, expert input is also considered to account for criteria such as aeroelasticity or manufacturability which may be impossible or too computationally expensive to consider explicitly in the analysis. Results indicate that concepts resulting from the use of this method represent designs which are promising to both the computer and the analyst, and that a mapping between concepts and requirements that would not otherwise be apparent is revealed.

  7. Switching from computer to microcomputer architecture education

    NASA Astrophysics Data System (ADS)

    Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore

    2010-03-01

    In recent decades, the technological and scientific evolution of the computing discipline has widely affected research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to microcomputer architecture. The authors present their strategies towards a successful crossing of boundaries between engineering disciplines. This communication aims to provide a different perspective on professional courses that are, nowadays, addressed at the expense of traditional courses.

  8. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive.

  9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
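
    As a concrete illustration of the subspace-recycling idea described above (a hedged sketch only, not the MADS/Julia implementation referenced in the abstract; the function names are invented), the Python fragment below builds one Lanczos/Krylov basis for the Gauss-Newton system and reuses it for every damping parameter:

      import numpy as np

      def lanczos(matvec, b, k):
          # Orthonormal Krylov basis Q and tridiagonal coefficients for span{b, Ab, ..., A^(k-1) b}.
          n = b.shape[0]
          Q = np.zeros((n, k))
          alpha = np.zeros(k)
          beta = np.zeros(k)
          Q[:, 0] = b / np.linalg.norm(b)
          q_prev, b_prev = np.zeros(n), 0.0
          for j in range(k):
              w = matvec(Q[:, j]) - b_prev * q_prev
              alpha[j] = Q[:, j] @ w
              w -= alpha[j] * Q[:, j]
              w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)    # full reorthogonalization
              b_next = np.linalg.norm(w)
              if j + 1 == k or b_next < 1e-12:
                  return Q[:, :j + 1], alpha[:j + 1], beta[:j]
              beta[j] = b_next
              Q[:, j + 1] = w / b_next
              q_prev, b_prev = Q[:, j], b_next

      def lm_steps_recycled(J, r, dampings, k=30):
          # Approximate Levenberg-Marquardt steps for several damping values lambda,
          # solving (J^T J + lambda I) d = -J^T r in one shared Krylov subspace:
          # shifted systems have the same Krylov subspace, so the basis is recycled.
          g = J.T @ r
          Q, alpha, beta = lanczos(lambda v: J.T @ (J @ v), g, k)
          m = len(alpha)
          T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
          rhs = np.zeros(m)
          rhs[0] = np.linalg.norm(g)                      # Q^T g = ||g|| e_1
          return {lam: -Q @ np.linalg.solve(T + lam * np.eye(m), rhs) for lam in dampings}

    Because the Krylov subspace of the shifted system does not depend on the damping parameter, the small tridiagonal solve is the only per-damping cost in this sketch.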

  10. Extremely Low Operating Current Resistive Memory Based on Exfoliated 2D Perovskite Single Crystals for Neuromorphic Computing.

    PubMed

    Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling

    2017-12-26

    Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfO x , AlO x , TaO x , etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. The ionic migration and diffusion as an artificial synapse is realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.

  11. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  12. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Pugmire, David; Rogers, David

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  13. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  14. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  15. Estimation of resist sensitivity for extreme ultraviolet lithography using an electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyama, Tomoko Gowa, E-mail: ohyama.tomoko@qst.go.jp; Oshima, Akihiro; Tagawa, Seiichi, E-mail: tagawa@sanken.osaka-u.ac.jp

    2016-08-15

    It is a challenge to obtain sufficient extreme ultraviolet (EUV) exposure time for fundamental research on developing a new class of high-sensitivity resists for extreme ultraviolet lithography (EUVL), because EUV exposure tools are few and very expensive. In this paper, we introduce an easy method for predicting EUV resist sensitivity by using conventional electron beam (EB) sources. If the chemical reactions induced by the two ionizing sources (EB and EUV) are the same, the required absorbed energies corresponding to each required exposure dose (sensitivity) for EB and EUV would be almost equivalent. Based on this theory, we calculated the resist sensitivities for the EUV/soft X-ray region. The estimated sensitivities were found to be comparable to the experimentally obtained sensitivities. It was concluded that EB is a very useful exposure tool that accelerates the development of new resists and sensitivity enhancement processes for 13.5 nm EUVL and 6.x nm beyond-EUVL (BEUVL).

  16. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  17. Fast All-Sky Radiation Model for Solar Applications (FARMS): A Brief Overview of Mechanisms, Performance, and Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations--e.g., the two-stream approach and the delta-M truncation scheme--can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the simulated diffuse horizontal irradiance and direct normal irradiance for cloudy conditions from the RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved using GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for global circulation modeling with a two-stream approximation and compared to measurements taken from the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.
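
    The speedup principle, replacing a per-condition radiative transfer solve with a precomputed parameterization, can be illustrated generically (a hedged sketch with a made-up transmittance function and grid, not the FARMS parameterization itself):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def expensive_rte(cloud_tau, cos_zenith):
          # Stand-in for an expensive radiative transfer solve (toy Beer-Lambert form).
          return np.exp(-cloud_tau / np.maximum(cos_zenith, 0.05))

      # Offline: run the expensive model once over a coarse grid of conditions.
      taus = np.linspace(0.0, 50.0, 26)
      mus = np.linspace(0.05, 1.0, 20)
      table = expensive_rte(taus[:, None], mus[None, :])
      lookup = RegularGridInterpolator((taus, mus), table)

      # Online: evaluate many cloud/sun conditions by interpolation alone.
      rng = np.random.default_rng(0)
      conditions = np.column_stack([rng.uniform(0.0, 50.0, 1_000_000),
                                    rng.uniform(0.05, 1.0, 1_000_000)])
      fast_transmittance = lookup(conditions)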

  18. Portable and programmable clinical EOG diagnostic system.

    PubMed

    Chen, S C; Tsai, T T; Luo, C H

    2000-01-01

    Monitoring eye movements is clinically important in the diagnosis of diseases of the central nervous system. Electrooculography (EOG) is one method of obtaining such records; it uses skin electrodes and exploits the anterior-posterior polarization of the eye. A new EOG diagnostic system has been developed that utilizes two off-the-shelf portable notebook computers, one projector, and simple electronic hardware. It can be operated under Windows 95, 98, and NT, and has significant advantages over other similar equipment, including programmability, portability, improved safety, and low cost. Portability of the instrument is especially important for acutely ill or handicapped patients. The purpose of this paper is to introduce the techniques of computer animation, data acquisition, real-time analysis of measured data, and database management used to implement a portable, programmable, and inexpensive contacting EOG instrument, which can conveniently replace the expensive, inflexible, and large commercially available EOG instruments. Many stimulation patterns for clinical application can be created easily in different shapes, time sequences, and colours by programming in the Delphi language. With the help of Winstar (a software package that is used to control I/O and interrupt functions of the computer under Windows 95, 98, and NT), the I/O communication between the two notebook computers and the A/D interface module can be effectively programmed. In addition, the new EOG diagnostic system is battery operated, giving it the advantages of low noise as well as electrical isolation. Two kinds of EOG tests, pursuit and saccade, were performed on 20 normal subjects with this new portable and programmable instrument. Based on the test results, the performance of the new instrument is superior to that of other commercially available instruments. In conclusion, we hope that this new system will make it more convenient for doctors and researchers to perform clinical EOG diagnosis and basic medical science research.

  19. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors.
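
    The multithreading ingredient of such a scheme can be sketched in outline with a simple parallel-beam filtered backprojection split across worker threads (a hedged, simplified Python illustration, not the authors' optimized SIMD/C++ implementation; the function names are invented):

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      def ramp_filter(sinogram):
          # Apply a ramp filter to each projection (rows = angles, columns = detector bins).
          freqs = np.abs(np.fft.fftfreq(sinogram.shape[1]))
          return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

      def backproject(filtered, angles, size):
          # Backproject a block of filtered projections onto a size x size grid.
          recon = np.zeros((size, size))
          xs = np.arange(size) - size / 2.0
          X, Y = np.meshgrid(xs, xs)
          center = filtered.shape[1] / 2.0
          for proj, theta in zip(filtered, angles):
              t = np.clip((X * np.cos(theta) + Y * np.sin(theta) + center).astype(int),
                          0, filtered.shape[1] - 1)
              recon += proj[t]
          return recon

      def fbp_multicore(sinogram, angles, size, workers=4):
          # Filter once, then split the angles across worker threads and sum the results.
          angles = np.asarray(angles)
          filtered = ramp_filter(sinogram)
          chunks = np.array_split(np.arange(len(angles)), workers)
          with ThreadPoolExecutor(max_workers=workers) as ex:
              parts = ex.map(lambda c: backproject(filtered[c], angles[c], size), chunks)
              return sum(parts) * np.pi / (2.0 * len(angles))

    Chunking the projection angles across threads mirrors the multithreaded-computation element of the scheme; most of the heavy array work here runs inside numpy kernels rather than the Python interpreter.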

  20. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA), which we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images at reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved by several folds using the latest quad-core processors. PMID:18256731

  1. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  2. Didactic Dissonance: Teacher Roles in Computer Gaming Situations in Kindergartens

    ERIC Educational Resources Information Center

    Vangsnes, Vigdis; Økland, Nils Tore Gram

    2015-01-01

    In computer gaming situations in kindergartens, the pre-school teacher's function can be viewed in a continuum. At one extreme is the teacher who takes an intervening role and at the other extreme is the teacher who chooses to restrict herself/himself to an organising or distal role. This study shows that both the intervening position and the…

  3. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  4. 25 CFR 700.163 - Expenses in searching for replacement location-nonresidential moves.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., including— (a) Transportation computed at prevailing federal per diem and mileage allowance schedules; meals and lodging away from home; (b) Time spent searching, based on reasonable earnings; (c) Fees paid to a...

  5. SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...

  6. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  7. 47 CFR 54.639 - Ineligible expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., including the following: i. Computers, including servers, and related hardware (e.g., printers, scanners, laptops), unless used exclusively for network management, maintenance, or other network operations; ii... installation/construction; marketing studies, marketing activities, or outreach to potential network members...

  8. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into the active and secondary space. The first-order wave function and the second- and third-order energies are easily computed using a one-step Davidson iteration. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.
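
    For orientation only, the second-order energy in such an active/secondary partitioning follows the textbook Rayleigh-Schrodinger form (written schematically here; this is not the authors' exact working equation):

      E^{(2)} \;=\; \sum_{k \,\in\, \text{secondary}}
          \frac{\bigl|\langle \Phi_k \,|\, \hat{V} \,|\, \Psi^{(0)} \rangle\bigr|^{2}}
               {E^{(0)} - E_k^{(0)}}

    where $\Psi^{(0)}$ is the zeroth-order (active-space) wave function, $\hat{V}$ the perturbation, and $E_k^{(0)}$ the zeroth-order energies of the secondary-space states $\Phi_k$.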

  9. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
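
    A minimal sketch of the multi-chain idea (with a toy posterior standing in for an expensive forward model such as the Community Land Model; all names and settings are illustrative):

      import numpy as np

      def log_post(theta):
          # Toy log-posterior (standard normal); a real application would call an
          # expensive forward model here.
          return -0.5 * np.sum(theta ** 2)

      def metropolis_chain(theta0, n_steps, step=0.5, rng=None):
          # A single random-walk Metropolis chain.
          if rng is None:
              rng = np.random.default_rng()
          chain = np.empty((n_steps, theta0.size))
          theta, lp = np.array(theta0, dtype=float), log_post(theta0)
          for i in range(n_steps):
              prop = theta + step * rng.standard_normal(theta.size)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              chain[i] = theta
          return chain

      def gelman_rubin(chains):
          # Potential scale reduction factor (R-hat) over equal-length chains.
          arr = np.stack(chains)                      # shape (n_chains, n_samples, n_params)
          _, n, _ = arr.shape
          means = arr.mean(axis=1)
          B = n * means.var(axis=0, ddof=1)           # between-chain variance
          W = arr.var(axis=1, ddof=1).mean(axis=0)    # within-chain variance
          return np.sqrt(((n - 1) / n * W + B / n) / W)

      # Several independent chains started from dispersed points.
      rng = np.random.default_rng(0)
      starts = [rng.normal(scale=5.0, size=3) for _ in range(4)]
      chains = [metropolis_chain(s, 5000, rng=np.random.default_rng(i)) for i, s in enumerate(starts)]
      print("R-hat per parameter:", gelman_rubin([c[1000:] for c in chains]))

    Dispersed starting points plus the R-hat diagnostic are what let the ensemble reveal, and partially compensate for, chains that start in poor regions.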

  10. On Using Surrogates with Genetic Programming.

    PubMed

    Hildebrandt, Torsten; Branke, Jürgen

    2015-01-01

    One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
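
    A hedged sketch of the phenotype-based surrogate idea follows (candidates are ordinary Python callables here rather than GP trees, the stand-in cost function replaces a costly simulation, and the nearest-neighbour model is one simple choice of surrogate):

      import numpy as np

      REFERENCE_SITUATIONS = np.linspace(-1.0, 1.0, 20)

      def phenotype(candidate):
          # Cheap phenotypic characterization: the candidate's decisions (outputs)
          # on a fixed set of reference situations.
          return np.array([candidate(x) for x in REFERENCE_SITUATIONS])

      def expensive_fitness(candidate):
          # Stand-in for a costly simulation (e.g., a job-shop run).
          target = np.sin(3.0 * REFERENCE_SITUATIONS)
          return float(np.mean((phenotype(candidate) - target) ** 2))

      class NearestNeighborSurrogate:
          # Estimates fitness from the most phenotypically similar evaluated candidate.
          def __init__(self):
              self.phenos, self.fits = [], []

          def add(self, candidate, fitness):
              self.phenos.append(phenotype(candidate))
              self.fits.append(fitness)

          def estimate(self, candidate):
              p = phenotype(candidate)
              dists = [np.linalg.norm(p - q) for q in self.phenos]
              return self.fits[int(np.argmin(dists))]

      # Archive a few fully evaluated candidates, then pre-screen new offspring cheaply.
      surrogate = NearestNeighborSurrogate()
      for coeff in (0.5, 1.0, 2.0, 3.0):
          cand = (lambda c: (lambda x: c * x))(coeff)
          surrogate.add(cand, expensive_fitness(cand))
      offspring = lambda x: 2.9 * x
      print("surrogate estimate:", surrogate.estimate(offspring))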

  11. Space-filling designs for computer experiments: A review

    DOE PAGES

    Joseph, V. Roshan

    2016-01-29

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming, and therefore directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.
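
    A small example of generating and scaling a space-filling design with off-the-shelf tools (a plain Latin hypercube; the maximum projection design highlighted in the review optimizes a different, projection-oriented criterion, and the input ranges below are hypothetical):

      from scipy.stats import qmc

      # 40-run Latin hypercube over 4 simulator inputs.
      sampler = qmc.LatinHypercube(d=4, seed=1)
      unit_design = sampler.random(n=40)                  # points in [0, 1]^4
      design = qmc.scale(unit_design,
                         [0.0, 10.0, 1e-3, 200.0],        # lower bounds of the inputs
                         [1.0, 50.0, 1e-1, 400.0])        # upper bounds of the inputs
      print("centered L2 discrepancy:", qmc.discrepancy(unit_design))

    Each design row would then be fed to the expensive simulator, and a statistical emulator fit to the results before any optimization is attempted.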

  12. Space-filling designs for computer experiments: A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, V. Roshan

    Improving the quality of a product/process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time-consuming, and therefore directly doing the optimization on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. In the review, special emphasis is given to a recently developed space-filling design called the maximum projection design. Furthermore, its advantages are illustrated using a simulation conducted for optimizing a milling process.

  13. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in a financial market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
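
    The Ising example maps naturally onto massively parallel hardware because a checkerboard decomposition lets half of the lattice update independently. The following CPU/numpy sketch shows the algorithm itself under that assumption; a GPU port would express the same sweep as a CUDA kernel:

      import numpy as np

      def checkerboard_sweep(spins, beta, rng):
          # One Metropolis sweep of the 2D Ising model (J = 1, periodic boundaries).
          # Each of the two interleaved sub-lattices can be updated fully in parallel,
          # which is what makes the sweep map well onto GPU thread blocks.
          parity_grid = np.indices(spins.shape).sum(axis=0) % 2
          for parity in (0, 1):
              neighbors = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                           np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
              dE = 2.0 * spins * neighbors
              accept = rng.random(spins.shape) < np.exp(-beta * dE)
              spins = np.where(accept & (parity_grid == parity), -spins, spins)
          return spins

      rng = np.random.default_rng(0)
      spins = rng.choice([-1, 1], size=(128, 128))
      for _ in range(200):
          spins = checkerboard_sweep(spins, beta=0.44, rng=rng)
      print("magnetization per spin:", spins.mean())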

  14. Extreme hydronephrosis due to uretropelvic junction obstruction in infant (case report).

    PubMed

    Krzemień, Grażyna; Szmigielska, Agnieszka; Bombiński, Przemysław; Barczuk, Marzena; Biejat, Agnieszka; Warchoł, Stanisław; Dudek-Warchoł, Teresa

    2016-01-01

    Hydronephrosis is one of the most common congenital abnormalities of the urinary tract. The left kidney is more commonly affected than the right, and the condition is more common in males. The aim was to determine the role of ultrasonography, renal dynamic scintigraphy and lower-dose computed tomography urography in the preoperative diagnostic workup of an infant with extreme hydronephrosis. We present a boy with antenatally diagnosed hydronephrosis. In serial postnatal ultrasonography, renal scintigraphy and computed tomography urography we observed slightly declining function in the dilated kidney and increasing pelvic dilatation. Pyeloplasty was performed at the age of four months with a good result. Results of ultrasonography and renal dynamic scintigraphy in a child with extreme hydronephrosis can be difficult to assess; therefore, a lower-dose computed tomography urography should be performed before the surgical procedure.

  15. A computer-based physics laboratory apparatus: Signal generator software

    NASA Astrophysics Data System (ADS)

    Thanakittiviroon, Tharest; Liangrocapart, Sompong

    2005-09-01

    This paper describes a computer-based physics laboratory apparatus to replace expensive instruments such as high-precision signal generators. This apparatus uses a sound card in a common personal computer to give sinusoidal signals with an accurate frequency that can be programmed to give different frequency signals repeatedly. An experiment on standing waves on an oscillating string uses this apparatus. In conjunction with interactive lab manuals, which have been developed using personal computers in our university, we achieve a complete set of low-cost, accurate, and easy-to-use equipment for teaching a physics laboratory.

  16. Entropy, extremality, euclidean variations, and the equations of motion

    NASA Astrophysics Data System (ADS)

    Dong, Xi; Lewkowycz, Aitor

    2018-01-01

    We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G N , providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G N . We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
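
    Schematically, the relations the abstract refers to are the extremization of the generalized entropy and the identification of the boundary modular Hamiltonian with the area operator plus the bulk modular Hamiltonian (standard forms quoted here for orientation, not the authors' derivation):

      S_{\text{gen}}(\sigma) \;=\; \frac{\operatorname{Area}(\sigma)}{4 G_N} \;+\; S_{\text{bulk}}(\sigma),
      \qquad
      \sigma_{\text{QES}} \;=\; \operatorname*{ext}_{\sigma}\, S_{\text{gen}}(\sigma),

      K_{\text{bdy}} \;=\; \frac{\hat{A}(\sigma_{\text{QES}})}{4 G_N} \;+\; K_{\text{bulk}} \;+\; O(G_N)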

  17. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    DOE PAGES

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; ...

    2015-01-21

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. Furthermore, a tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial.

  18. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    PubMed Central

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; Blackwood, Christopher B.; Rosen, Gail L.

    2015-01-01

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. A tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial. Availability: http://www.ece.drexel.edu/gailr/EESI/tutorial.php. PMID:25607539

  19. An approach to the design of wide-angle optical systems with special illumination and IFOV requirements

    NASA Astrophysics Data System (ADS)

    Pravdivtsev, Andrey V.

    2012-06-01

    The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Unevenness of illumination reduces the dynamic range of the system, which negatively influences the system's ability to perform its task. The resulting illumination on the detector depends, among other factors, on the IFOV changes. It is also necessary to consider the IFOV in the synthesis of data processing algorithms, as it directly affects the potential "signal/background" ratio for the case of statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented. The solution can be used for optical systems whose field of view is greater than 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes to use analytical expressions for some of the characteristics on which illumination depends. The remaining characteristics are determined numerically using less computationally expensive operands, and this calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used. This reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of the requirements for the quality of the optical system, reduces the time and resources required to develop an optical system, and allows creating more efficient EOS.

  20. Automation Rover for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed [1]. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  1. Out-of-pocket payment for surgery in Uganda: The rate of impoverishing and catastrophic expenditure at a government hospital.

    PubMed

    Anderson, Geoffrey A; Ilcisin, Lenka; Kayima, Peter; Abesiga, Lenard; Portal Benitez, Noralis; Ngonzi, Joseph; Ronald, Mayanja; Shrime, Mark G

    2017-01-01

    It is Ugandan governmental policy that all surgical care delivered at government hospitals in Uganda is to be provided to patients free of charge. In practice, however, frequent stock-outs and broken equipment require patients to pay for large portions of their care out of their own pocket. The purpose of this study was to determine the financial impact on patients who undergo surgery at a government hospital in Uganda. Every surgical patient discharged from a surgical ward at a large regional referral hospital in rural southwestern Uganda over a 3-week period in April 2016 was asked to participate. Patients who agreed were surveyed to determine their baseline level of poverty and to assess the financial impact of the hospitalization. Rates of impoverishment and catastrophic expenditure were then calculated. An "impoverishing expense" is defined as one that pushes a household below published poverty thresholds. A "catastrophic expense" was incurred if the patient spent more than 10% of their average annual expenditures. We interviewed 295 out of a possible 320 patients during the study period. 46% (CI 40-52%) of our patients met the World Bank's definition of extreme poverty ($1.90/person/day). After receiving surgical care an additional 10 patients faced extreme poverty, and 5 patients were newly impoverished by the World Bank's definition ($3.10/person/day). 31% of patients faced a catastrophic expenditure of more than 10% of their estimated total yearly expenses. 53% of the households in our study had to borrow money to pay for care, 21% had to sell possessions, and 17% lost a job as a result of the patient's hospitalization. Only 5% of our patients received some form of charity. Despite the government's policy to provide "free care," undergoing an operation at a government hospital in Uganda can result in a severe economic burden to patients and their families. Alternative financing schemes to provide financial protection are critically needed.
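
    The two study definitions reduce to simple threshold checks; the sketch below uses invented household records purely to show the arithmetic (10% of annual expenditure for a catastrophic expense, the $1.90/person/day line for impoverishment):

      POVERTY_LINE_PER_DAY = 1.90   # World Bank extreme poverty line, USD per person per day

      households = [  # invented records: household size, annual expenditure, surgery payment (USD)
          {"members": 5, "annual_expenditure": 2200.0, "surgery_payment": 310.0},
          {"members": 3, "annual_expenditure": 5100.0, "surgery_payment": 95.0},
      ]

      for hh in households:
          before = hh["annual_expenditure"] / hh["members"] / 365
          after = (hh["annual_expenditure"] - hh["surgery_payment"]) / hh["members"] / 365
          # Catastrophic: the payment exceeds 10% of annual expenditure.
          hh["catastrophic"] = hh["surgery_payment"] > 0.10 * hh["annual_expenditure"]
          # Impoverishing: the payment pushes per-capita spending below the poverty line.
          hh["impoverished"] = before >= POVERTY_LINE_PER_DAY and after < POVERTY_LINE_PER_DAY

      print(households)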

  2. A European Flagship Programme on Extreme Computing and Climate

    NASA Astrophysics Data System (ADS)

    Palmer, Tim

    2017-04-01

    In 2016, an outline proposal co-authored by a number of leading climate modelling scientists from around Europe for a (c. 1 billion euro) flagship project on exascale computing and high-resolution global climate modelling was sent to the EU via its Future and Emerging Flagship Technologies Programme. The project is formally entitled "A Flagship European Programme on Extreme Computing and Climate (EPECC)". In this talk I will outline the reasons why I believe such a project is needed and describe the current status of the project. I will leave time for some discussion.

  3. Attitude to the Use of the Computer for Learning Biological Concepts and Achievement of Students in an Environment Dominated by Indigenous Technology.

    ERIC Educational Resources Information Center

    Jegede, Olugbemiro J.; And Others

    The use of computers to facilitate learning is yet to make an appreciable in-road into the teaching-learning process in most developing Third World countries. The purchase cost and maintenance expenses of the equipment are the major inhibiting factors related to adoption of this high technology in these countries. This study investigated: (1) the…

  4. Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities

    DTIC Science & Technology

    1993-09-01

    [Report documentation form and table-of-contents residue omitted; recoverable headings: Computer Disaster Recovery; PC and LAN Lessons Learned; Distributed Architectures; Backups.] "…amount of expense, but no client problems." (Leeke, 1993, p. 8) Distributed Architectures: The majority of operations that were disrupted by the…

  5. Network Support for Group Coordination

    DTIC Science & Technology

    2000-01-01

    …telecommuting and ubiquitous computing [40], the advent of networked multimedia, and less expensive technology have shifted telecollaboration into… [report documentation form residue omitted] …participants A and B, the payoff structure for choosing two actions i and j is P = A_ij + B_ij. If P = 0, then the interaction is called a zero-sum game, and…

  6. High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems

    DTIC Science & Technology

    2017-05-01

    …addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API… expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU… (ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication)

  7. Development of Multidisciplinary, Multifidelity Analysis, Integration, and Optimization of Aerospace Vehicles

    DTIC Science & Technology

    2010-02-27

    …investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and… design stage is used to further refine the analysis, narrowing the design to a handful of options. (Figure 1. Integrated Hierarchical Framework.) In… computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL.

  8. COST FUNCTION STUDIES FOR POWER REACTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heestand, J.; Wos, L.T.

    1961-11-01

    A function to evaluate the cost of electricity produced by a nuclear power reactor was developed. The basic equation, revenue = capital charges + profit + operating expenses, was expanded in terms of various cost parameters to enable analysis of multiregion nuclear reactors with uranium and/or plutonium for fuel. A corresponding IBM 704 computer program, which will compute either the price of electricity or the value of plutonium, is presented in detail. (auth)
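
    The basic relation quoted above leads directly to a break-even electricity price; a minimal sketch with hypothetical cost figures (the original IBM 704 program, of course, expanded each term into many cost parameters):

      def electricity_price(capital_charges, profit, operating_expenses, net_mwh):
          # Break-even price (USD/MWh): annual revenue requirement divided by net generation,
          # following revenue = capital charges + profit + operating expenses.
          revenue_requirement = capital_charges + profit + operating_expenses
          return revenue_requirement / net_mwh

      # Hypothetical plant: $60M capital charges, $10M profit, $30M fuel and O&M,
      # 800 MW(e) at an 85% capacity factor.
      net_mwh = 800 * 0.85 * 8760
      print(round(electricity_price(60e6, 10e6, 30e6, net_mwh), 2), "USD/MWh")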

  9. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    PubMed

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
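
    In practice, once a sample of alignment scores for unrelated (e.g., shuffled) sequence pairs is available, the Gumbel location and scale can be fit directly and used to assign significance; this sketch uses simulated stand-in scores rather than the mixture-distribution derivation of the paper:

      import numpy as np
      from scipy.stats import gumbel_r

      # Stand-in scores: in practice these would be optimal gapped alignment scores
      # for many pairs of unrelated (e.g., shuffled) sequences.
      rng = np.random.default_rng(0)
      scores = gumbel_r.rvs(loc=25.0, scale=6.0, size=5000, random_state=rng)

      # Fit the extreme value (Gumbel) location and scale to the score sample.
      mu, scale = gumbel_r.fit(scores)

      # Tail probability of an observed alignment score under the fitted null model.
      observed = 58.0
      p_value = gumbel_r.sf(observed, loc=mu, scale=scale)
      print(f"mu={mu:.2f}, scale={scale:.2f}, P(S >= {observed}) = {p_value:.2e}")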

  10. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  11. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
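
    The cheap-reanalysis idea can be shown in miniature for the symmetric special case (the article treats general non-hermitian matrices, where left and right eigenvectors are both needed; the matrices and the design perturbation below are random stand-ins):

      import numpy as np

      # Hypothetical setting: a symmetric system matrix A(p) depending on one design
      # variable p, with the derivative dA/dp known analytically.
      rng = np.random.default_rng(1)
      n = 200
      A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2          # baseline matrix
      dA_dp = rng.standard_normal((n, n)); dA_dp = (dA_dp + dA_dp.T) / 2

      w, V = np.linalg.eigh(A0)                  # expensive baseline eigensolution
      lam0, v0 = w[-1], V[:, -1]                 # eigenvalue of interest

      dlam_dp = v0 @ dA_dp @ v0                  # eigenvalue sensitivity (v0 is normalized)

      dp = 0.05
      lam_approx = lam0 + dlam_dp * dp           # cheap first-order reanalysis
      lam_exact = np.linalg.eigh(A0 + dp * dA_dp)[0][-1]
      print(f"approx {lam_approx:.4f} vs exact {lam_exact:.4f}")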

  12. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied in image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm for image retrieval starts with short hashes, and a full hash is then applied only to the candidates that remain. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and time expenses.
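
    A generic two-stage cascade of this kind can be sketched as follows (a short average hash prunes the database before a longer DCT-based hash is compared; the specific hashes and thresholds are illustrative, not the authors' construction):

      import numpy as np
      from scipy.fft import dctn

      def block_reduce(gray, size):
          # Average-pool a 2D grayscale array down to (size, size).
          h, w = gray.shape
          g = gray[:h - h % size, :w - w % size]
          return g.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

      def short_hash(gray):
          # Cheap 64-bit average hash used as the first stage of the cascade.
          small = block_reduce(gray, 8)
          return (small > small.mean()).ravel()

      def full_hash(gray, keep=16):
          # Longer DCT-based hash used as the accurate second stage.
          small = block_reduce(gray, 32)
          coeffs = dctn(small, norm="ortho")[:keep, :keep]
          return (coeffs > np.median(coeffs)).ravel()

      def hamming(a, b):
          return int(np.count_nonzero(a != b))

      def cascade_search(query, database, short_thresh=10, full_thresh=40):
          # Stage 1 prunes with the short hash; stage 2 confirms with the full hash.
          q_short, q_full = short_hash(query), full_hash(query)
          hits = []
          for name, img in database.items():
              if hamming(q_short, short_hash(img)) <= short_thresh:
                  if hamming(q_full, full_hash(img)) <= full_thresh:
                      hits.append(name)
          return hits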

  13. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators and combining their respective features into a new visualization toolkit called VTK-m.

  14. Multiple burn fuel-optimal orbit transfers: Numerical trajectory computation and neighboring optimal feedback guidance

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.

    1995-01-01

    This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.

  15. Effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders among computer workers: a randomized controlled trial.

    PubMed

    Esmaeilzadeh, Sina; Ozcan, Emel; Capan, Nalan

    2014-01-01

    The aim of the study was to determine effects of ergonomic intervention on work-related upper extremity musculoskeletal disorders (WUEMSDs) among computer workers. Four hundred computer workers answered a questionnaire on work-related upper extremity musculoskeletal symptoms (WUEMSS). Ninety-four subjects with WUEMSS using computers at least 3 h a day participated in a prospective, randomized controlled 6-month intervention. Body posture and workstation layouts were assessed by the Ergonomic Questionnaire. We used the Visual Analogue Scale to assess the intensity of WUEMSS. The Upper Extremity Function Scale was used to evaluate functional limitations at the neck and upper extremities. Health-related quality of life was assessed with the Short Form-36. After baseline assessment, those in the intervention group participated in a multicomponent ergonomic intervention program including a comprehensive ergonomic training consisting of two interactive sessions, an ergonomic training brochure, and workplace visits with workstation adjustments. Follow-up assessment was conducted after 6 months. In the intervention group, body posture (p < 0.001) and workstation layout (p = 0.002) improved over 6 months; furthermore, intensity (p < 0.001), duration (p < 0.001), and frequency (p = 0.009) of WUEMSS decreased significantly in the intervention group compared with the control group. Additionally, the functional status (p = 0.001), and physical (p < 0.001), and mental (p = 0.035) health-related quality of life improved significantly compared with the controls. There was no improvement of work day loss due to WUEMSS (p > 0.05). Ergonomic intervention programs may be effective in reducing ergonomic risk factors among computer workers and consequently in the secondary prevention of WUEMSDs.

  16. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  17. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  18. Explosive compaction of aluminum oxide modified by multiwall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Buzyurkin, A. E.; Kraus, E. I.; Lukyanov, Ya L.

    2018-04-01

    This paper presents experiments and numerical research on explosive compaction of aluminum oxide powder modified by multiwall carbon nanotubes (MWCNT), together with modeling of the stress state behind the shock front under shock loading. The aim of this study was to obtain a durable low-porosity compact sample. Explosive compaction is used for this problem because aluminum oxide is an extremely hard and refractory material, so its compaction by traditional methods requires special equipment and considerable expense.

  19. Home Health Care for Chronically Ill Children: Hearing before the Committee on Labor and Human Resources, United States Senate, Ninety-Ninth Congress, First Session on Examining the Needs for Pediatric Home Care for Children with Long-Term Illnesses and Disabilities.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Labor and Human Resources.

    The proceedings of the 1985 hearing address issues in pediatric home care for children with long-term illnesses and disabilities. Statements of parents center on extreme expenses of home care and the difficulties of finding financial aid. Additional testimony is offered by representatives of home health care agencies, physicians involved in care…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulchin, Yu N; Mayor, A Yu; Proschenko, D Yu

    Specific features of modification of a new photorecording material based on PMMA doped with 2,2-difluoro-4-(9-anthracyl)-6-methyl-1,3,2-dioxaborine are studied. The recording of the filament distribution in the studied material occurs at the expense of two-photon photochemical processes. The three-dimensional modification of the material is achieved in the filamentation regime without supercontinuum generation. It is possible to order the volume structure by preliminary photo-modification of the near-surface layer of the material. (extreme light fields and their applications)

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  2. Loosely Coupled GPS-Aided Inertial Navigation System for Range Safety

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, Raymond J.

    2010-01-01

    The Autonomous Flight Safety System (AFSS) aims to replace the human element of range safety operations, as well as reduce reliance on expensive, downrange assets for launches of expendable launch vehicles (ELVs). The system consists of multiple navigation sensors and flight computers that provide a highly reliable platform. It is designed to ensure that single-event failures in a flight computer or sensor will not bring down the whole system. The flight computer uses a rules-based structure derived from range safety requirements to make decisions whether or not to destroy the rocket.

  3. Application of the System Identification Technique to Goal-Directed Saccades.

    DTIC Science & Technology

    1984-07-30

    1983 to May 31, 1984 by the AFOSR under Grant No. AFOSR-83-0187. 1. Salaries & Wages: $7,257. 2. Employee Benefits: $4,186. 3. Indirect Costs: $1,177. 4. Equipment (DEC VT100 computer terminal, terminal table and chair, computer interface): $2,127. 5. Travel: $672. 6. Miscellaneous Expenses (computer costs, telephone, xeroxing, report costs): $281. Total: $12,000.

  4. The nonequilibrium quantum many-body problem as a paradigm for extreme data science

    NASA Astrophysics Data System (ADS)

    Freericks, J. K.; Nikolić, B. K.; Frieder, O.

    2014-12-01

    Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
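
    A two-line illustration of the scale problem described here (a back-of-the-envelope sketch, not taken from the paper): for a lattice of N spin-1/2 sites the Hilbert-space dimension is 2^N, so merely storing one complex state vector outgrows any machine well before N = 50.

      # Hilbert-space dimension for N spin-1/2 lattice sites grows as 2**N.
      for n_sites in (20, 30, 40, 50):
          dim = 2 ** n_sites
          terabytes = dim * 16 / 1e12    # one complex128 amplitude per basis state
          print(f"{n_sites} sites: dim = {dim:.2e}, state vector ~ {terabytes:,.1f} TB")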

  5. GPSS/360 computer models to simulate aircraft passenger emergency evacuations.

    DOT National Transportation Integrated Search

    1972-09-01

    Live tests of emergency evacuation of transport aircraft are becoming increasingly expensive as the planes grow to a size seating hundreds of passengers. Repeated tests, to cope with random variations, increase these costs, as well as risks of injuri...

  6. Another View of "PC vs. Mac."

    ERIC Educational Resources Information Center

    DeMillion, John A.

    1998-01-01

    An article by Nan Wodarz in the November 1997 issue listed reasons why the Microsoft computer operating system was superior to the Apple Macintosh platform. This rebuttal contends the Macintosh is less expensive, lasts longer, and requires less technical staff for support. (MLF)

  7. Experimental CAD Course Uses Low-Cost Systems.

    ERIC Educational Resources Information Center

    Wohlers, Terry

    1984-01-01

    Describes the outstanding results obtained when a department of industrial sciences used special software on microcomputers to teach computer-aided design (CAD) as an alternative to much more expensive equipment. The systems used and prospects for the future are also considered. (JN)

  8. A simple parameterization of aerosol emissions in RAMS

    NASA Astrophysics Data System (ADS)

    Letcher, Theodore

    Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered a lot of attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was its computational expense; in fact, the expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-Chem simulations. While SA makes up a substantial portion of the total aerosol burden (much of it organic material), the representation of this process is highly complex and expensive within a numerical model. Furthermore, SA formation is greatly reduced during the winter months due to the lack of naturally produced organic VOCs. For these reasons, neglecting SA formation within the model was judged the best course of action. The parameterization uses a prescribed source map to add aerosol to the model at two vertical levels that surround an arbitrary height decided by the user. To best represent the real world, the WRF Chemistry model was run using the National Emissions Inventory (NEI2005) to represent anthropogenic emissions and the Model of Emissions of Gases and Aerosols from Nature (MEGAN) to represent natural contributions to aerosol. WRF Chemistry was run for one hour, after which the aerosol output, along with the hygroscopicity parameter (κ), was saved into a data file that can be interpolated to an arbitrary RAMS grid. The comparison of this parameterization to observations collected at Mesa Verde National Park (MVNP) during the Inhibition of Snowfall from Pollution Aerosol (ISPA-III) field campaign yielded promising results. The model was able to simulate the variability in near-surface aerosol concentration with reasonable accuracy, though with a general low bias. Furthermore, this model compared much better to the observations than did the WRF Chemistry model, at a fraction of the computational expense.
This emissions scheme produced reasonable aerosol concentrations and can therefore be used to provide an estimate of the seasonal impact of increased CCN on water resources in Western Colorado with relatively low computational expense.
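
    A minimal sketch of how such an emissions step might look (Python; the function names and the linear height weighting are assumptions for illustration, as the abstract does not give these details):

      import numpy as np

      def add_surface_emissions(ccn, z_levels, source_map, z_emit, dt):
          """Hypothetical sketch of the described scheme: deposit a prescribed
          emission rate (per grid column) into the two model levels bracketing
          the user-chosen emission height z_emit, weighted by proximity.
          Assumes z_levels[0] < z_emit < z_levels[-1].
          ccn        : 3D array (nz, ny, nx), aerosol number concentration
          z_levels   : 1D array (nz,), model level heights [m]
          source_map : 2D array (ny, nx), emission rates [# m^-3 s^-1]"""
          k_hi = int(np.searchsorted(z_levels, z_emit))  # first level above z_emit
          k_lo = k_hi - 1
          # Linear weights: the level closer to z_emit receives more of the flux.
          w_hi = (z_emit - z_levels[k_lo]) / (z_levels[k_hi] - z_levels[k_lo])
          ccn[k_lo] += (1.0 - w_hi) * source_map * dt
          ccn[k_hi] += w_hi * source_map * dt
          return ccn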

  9. Trends in atmospheric patterns conducive to seasonal precipitation and temperature extremes in California.

    PubMed

    Swain, Daniel L; Horton, Daniel E; Singh, Deepti; Diffenbaugh, Noah S

    2016-04-01

    Recent evidence suggests that changes in atmospheric circulation have altered the probability of extreme climate events in the Northern Hemisphere. We investigate northeastern Pacific atmospheric circulation patterns that have historically (1949-2015) been associated with cool-season (October-May) precipitation and temperature extremes in California. We identify changes in occurrence of atmospheric circulation patterns by measuring the similarity of the cool-season atmospheric configuration that occurred in each year of the 1949-2015 period with the configuration that occurred during each of the five driest, wettest, warmest, and coolest years. Our analysis detects statistically significant changes in the occurrence of atmospheric patterns associated with seasonal precipitation and temperature extremes. We also find a robust increase in the magnitude and subseasonal persistence of the cool-season West Coast ridge, resulting in an amplification of the background state. Changes in both seasonal mean and extreme event configurations appear to be caused by a combination of spatially nonuniform thermal expansion of the atmosphere and reinforcing trends in the pattern of sea level pressure. In particular, both thermal expansion and sea level pressure trends contribute to a notable increase in anomalous northeastern Pacific ridging patterns similar to that observed during the 2012-2015 California drought. Collectively, our empirical findings suggest that the frequency of atmospheric conditions like those during California's most severely dry and hot years has increased in recent decades, but not necessarily at the expense of patterns associated with extremely wet years.
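
    The abstract does not spell out the similarity metric; a centered pattern correlation against the extreme-year fields is a common choice and is used in the sketch below (Python, assumptions throughout) to show the kind of year-by-year similarity series whose trend such a study examines.

      import numpy as np

      def pattern_correlation(field_a, field_b):
          """Centered spatial (pattern) correlation between two 2D anomaly
          fields, e.g. cool-season-mean sea level pressure over the NE Pacific."""
          a = field_a - field_a.mean()
          b = field_b - field_b.mean()
          return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

      def similarity_series(yearly_fields, extreme_years):
          """For each year, the mean similarity to the set of extreme years
          (e.g. the five driest); a trend in this series indicates a change in
          occurrence of the associated circulation pattern."""
          return {yr: np.mean([pattern_correlation(f, yearly_fields[e])
                               for e in extreme_years if e != yr])
                  for yr, f in yearly_fields.items()}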

  10. Trends in atmospheric patterns conducive to seasonal precipitation and temperature extremes in California

    PubMed Central

    Swain, Daniel L.; Horton, Daniel E.; Singh, Deepti; Diffenbaugh, Noah S.

    2016-01-01

    Recent evidence suggests that changes in atmospheric circulation have altered the probability of extreme climate events in the Northern Hemisphere. We investigate northeastern Pacific atmospheric circulation patterns that have historically (1949–2015) been associated with cool-season (October-May) precipitation and temperature extremes in California. We identify changes in occurrence of atmospheric circulation patterns by measuring the similarity of the cool-season atmospheric configuration that occurred in each year of the 1949–2015 period with the configuration that occurred during each of the five driest, wettest, warmest, and coolest years. Our analysis detects statistically significant changes in the occurrence of atmospheric patterns associated with seasonal precipitation and temperature extremes. We also find a robust increase in the magnitude and subseasonal persistence of the cool-season West Coast ridge, resulting in an amplification of the background state. Changes in both seasonal mean and extreme event configurations appear to be caused by a combination of spatially nonuniform thermal expansion of the atmosphere and reinforcing trends in the pattern of sea level pressure. In particular, both thermal expansion and sea level pressure trends contribute to a notable increase in anomalous northeastern Pacific ridging patterns similar to that observed during the 2012–2015 California drought. Collectively, our empirical findings suggest that the frequency of atmospheric conditions like those during California’s most severely dry and hot years has increased in recent decades, but not necessarily at the expense of patterns associated with extremely wet years. PMID:27051876

  11. Spiking Excitable Semiconductor Laser as Optical Neurons: Dynamics, Clustering and Global Emerging Behaviors

    DTIC Science & Technology

    2014-06-28

    constructed from inexpensive semiconductor lasers could lead to the development of novel neuro-inspired optical computing devices (threshold detectors, logic gates, signal recognition, etc.). Other topics of research included the analysis of extreme events in... Extreme events are nowadays a highly active field of research. Rogue waves, earthquakes of high magnitude and financial crises are all rare and

  12. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.

  13. GPU Accelerated Prognostics

    NASA Technical Reports Server (NTRS)

    Gorospe, George E., Jr.; Daigle, Matthew J.; Sankararaman, Shankar; Kulkarni, Chetan S.; Ng, Eley

    2017-01-01

    Prognostic methods enable operators and maintainers to predict the future performance for critical systems. However, these methods can be computationally expensive and may need to be performed each time new information about the system becomes available. In light of these computational requirements, we have investigated the application of graphics processing units (GPUs) as a computational platform for real-time prognostics. Recent advances in GPU technology have reduced cost and increased the computational capability of these highly parallel processing units, making them more attractive for the deployment of prognostic software. We present a survey of model-based prognostic algorithms with considerations for leveraging the parallel architecture of the GPU and a case study of GPU-accelerated battery prognostics with computational performance results.

  14. Deep Learning @15 Petaflops/second: Semi-supervised pattern detection for 15 Terabytes of climate data

    NASA Astrophysics Data System (ADS)

    Collins, W. D.; Wehner, M. F.; Prabhat, M.; Kurth, T.; Satish, N.; Mitliagkas, I.; Zhang, J.; Racah, E.; Patwary, M.; Sundaram, N.; Dubey, P.

    2017-12-01

    Anthropogenically-forced climate changes in the number and character of extreme storms have the potential to significantly impact human and natural systems. Current high-performance computing enables multidecadal simulations with global climate models at resolutions of 25km or finer. Such high-resolution simulations are demonstrably superior in simulating extreme storms such as tropical cyclones than the coarser simulations available in the Coupled Model Intercomparison Project (CMIP5) and provide the capability to more credibly project future changes in extreme storm statistics and properties. The identification and tracking of storms in the voluminous model output is very challenging, as it is impractical to manually identify storms due to the enormous size of the datasets, and therefore automated procedures are used. Traditionally, these procedures are based on a multi-variate set of physical conditions derived from known properties of the class of storms in question. In recent years, we have successfully demonstrated that Deep Learning produces state-of-the-art results for pattern detection in climate data. We have developed supervised and semi-supervised convolutional architectures for detecting and localizing tropical cyclones, extra-tropical cyclones and atmospheric rivers in simulation data. One of the primary challenges in the applicability of Deep Learning to climate data is the expensive training phase. Typical networks may take days to converge on 10GB-sized datasets, while the climate science community has ready access to O(10 TB)-O(PB) sized datasets. In this work, we present the most scalable implementation of Deep Learning to date. We successfully scale a unified, semi-supervised convolutional architecture on all of the Cori Phase II supercomputer at NERSC. We use IntelCaffe, MKL and MLSL libraries. We have optimized single node MKL libraries to obtain 1-4 TF on single KNL nodes. We have developed a novel hybrid parameter update strategy to improve scaling to 9600 KNL nodes (600,000 cores). We obtain 15PF performance over the course of the training run, setting a new high-water mark for the HPC and Deep Learning communities. This talk will share insights on how to obtain this extreme level of performance, current gaps/challenges and implications for the climate science community.

  15. Intensity changes in future extreme precipitation: A statistical event-based approach.

    NASA Astrophysics Data System (ADS)

    Manola, Iris; van den Hurk, Bart; de Moel, Hans; Aerts, Jeroen

    2017-04-01

    Short-lived precipitation extremes are often responsible for hazards in urban and rural environments, with economic and environmental consequences. Precipitation intensity is expected to increase about 7% per degree of warming, according to the Clausius-Clapeyron (CC) relation. However, observations often show a much stronger increase in sub-daily values. In particular, the behavior of hourly summer precipitation from radar observations with dew point temperature (the Pi-Td relation) for the Netherlands suggests that for moderate to warm days the intensification of precipitation can exceed 21% per degree of warming, that is, 3 times the expected CC rate. The rate of change depends on the initial precipitation intensity: low percentiles increase at a rate below CC, medium percentiles at 2CC, and moderate-high and high percentiles at 3CC. This non-linear statistical Pi-Td relation is suggested as a delta-transformation to project how a historic extreme precipitation event would intensify under future, warmer conditions. Here, the Pi-Td relation is applied to a selected historic extreme precipitation event to 'up-scale' its intensity to warmer conditions. Additionally, the selected historic event is simulated in the high-resolution, convection-permitting weather model Harmonie, with the initial and boundary conditions altered to represent future conditions. The comparison between the statistical and the numerical method of projecting the historic event to future conditions showed comparable intensity changes, which, depending on the initial percentile intensity, range from below CC to 3CC per degree of warming. The model tends to overestimate the future intensities for the low and the very high percentiles, and the clouds are somewhat displaced due to small wind and convection changes. The total spatial cloud coverage remains unchanged in both the model and the statistical method. The advantage of the suggested Pi-Td method of projecting future precipitation events from historic events is that it is simple to use and less expensive in time, computation, and resources than a numerical model. The outcome can be used directly for hydrological and climatological studies and for impact analyses such as flood risk assessments.
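
    A hedged sketch of the delta-transformation idea (Python; the percentile bands for the 1CC/2CC/3CC multiples and the exact functional form are assumptions for illustration, not the paper's calibrated Pi-Td relation):

      import numpy as np

      CC_RATE = 0.07  # Clausius-Clapeyron: ~7% intensity increase per degree

      def pi_td_delta_transform(precip, delta_td, rate_of_pct):
          """Scale each precipitation value by a percentile-dependent CC
          multiple, compounded per degree of dew-point warming delta_td.
          precip      : 1D array of hourly intensities from a historic event
          rate_of_pct : maps a percentile (0-100) to a CC multiple (~1 for
                        low, ~2 for medium, ~3 for high percentiles)"""
          ranks = np.argsort(np.argsort(precip))
          pct = 100.0 * (ranks + 0.5) / precip.size
          cc_multiple = np.vectorize(rate_of_pct)(pct)
          return precip * (1.0 + cc_multiple * CC_RATE) ** delta_td

      # Example step function for the CC multiple; the 50/90 cutoffs are
      # invented for this sketch.
      example_rate = lambda p: 1.0 if p < 50 else (2.0 if p < 90 else 3.0)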

  16. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available Python package.
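
    The online stage maps directly to a few lines of code. The sketch below (illustrative, not the GWSurrogate implementation) assumes the offline products described above: an empirical-interpolant basis evaluated at the requested output times and one fitted coefficient function per empirical time node.

      import numpy as np

      def evaluate_surrogate(theta, basis_at_times, coef_fits):
          """Online surrogate evaluation (sketch).
          basis_at_times : (L, m) empirical-interpolant basis evaluated at the
                           L requested output times (precomputed offline)
          coef_fits      : list of m fitted functions; coef_fits[j](theta)
                           predicts the waveform at the j-th empirical node
          Cost: m fit evaluations plus one (L x m) matrix-vector product,
          i.e. O(mL + m c_fit) as quoted in the abstract."""
          coeffs = np.array([fit(theta) for fit in coef_fits])  # m c_fit ops
          return basis_at_times @ coeffs                        # O(mL) ops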

  17. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, conventional inverse modeling methods can be computationally expensive because the observed data sets are often large and the model parameters are numerous. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.

  18. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  19. A weighted exact test for mutually exclusive mutations in cancer

    PubMed Central

    Leiserson, Mark D.M.; Reyna, Matthew A.; Raphael, Benjamin J.

    2016-01-01

    Motivation: The somatic mutations in the pathways that drive cancer development tend to be mutually exclusive across tumors, providing a signal for distinguishing driver mutations from a larger number of random passenger mutations. This mutual exclusivity signal can be confounded by high and highly variable mutation rates across a cohort of samples. Current statistical tests for exclusivity that incorporate both per-gene and per-sample mutational frequencies are computationally expensive and have limited precision. Results: We formulate a weighted exact test for assessing the significance of mutual exclusivity in an arbitrary number of mutational events. Our test conditions on the number of samples with a mutation as well as per-event, per-sample mutation probabilities. We provide a recursive formula to compute P-values for the weighted test exactly as well as a highly accurate and efficient saddlepoint approximation of the test. We use our test to approximate a commonly used permutation test for exclusivity that conditions on per-event, per-sample mutation frequencies. However, our test is more efficient and it recovers more significant results than the permutation test. We use our Weighted Exclusivity Test (WExT) software to analyze hundreds of colorectal and endometrial samples from The Cancer Genome Atlas, which are two cancer types that often have extremely high mutation rates. On both cancer types, the weighted test identifies sets of mutually exclusive mutations in cancer genes with fewer false positives than earlier approaches. Availability and Implementation: See http://compbio.cs.brown.edu/projects/wext for software. Contact: braphael@cs.brown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587696
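
    For orientation, here is the simple unweighted permutation baseline that tests like this improve upon (a sketch conditioning only on per-event mutation counts; WExT itself additionally conditions on per-event, per-sample probabilities and computes exact or saddlepoint-approximated P-values):

      import numpy as np

      rng = np.random.default_rng(0)

      def exclusivity_stat(M):
          """Number of samples covered by exactly one mutated event
          (rows are events, columns are samples); high values indicate
          mutual exclusivity."""
          return int((M.sum(axis=0) == 1).sum())

      def permutation_pvalue(M, n_perm=10_000):
          """Shuffle each event's mutations across samples (preserving
          per-event frequencies) and compare the observed statistic
          against the permutation null."""
          observed = exclusivity_stat(M)
          null_ge = 0
          for _ in range(n_perm):
              P = np.array([rng.permutation(row) for row in M])
              null_ge += exclusivity_stat(P) >= observed
          return (1 + null_ge) / (1 + n_perm)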

  20. RFDT: A Rotation Forest-based Predictor for Predicting Drug-Target Interactions Using Drug Structure and Protein Sequence Information.

    PubMed

    Wang, Lei; You, Zhu-Hong; Chen, Xing; Yan, Xin; Liu, Gang; Zhang, Wei

    2018-01-01

    Identification of interactions between drugs and target proteins plays an important role in discovering new drug candidates. However, identifying drug-target interactions experimentally remains extremely time-consuming, expensive and challenging even nowadays. Therefore, it is urgent to develop new computational methods to predict potential drug-target interactions (DTI). In this article, a novel computational model is developed for predicting potential drug-target interactions on the premise that each drug-target interaction pair can be represented by structural properties of the drug and evolutionary information derived from the protein. Specifically, the protein sequences are encoded as a Position-Specific Scoring Matrix (PSSM) descriptor, which contains biological evolutionary information, and the drug molecules are encoded as fingerprint feature vectors representing the existence of certain functional groups or fragments. Four benchmark datasets involving enzymes, ion channels, GPCRs and nuclear receptors are independently used to establish predictive models with the Rotation Forest (RF) model. The proposed method achieved prediction accuracies of 91.3%, 89.1%, 84.1% and 71.1% for the four datasets, respectively. To make our method more persuasive, we compared our classifier with the state-of-the-art Support Vector Machine (SVM) classifier, as well as with other excellent methods. Experimental results demonstrate that the proposed method is effective in the prediction of DTI and can provide assistance for new drug research and development. Copyright © Bentham Science Publishers.
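
    A minimal stand-in pipeline (Python/scikit-learn, which has no Rotation Forest, so RandomForestClassifier is used here as a named placeholder; the data are synthetic and all dimensions are assumptions of this sketch):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def pair_features(pssm_vec, fingerprint):
          """A drug-target pair is the concatenation of a PSSM-derived protein
          descriptor and a drug fingerprint vector (both assumed precomputed)."""
          return np.concatenate([pssm_vec, fingerprint])

      # Synthetic stand-in: 200 pairs, 400 PSSM features + 300 fingerprint bits.
      X = np.array([pair_features(rng.standard_normal(400),
                                  rng.integers(0, 2, 300)) for _ in range(200)])
      y = rng.integers(0, 2, 200)   # 1 = interacting pair, 0 = non-interacting

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data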

  1. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models (GCMs) while remaining viable, in terms of computational resources, for climate-length time scales. The multi-scale modeling framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale 'Super-Parameterized' CESM, where large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on the detection and evaluation of extreme precipitation events.

  2. Fluid Simulation in the Movies: Navier and Stokes Must Be Circulating in Their Graves

    NASA Astrophysics Data System (ADS)

    Tessendorf, Jerry

    2010-11-01

    Fluid simulations based on the Incompressible Navier-Stokes equations are commonplace computer graphics tools in the visual effects industry. These simulations mostly come from custom C++ code written by the visual effects companies. Their significant impact in films was recognized in 2008 with Academy Awards to four visual effects companies for their technical achievement. However, artists are not fluid dynamicists, and fluid dynamics simulations are expensive to use in a deadline-driven production environment. As a result, the simulation algorithms are modified to limit computational cost, adapt them to the production workflow, and respect the client's vision of the film plot. Eulerian solvers on fixed rectangular grids use a mix of momentum solvers, including Semi-Lagrangian, FLIP, and QUICK. Incompressibility is enforced with FFT, Conjugate Gradient, and Multigrid methods. For liquids, a levelset field tracks the free surface. Smoothed Particle Hydrodynamics is also used, and is part of a hybrid Eulerian-SPH liquid simulator. Artists use all of them in a mix-and-match fashion to control the appearance of the simulation. Specially designed forces and boundary conditions control the flow. The simulation can be an input to artistically driven procedural particle simulations that enhance the flow with more detail and drama. Post-simulation processing increases the visual detail beyond the grid resolution. Ultimately, iterative simulation methods that fit naturally in the production workflow are extremely desirable but not yet successful. Results from some efforts toward iterative methods are shown, and other approaches motivated by the history of production are proposed.

  3. A Finite Element Projection Method for the Solution of Particle Transport Problems with Anisotropic Scattering.

    DTIC Science & Technology

    1984-07-01

    piecewise constant energy dependence. This is a seven-dimensional problem with time dependence, three spatial and two angular or directional variables and... in extending the computer implementation of the method to time and energy dependent problems, and to solving and validating this technique on a... problems they have severe limitations. The Monte Carlo method usually requires the use of many hours of expensive computer time, and for deep

  4. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  5. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  6. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
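
    The cost argument can be verified on a toy discrete model: for A u = b(p) with scalar output J = c^T u, a single adjoint solve A^T lam = c yields dJ/dp_i = (db/dp_i)^T lam for all parameters at once, versus one forward solve per parameter for finite differences. The sketch below is illustrative only; the linear model and names are invented, not NASA Langley's solver.

      import numpy as np

      n, n_params = 50, 1000
      rng = np.random.default_rng(1)
      A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy forward operator
      c = rng.standard_normal(n)                          # output functional
      dbdp = rng.standard_normal((n, n_params))           # db/dp, one column each

      lam = np.linalg.solve(A.T, c)    # ONE adjoint solve...
      grad = dbdp.T @ lam              # ...gives all 1000 sensitivities at once

      # Check one component against a finite difference of the forward model.
      b = lambda p: dbdp @ p
      J = lambda p: c @ np.linalg.solve(A, b(p))
      i, eps = 7, 1e-6
      fd = (J(eps * np.eye(n_params)[i]) - J(np.zeros(n_params))) / eps
      assert np.isclose(grad[i], fd, rtol=1e-4, atol=1e-8)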

  7. Catastrophe Finance: An Emerging Discipline

    NASA Astrophysics Data System (ADS)

    Elsner, James B.; Burch, R. King; Jagger, Thomas H.

    2009-08-01

    While the recent disasters in the world's financial markets demonstrate that finance theory remains far from perfected, science also faces steep challenges in the quest to predict and manage the effects of natural disasters. Worldwide, as many as half a million people have died in disasters such as earthquakes, tsunamis, and tropical cyclones since the turn of the 21st century [Wirtz, 2008]. Further, natural disasters can lead to extreme financial losses, and independent financial collapses can be exacerbated by natural disasters. In financial cost, 2008 was the second most expensive year on record for such catastrophes and for financial market declines. These extreme events in the natural and financial realms push the issue of risk management to the fore, expose the deficiencies of existing knowledge and practice, and suggest that progress requires further research and training at the graduate level.

  8. Alloy Design Data Generated for B2-Ordered Compounds

    NASA Technical Reports Server (NTRS)

    Noebe, Ronald D.; Bozzolo, Guillermo; Abel, Phillip B.

    2003-01-01

    Developing alloys based on ordered compounds is significantly more complicated than developing designs based on disordered materials. In ordered compounds, the major constituent elements reside on particular sublattices. Therefore, the addition of a ternary element to a binary-ordered compound is complicated by the manner in which the ternary addition is made (at the expense of which binary component). When ternary additions are substituted for the wrong constituent, the physical and mechanical properties usually degrade. In some cases the resulting degradation in properties can be quite severe. For example, adding alloying additions to NiAl in the wrong combination (i.e., alloying additions that prefer the Al sublattice but are added at the expense of Ni) will severely embrittle the alloy to the point that it can literally fall apart during processing on cooling from the molten state. Consequently, alloying additions that strongly prefer one sublattice over another should always be added at the expense of that component during alloy development. Elements that have a very weak preference for a sublattice can usually be safely added at the expense of either element and will accommodate any deviation from stoichiometry by filling in for the deficient component. Unfortunately, this type of information is not known beforehand for most ordered systems. Therefore, a computational survey study, using a recently developed quantum approximate method, was undertaken at the NASA Glenn Research Center to determine the preferred site occupancy of ternary alloying additions to 12 different B2-ordered compounds including NiAl, FeAl, CoAl, CoFe, CoHf, CoTi, FeTi, RuAl, RuSi, RuHf, RuTi, and RuZr. Some of these compounds are potential high temperature structural alloys; others are used in thin-film magnetic and other electronic applications. The results are summarized. The italicized elements represent the previous sum total alloying information known and verify the computational method used to establish the table. Details of the computational procedures used to determine the preferred site occupancy can be found in reference 2. As further substantiation of the validity of the technique, and its extension to even more complicated systems, it was applied to two simultaneous alloying additions in an ordered alloy.

  9. SIMULATING ATMOSPHERIC EXPOSURE IN A NATIONAL RISK ASSESSMENT USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...

  10. COMPETITIVE METAGENOMIC DNA HYBRIDIZATION IDENTIFIES HOST-SPECIFIC GENETIC MARKERS IN HUMAN FECAL MICROBIAL COMMUNITIES

    EPA Science Inventory

    Although recent technological advances in DNA sequencing and computational biology now allow scientists to compare entire microbial genomes, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for mo...

  11. Desktop Publishing for Counselors.

    ERIC Educational Resources Information Center

    Lucking, Robert; Mitchum, Nancy

    1990-01-01

    Discusses the fundamentals of desktop publishing for counselors, including hardware and software systems and peripherals. Notes by using desktop publishing, counselors can produce their own high-quality documents without the expense of commercial printers. Concludes computers present a way of streamlining the communications of a counseling…

  12. Use of off-the-shelf PC-based flight simulators for aviation human factors research.

    DOT National Transportation Integrated Search

    1996-04-01

    Flight simulation has historically been an expensive proposition, particularly if out-the-window views were desired. Advances in computer technology have allowed a modular, off-the-shelf flight simulation (based on 80486 processors or Pentiums) to be...

  13. Software Prototyping: Designing Systems for Users.

    ERIC Educational Resources Information Center

    Spies, Phyllis Bova

    1983-01-01

    Reports on major change in computer software development process--the prototype model, i.e., implementation of skeletal system that is enhanced during interaction with users. Expensive and unreliable software, software design errors, traditional development approach, resources required for prototyping, success stories, and systems designer's role…

  14. Identification of Bacterial DNA Markers for the Detection of Human and Cattle Fecal Pollution - SLIDES

    EPA Science Inventory

    Technological advances in DNA sequencing and computational biology allow scientists to compare entire microbial genomes. However, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for most laborato...

  15. IDENTIFICATION OF BACTERIAL DNA MARKERS FOR THE DETECTION OF HUMAN AND CATTLE FECAL POLLUTION

    EPA Science Inventory

    Technological advances in DNA sequencing and computational biology allow scientists to compare entire microbial genomes. However, the use of these approaches to discern key genomic differences between natural microbial communities remains prohibitively expensive for most laborato...

  16. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...

  17. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence, wind turbine LES simulations require computationally expensive precursor simulations; at times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, and, as stated above, turbulence impacts the power production estimation. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimensional models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations while maintaining a high level of turbulent information and allowing the turbulent inflow to be applied quickly to multi-turbine wind farms. This is done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimensional models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured, with small errors.

  18. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by more computationally expensive codes tested over shorter time periods.
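
    For reference, the skill score quoted here is straightforward to compute (standard definition, not code from PRMSglacier):

      import numpy as np

      def nash_sutcliffe(simulated, observed):
          """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 means the model
          is no better than predicting the observed mean (values quoted above:
          0.87 and 0.86 for daily streamflow)."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) \
                     / np.sum((observed - observed.mean()) ** 2)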

  19. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm, as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35-bar truss with 4 design variables was considered.
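
    A tiny concrete instance of an under-determined approximation (Python; the function and sample points are invented for illustration): ten polynomial coefficients fit to only four training pairs, with lstsq returning the minimum-norm coefficient vector among the infinitely many that reproduce the expensive "functional evaluations" exactly.

      import numpy as np

      x_train = np.array([0.0, 0.3, 0.7, 1.0])        # 4 training inputs
      y_train = np.sin(2 * np.pi * x_train)            # 4 expensive evaluations
      V = np.vander(x_train, N=10, increasing=True)    # 4 equations, 10 unknowns

      coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)  # min-norm solution
      print(V @ coeffs - y_train)   # ~0: training pairs reproduced exactly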

  20. Self-energy matrices for electron transport calculations within the real-space finite-difference formalism

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji; Blügel, Stefan

    2017-03-01

    The self-energy term used in transport calculations, which describes the coupling between electrode and transition regions, can be evaluated from only a limited number of the propagating and evanescent waves of a bulk electrode. This by itself reduces the computational expense of transport calculations. In this paper, we present a mathematical formulation for reducing the computational expense further, without any approximation and without losing accuracy. So far, the self-energy term has been handled as a matrix with the same dimension as the Hamiltonian submatrix representing the interaction between an electrode and a transition region. In this work, through the singular-value decomposition of that submatrix, the self-energy matrix is handled as a smaller matrix whose dimension is the rank of the Hamiltonian submatrix. This procedure is practical when the pseudopotentials are used in a separable form, and the computational expense of determining the self-energy matrix is reduced by 90% when employing a code based on the real-space finite-difference formalism and the projector-augmented wave method. In addition, this technique is applicable to transport calculations using atomic or localized basis sets. Adopting the self-energy matrices obtained from this procedure, we present the calculation of the electron transport properties of C20 molecular junctions. The application demonstrates that the electron transmissions are sensitive to the orientation of the molecule with respect to the electrode surface. In addition, channel decomposition of the scattering wave functions reveals that some unoccupied C20 molecular orbitals mainly contribute to the electron conduction through the molecular junction.
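
    The rank-reduction step can be sketched in a few lines (illustrative only; the paper works with real-space finite-difference operators rather than the dense toy block used here):

      import numpy as np

      def reduce_coupling(H_coupling, tol=1e-12):
          """SVD-based dimension reduction of the electrode-transition coupling
          block: the block usually has low rank r, so the self-energy can be
          stored and applied in the r-dimensional subspace."""
          U, s, Vh = np.linalg.svd(H_coupling, full_matrices=False)
          r = int((s > tol * s[0]).sum())          # numerical rank
          return U[:, :r] * s[:r], Vh[:r]          # H_coupling ~= L @ Vh_r

      # With H_coupling = L @ Vh_r, a self-energy of the schematic form
      # Sigma = H_coupling @ g_surface @ H_coupling.conj().T can be assembled
      # from an r x r core instead of the full block (sketch, not the paper's code).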

  1. MaRIE theory, modeling and computation roadmap executive summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lookman, Turab

    The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail for each of these areas the current state of the art, the gaps that exist and the roadmap to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science areas and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.

  2. Capabilities of a Global 3D MHD Model for Monitoring Extremely Fast CMEs

    NASA Astrophysics Data System (ADS)

    Wu, C. C.; Plunkett, S. P.; Liou, K.; Socker, D. G.; Wu, S. T.; Wang, Y. M.

    2015-12-01

    Since the start of the space era, spacecraft have recorded many extremely fast coronal mass ejections (CMEs) which have resulted in severe geomagnetic storms. Accurate and timely forecasting of the space weather effects of these events is important for protecting expensive space assets and astronauts and avoiding communications interruptions. Here, we will introduce a newly developed global, three-dimensional (3D) magnetohydrodynamic (MHD) model (G3DMHD). The model takes the solar magnetic field maps at 2.5 solar radii (Rs) and interpolates the solar wind plasma and field out to 18 Rs using the algorithm of Wang and Sheeley (1990, JGR). The output is used as the inner boundary condition for a 3D MHD model. The G3DMHD model is capable of simulating (i) extremely fast CME events with propagation speeds faster than 2500 km/s; and (ii) multiple CME events in sequence or simultaneously. We will demonstrate the simulation results (and comparison with in-situ observations) for the fastest CME on record, on 23 July 2012; the event with the shortest transit time, in March 1976; and the well-known historic Carrington event of 1859.

  3. Atomistic Modeling of Pd Site Preference in NiTi

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Mosca, Hugo O.

    2004-01-01

    An analysis of the site substitution behavior of Pd in NiTi was performed using the BFS method for alloys. Through a combination of Monte Carlo simulations and detailed atom-by-atom energetic analyses of various computational cells, representing compositions of NiTi with up to 10 at% Pd, a detailed understanding of the site occupancy of Pd in NiTi was revealed. Pd substituted at the expense of Ni in a NiTi alloy will prefer the Ni-sites. Pd substituted at the expense of Ti shows a very weak preference for Ti-sites that diminishes as the amount of Pd in the alloy increases and as the temperature increases.

  4. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has had to be performed off-line because of its expense. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical time-domain observer, extended to the frequency domain for array processing. The observer-based algorithm is beneficial mainly for its capability of operating recursively over sampling blocks. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
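
    The paper's observer equations are not reproduced in the abstract; the sketch below captures only the recursive, block-by-block flavor of the idea, with an exponential-forgetting cross-spectral-matrix update feeding a conventional beamformer (all names and the forgetting factor are assumptions):

      import numpy as np

      def recursive_csm_update(csm, block_fft, alpha=0.05):
          """Blend one new block's cross-spectral outer product into a running
          estimate, instead of averaging all blocks offline; this is what lets
          the array output be refreshed block by block in real time.
          block_fft : (n_mics,) complex spectrum of one block at the frequency
                      of interest."""
          return (1.0 - alpha) * csm + alpha * np.outer(block_fft, block_fft.conj())

      def beamform_power(csm, steering):
          """Conventional frequency-domain beamformer output for one focus point."""
          w = steering / np.linalg.norm(steering)
          return float(np.real(w.conj() @ csm @ w))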

  5. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  6. A LAN Primer.

    ERIC Educational Resources Information Center

    Hazari, Sunil I.

    1991-01-01

    Local area networks (LANs) are systems of computers and peripherals connected together for the purposes of electronic mail and the convenience of sharing information and expensive resources. In planning the design of such a system, the components to consider are hardware, software, transmission media, topology, operating systems, and protocols.…

  7. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
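
    A toy illustration of the least-squares idea, reduced to a single effective velocity so the algebra is transparent; the traveltimes, depths, and one-parameter model below are hypothetical simplifications of the paper's velocity-model inversion:

    ```python
    import numpy as np

    # Hypothetical two-way traveltimes (s) of seismic horizons at the well
    # location, and the corresponding formation depths (m) from well logs.
    t_twt = np.array([0.80, 1.10, 1.45, 1.90])
    d_well = np.array([1180.0, 1660.0, 2230.0, 2960.0])

    # Depth model d = v * t/2 for a single effective velocity v; least
    # squares minimizes sum((d_well - v * t/2)**2) over v.
    A = (t_twt / 2.0)[:, None]
    v, *_ = np.linalg.lstsq(A, d_well, rcond=None)
    print(f"optimized migration velocity: {v[0]:.0f} m/s")
    ```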

  8. The curse of planning: dissecting multiple reinforcement-learning systems by taxing the central executive.

    PubMed

    Otto, A Ross; Gershman, Samuel J; Markman, Arthur B; Daw, Nathaniel D

    2013-05-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. In these accounts, a flexible but computationally expensive model-based reinforcement-learning system has been contrasted with a less flexible but more efficient model-free reinforcement-learning system. The factors governing which system controls behavior-and under what circumstances-are still unclear. Following the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrated that having human decision makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement-learning strategy. Further, we showed that, across trials, people negotiate the trade-off between the two systems dynamically as a function of concurrent executive-function demands, and people's choice latencies reflect the computational expenses of the strategy they employ. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources.
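
    A common way to formalize this competition in the reinforcement-learning literature is a softmax over a weighted mixture of model-based and model-free action values; the weight, values, and inverse temperature below are toy assumptions for illustration, not the paper's fitted model:

    ```python
    import numpy as np

    def choice_probs(q_mb, q_mf, w, beta=3.0):
        """Softmax over a weighted mix of model-based (planned) and
        model-free (cached) action values; lower w means less reliance
        on the computationally expensive planner."""
        q = w * q_mb + (1.0 - w) * q_mf
        e = np.exp(beta * (q - q.max()))
        return e / e.sum()

    q_mb = np.array([0.7, 0.3])   # toy planning-derived values
    q_mf = np.array([0.4, 0.6])   # toy habit values
    print(choice_probs(q_mb, q_mf, w=0.8))  # full resources: planner wins
    print(choice_probs(q_mb, q_mf, w=0.2))  # dual-task load: habit wins
    ```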

  9. The Curse of Planning: Dissecting multiple reinforcement learning systems by taxing the central executive

    PubMed Central

    Otto, A. Ross; Gershman, Samuel J.; Markman, Arthur B.; Daw, Nathaniel D.

    2013-01-01

    A number of accounts of human and animal behavior posit the operation of parallel and competing valuation systems in the control of choice behavior. Along these lines, a flexible but computationally expensive model-based reinforcement learning system has been contrasted with a less flexible but more efficient model-free reinforcement learning system. The factors governing which system controls behavior—and under what circumstances—are still unclear. Based on the hypothesis that model-based reinforcement learning requires cognitive resources, we demonstrate that having human decision-makers perform a demanding secondary task engenders increased reliance on a model-free reinforcement learning strategy. Further, we show that across trials, people negotiate this tradeoff dynamically as a function of concurrent executive function demands and their choice latencies reflect the computational expenses of the strategy employed. These results demonstrate that competition between multiple learning systems can be controlled on a trial-by-trial basis by modulating the availability of cognitive resources. PMID:23558545

  10. Automated combinatorial method for fast and robust prediction of lattice thermal conductivity

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    The lack of computationally inexpensive and accurate ab-initio-based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other, more expensive methodologies, but it is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test the accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
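
    A minimal sketch of a Slack-type evaluation, assuming one common form κl = A·M̄·θa³·δ/(γ²·T) with a fixed prefactor; the constant, the unit conventions, and the silicon-like inputs are illustrative assumptions, and choosing among the competing definitions of θa and γ is exactly what the paper's combinatorial search addresses:

    ```python
    def slack_kappa(M_avg, theta_a, delta, gamma, T, A=3.1e-6):
        """Slack-type lattice thermal conductivity estimate (W/m/K).

        M_avg   -- average atomic mass (amu)
        theta_a -- acoustic Debye temperature (K)
        delta   -- cube root of the volume per atom (Angstrom)
        gamma   -- Grueneisen parameter
        T       -- temperature (K)
        A       -- empirical prefactor; ~3.1e-6 is a common literature
                   choice, but it varies between implementations, as do
                   the definitions of theta_a and gamma themselves.
        """
        return A * M_avg * theta_a ** 3 * delta / (gamma ** 2 * T)

    # Rough, illustrative silicon-like inputs (not the paper's values);
    # the output is only expected to be of the right order of magnitude.
    print(slack_kappa(M_avg=28.09, theta_a=512.0, delta=2.71,
                      gamma=1.06, T=300.0))
    ```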

  11. SemaTyP: a knowledge graph based literature mining method for drug discovery.

    PubMed

    Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian

    2018-05-30

    Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are currently the two main drug discovery methods, and they have successfully discovered a series of drugs. However, the development of new drugs remains an extremely time-consuming and expensive process. Biomedical literature contains important clues for the identification of potential treatments and can support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph from relations extracted from biomedical abstracts; a logistic regression model is then trained on the semantic types of paths of known drug therapies in the knowledge graph; finally, the learned model is used to discover drug therapies for new diseases. The experimental results show that our method can not only effectively discover new drug therapies for new diseases, but can also suggest the potential mechanism of action of the candidate drugs. In this paper we propose a novel knowledge graph-based literature mining method for drug discovery; it could serve as a supplementary method to current drug discovery approaches.
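
    A schematic of the path-classification step, assuming a toy path encoding and label set; the relation vocabulary below is invented for illustration and is not SemaTyP's actual schema:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy knowledge-graph paths encoded as sequences of semantic types
    # and relations (names are illustrative assumptions).
    paths = [
        "drug INHIBITS protein ASSOCIATED_WITH disease",
        "drug TREATS disease",
        "drug INTERACTS_WITH gene CAUSES disease",
        "drug STIMULATES protein PART_OF disease",
    ]
    labels = [1, 1, 0, 0]   # 1 = path corresponds to a known therapy

    # Bag-of-steps features over the types/relations along each path.
    vec = CountVectorizer(token_pattern=r"\S+")
    X = vec.fit_transform(paths)

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(vec.transform(["drug TREATS disease"])))
    ```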

  12. Drug-target interaction prediction from PSSM based evolutionary information.

    PubMed

    Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali

    2016-01-01

    The labor-intensive and expensive experimental process of drug-target interaction determination has motivated many researchers to focus on in silico prediction, which provides helpful information to support the experimental interaction data. They have therefore proposed several computational approaches for discovering new drug-target interactions. Several learning-based methods have been developed, which can be categorized into two main groups: similarity-based and feature-based. In this paper, we first use the bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins in predicting drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on the performance of the prediction, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large in comparison with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction are attained for the four datasets when we change the sampling method from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
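
    The bi-gram transformation itself is compact: for an L×20 PSSM P, the feature B[i, j] = Σk P[k, i]·P[k+1, j] summarizes consecutive-position amino acid propensities in a fixed 400-dimensional vector regardless of protein length. A sketch with a random stand-in profile:

    ```python
    import numpy as np

    def bigram_pssm_features(pssm):
        """Bi-gram features from an L x 20 PSSM: B[i, j] sums, over all
        consecutive residue pairs, the product of the score for amino
        acid i at one position and amino acid j at the next."""
        return (pssm[:-1].T @ pssm[1:]).ravel()   # shape (400,)

    pssm = np.random.rand(120, 20)   # stand-in for a real PSSM profile
    features = bigram_pssm_features(pssm)
    print(features.shape)            # (400,)
    ```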

  13. Toward General Software Level Silent Data Corruption Detection for Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrocal, Eduardo; Bautista-Gomez, Leonardo; Di, Sheng

    Silent data corruption (SDC) poses a great challenge for high-performance computing (HPC) applications as we move to extreme-scale systems. Mechanisms have been proposed that are able to detect SDC in HPC applications by using the peculiarities of the data (more specifically, its “smoothness” in time and space) to make predictions. However, these data-analytic solutions are still far from fully protecting applications to a level comparable with more expensive solutions such as full replication. In this work, we propose partial replication to overcome this limitation. More specifically, we have observed that not all processes of an MPI application experience the same level of data variability at exactly the same time. Thus, we can smartly choose and replicate only those processes for which the lightweight data-analytic detectors would perform poorly. In addition, we propose a new evaluation method based on the probability that a corruption will pass unnoticed by a particular detector (instead of just reporting overall single-bit precision and recall). In our experiments, we use four applications dealing with different explosions. Our results indicate that our new approach can protect the MPI applications analyzed with 7–70% less overhead (depending on the application) than that of full duplication, with similar detection recall.

  14. Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG

    PubMed Central

    O'Sullivan, James A.; Power, Alan J.; Mesgarani, Nima; Rajaram, Siddharth; Foxe, John J.; Shinn-Cunningham, Barbara G.; Slaney, Malcolm; Shamma, Shihab A.; Lalor, Edmund C.

    2015-01-01

    How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain–computer interfaces. PMID:24429136
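
    A minimal sketch of the stimulus-reconstruction approach using ridge regression over time-lagged EEG; the channel count, lag window, regularization, and random data below are illustrative assumptions, not the paper's recording setup:

    ```python
    import numpy as np

    def lagged(eeg, n_lags):
        """Stack time-lagged copies of the EEG (time x channels) so the
        decoder can integrate over a window of past samples."""
        T, C = eeg.shape
        X = np.zeros((T, C * n_lags))
        for k in range(n_lags):
            X[k:, k * C:(k + 1) * C] = eeg[:T - k]
        return X

    def train_decoder(eeg, envelope, n_lags=16, lam=1e2):
        """Ridge-regression decoder g mapping lagged EEG to the attended
        speech envelope (stimulus reconstruction)."""
        X = lagged(eeg, n_lags)
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                               X.T @ envelope)

    # Toy data: 60 s at 64 Hz, 8 channels (real studies use more).
    T, C = 60 * 64, 8
    eeg = np.random.randn(T, C)
    env_a, env_b = np.random.randn(T), np.random.randn(T)

    g = train_decoder(eeg, env_a)
    recon = lagged(eeg, 16) @ g
    # Attention decoding: which speaker's envelope matches better?
    attended = "A" if (np.corrcoef(recon, env_a)[0, 1]
                       > np.corrcoef(recon, env_b)[0, 1]) else "B"
    print(attended)
    ```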

  15. Development of a Multifidelity Approach to Acoustic Liner Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.

    2017-01-01

    The use of acoustic liners has proven to be extremely effective in reducing aircraft engine fan noise transmission/radiation. However, the introduction of advanced fan designs and shorter engine nacelles has highlighted a need for novel acoustic liner designs that provide increased fan noise reduction over a broader frequency range. To achieve aggressive noise reduction goals, advanced broadband liner designs, such as zone liners and variable impedance liners, will likely depart from conventional uniform impedance configurations. Therefore, educing the impedance of these axial- and/or spanwise-variable impedance liners will require models that account for three-dimensional effects, thereby increasing computational expense. Thus, it would seem advantageous to investigate the use of multifidelity modeling approaches to impedance eduction for these advanced designs. This paper describes an extension of the use of the CDUCT-LaRC code to acoustic liner impedance eduction. The proposed approach is applied to a hardwall insert and conventional liner using simulated data. Educed values compare well with those educed using two extensively tested and validated approaches. The results are very promising and provide justification to further pursue the complementary use of CDUCT-LaRC with the currently used finite element codes to increase the efficiency of the eduction process for configurations involving three-dimensional effects.

  16. Brief: Field measurements of casing tension forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigley, M.S.; Lewis, D.B.; Boswell, R.S.

    1995-02-01

    Tension forces acting on individual casing joints were accurately measured during installation of 10,158 ft of 9 5/8-in. × 47-lbm/ft casing and 11,960 ft of 11 7/8-in. × 71.8-lbm/ft casing. A unique casing load table (CLT) weighed the casing string after the addition of each casing joint. Strain gauges attached inside the pin ends of instrumented casing joints (ICJs) directly measured tension force on those joints. A high-speed computer data-acquisition system (DAS) automatically recorded data from all the sensors. Several casing joints were intentionally subjected to extreme deceleration to determine upper limits for dynamic tension forces. Data from these tests clearly show effects of wellbore friction and casing handling conditions. In every case, tension forces in the casing during maximum deceleration were considerably less than expected. In some cases, the highest tension forces occurred when the casing lifted out of the slips. Peak tension forces caused by setting the casing slips were typically no more than 5% greater than tension forces in the casing at rest. This dynamic amplification was far less than the 60% value used in the previous casing design method. Reducing the safety factor for installation loads has permitted use of lighter, less-expensive casing than dictated by older design criteria.

  17. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
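
    A compact sketch of the loop the paper describes: krige the sampled values, search the surrogate on a grid, then spend one true evaluation at the surrogate minimizer. The kernel, test function, and budget are illustrative assumptions, with scikit-learn's Gaussian process standing in for the kriging model:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_f(x):                  # stand-in for a costly simulation
        return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

    # Initial design: a few expensive evaluations.
    X = np.linspace(0.0, 1.0, 5)[:, None]
    y = expensive_f(X).ravel()

    grid = np.linspace(0.0, 1.0, 201)[:, None]
    for _ in range(10):
        # Kriging surrogate fitted to all evaluations so far.
        gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y)
        mu = gp.predict(grid)
        x_new = grid[np.argmin(mu)]      # grid search on the cheap surrogate
        X = np.vstack([X, [x_new]])
        y = np.append(y, expensive_f(x_new))   # one true evaluation

    print(X[np.argmin(y)], y.min())
    ```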

  18. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  19. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
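
    The core retriangularization step can be sketched densely with Givens rotations. This ignores the paper's vector-storage layout and paging-aware ordering, which are its actual contributions, and simply shows that a permuted upper-triangular factor can be restored by orthogonal rotations without changing the information it encodes:

    ```python
    import numpy as np

    def givens_retriangularize(A):
        """Restore upper-triangular form after a permutation of a factor,
        zeroing subdiagonal entries column by column with Givens rotations."""
        R = A.copy()
        n = R.shape[0]
        for j in range(n - 1):
            for i in range(n - 1, j, -1):
                a, b = R[i - 1, j], R[i, j]
                if b == 0.0:
                    continue
                r = np.hypot(a, b)
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])    # 2x2 orthogonal rotation
                R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
        return R

    n = 6
    R = np.triu(np.random.randn(n, n))    # dense stand-in for a SRIF factor
    perm = np.r_[1:n, 0]                  # cyclic permutation of the states
    Rp = R[:, perm]                       # columns permuted: not triangular
    Rt = givens_retriangularize(Rp)
    print(np.allclose(np.tril(Rt, -1), 0.0))   # triangular again
    print(np.allclose(Rt.T @ Rt, Rp.T @ Rp))   # same information matrix
    ```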

  20. Habitual control of goal selection in humans

    PubMed Central

    Cushman, Fiery; Morris, Adam

    2015-01-01

    Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050

  1. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 are presented. In strategizing topological constructs and blocking structures factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of grid is demonstrated through a comparison of computational and experimental results of the aeroheating environment experienced by the X-38 vehicle. Special topics on grid generation strategies are also addressed to model control surface deflections, and material mapping.

  2. OPEX: Optimized Eccentricity Computation in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson, Keith

    2011-11-14

    Real-world graphs have many properties of interest, but often these properties are expensive to compute. We focus on eccentricity, radius and diameter in this work. These properties are useful measures of the global connectivity patterns in a graph. Unfortunately, computing eccentricity for all nodes is O(n²) for a graph with n nodes. We present OPEX, a novel combination of optimizations which improves computation time of these properties by orders of magnitude in real-world experiments on graphs of many different sizes. We run OPEX on graphs with up to millions of links. OPEX gives either exact results or bounded approximations, unlike its competitors, which give probabilistic approximations or sacrifice node-level information (eccentricity) to compute graph-level information (diameter).
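
    One ingredient that such optimizations typically exploit fits in a few lines: a single BFS from a pivot u bounds every other eccentricity via max(ecc(u) − d(u,v), d(u,v)) ≤ ecc(v) ≤ ecc(u) + d(u,v), so repeated pivots shrink the bounds until most nodes never need their own traversal. A sketch, assuming this generic bounding strategy rather than OPEX's exact combination of optimizations:

    ```python
    import networkx as nx

    def eccentricity_bounds(G, pivot):
        """One BFS from a pivot bounds every node's eccentricity:
        max(ecc(u) - d(u, v), d(u, v)) <= ecc(v) <= ecc(u) + d(u, v)."""
        dist = nx.single_source_shortest_path_length(G, pivot)
        ecc_u = max(dist.values())
        return {v: (max(ecc_u - d, d), ecc_u + d) for v, d in dist.items()}

    G = nx.erdos_renyi_graph(500, 0.02, seed=1)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    bounds = eccentricity_bounds(G, next(iter(G)))
    gap = sum(hi - lo for lo, hi in bounds.values()) / len(bounds)
    print(f"mean eccentricity-bound gap after one BFS: {gap:.2f}")
    ```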

  3. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  4. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  5. All-Inorganic Perovskite Solar Cells.

    PubMed

    Liang, Jia; Wang, Caixing; Wang, Yanrong; Xu, Zhaoran; Lu, Zhipeng; Ma, Yue; Zhu, Hongfei; Hu, Yi; Xiao, Chengcan; Yi, Xu; Zhu, Guoyin; Lv, Hongling; Ma, Lianbo; Chen, Tao; Tie, Zuoxiu; Jin, Zhong; Liu, Jie

    2016-12-14

    The research field on perovskite solar cells (PSCs) is seeing frequent record breaking in the power conversion efficiency (PCE). However, organic-inorganic hybrid halide perovskites and organic additives in common hole-transport materials (HTMs) exhibit poor stability against moisture and heat. Here we report the successful fabrication of all-inorganic PSCs without any labile or expensive organic components. The entire fabrication process can be operated in ambient environment without humidity control (e.g., a glovebox). Even without encapsulation, the all-inorganic PSCs present no performance degradation in humid air (90-95% relative humidity, 25 °C) for over 3 months (2640 h) and can endure extreme temperatures (100 and -22 °C). Moreover, by elimination of expensive HTMs and noble-metal electrodes, the cost was significantly reduced. The highest PCE of the first-generation all-inorganic PSCs reached 6.7%. This study opens the door for next-generation PSCs with long-term stability under harsh conditions, making practical application of PSCs a real possibility.

  6. Tobacco control and direct democracy in Dade County, Florida: future implications for health advocates.

    PubMed

    Givel, M S; Glantz, S A

    2000-01-01

    In 1979 and 1980 in Dade County, Florida, a small grassroots advocacy group, Group Against Smoking Pollution (GASP), attempted to enact a clean indoor air ordinance through the initiative process. The tobacco industry's successful efforts to defeat the initiatives were expensive high-tech media-centered campaigns. Even though GASP's electoral resources were extremely limited for both initiatives, GASP utilized similar media-centered tactics. This approach attempted to defeat the tobacco industry in its own venue, in spite of the tobacco industry's vastly greater resources. Nevertheless, the industry defeated these ordinances by narrow margins because of broad voter support for the initiatives before the industry started its campaigns. Health advocates will never have the resources to match the tobacco industry in expensive high-tech media-centered initiative campaigns. Rather, their power lies in the general popularity of tobacco control legislation and their ability to mobilize broad grassroots efforts combined with an adequately funded media campaign.

  7. Evolving Markets for Commercial, Civil, and Military Services

    NASA Astrophysics Data System (ADS)

    Kaplan, Marshall H.

    2003-01-01

    Recent commercial failures in the LEO market, declining budgets for research, and other political factors have made it difficult for entrepreneurs and financial institutions to realize returns from investments in new space transportation systems and satellites. This paper explores the major factors impacting future markets that make use of our space infrastructure. At the top of the list is the high cost of space access. This has been extremely expensive, and will continue to be expensive as long as space access remains low on the nation's priority list. While launch prices have generally been reduced over the past several years, they remain well above the elastic range of supply and demand. Our best estimate is that it will take an order of magnitude reduction to significantly expand the market. Projections about market segments that will represent future winners in space and launch demand forecasts are presented. Future markets, outside of traditional strongholds, are explored, including a long-term view of new commercial space activities, conventional and ambitious future/futuristic activities, and related business aspects.

  8. Development and implementation of a method for solving the laminar and turbulent boundary layer equations

    NASA Astrophysics Data System (ADS)

    Leuca, Maxim

    CFD (Computational Fluid Dynamics) is a computational tool for studying flows in science and technology. The aerospace industry increasingly uses CFD in the modeling and design phases of aircraft, so the precision with which boundary-layer phenomena are simulated is very important. Research efforts are focused on optimizing the aerodynamic performance of airfoils to predict drag and delay the laminar-turbulent transition. CFD codes must be fast and efficient to model complex geometries for aerodynamic flows. Solving the boundary-layer equations for viscous flows requires a large amount of computing resources: CFD codes commonly used to simulate aerodynamic flows require extremely fine wall-normal meshes, and consequently the calculations are very expensive. This thesis proposes a new approach to solving the boundary-layer equations for laminar and turbulent flows, based on the finite difference method. Integrated into a panel code, this approach analyzes airfoils while avoiding iterative algorithms, which are usually costly in computing time and often suffer from convergence problems. The main advantages of panel methods are their simplicity and their ability to obtain, with minimal computational effort, solutions in complex flow conditions for relatively complicated configurations. To verify and validate the developed program, experimental data are used as references when available; otherwise the Xfoil code is used as a pseudo-reference, since in the absence of experimental data two codes cannot truly be validated against each other. Xfoil is a program that has proven to be accurate and inexpensive in computing resources. Developed by Drela (1985), it uses a two-equation integral method to design and analyze wing profiles at low speed (Drela and Youngren, 2014), (Drela, 2003). The NACA 0012, NACA 4412, and ATR-42 airfoils were used for this study. For the NACA 0012 and NACA 4412 airfoils, the calculations are made at Mach number M = 0.17 and Reynolds number Re = 6×10⁶, conditions for which experimental results are available. For the ATR-42 airfoil, the calculations are made at Mach number M = 0.1 and Reynolds number Re = 536450, as it was analysed in LARCASE's Price-Paidoussis wind tunnel. Keywords: boundary layer, direct method, displacement thickness, finite differences, Xfoil code.
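
    As a small concrete piece of the boundary-layer machinery named in the keywords, the displacement thickness δ* = ∫(1 − u/U_e) dy can be evaluated on a finite-difference grid with the trapezoidal rule; the velocity profile below is an invented stand-in, not a solution from the thesis code:

    ```python
    import numpy as np

    def displacement_thickness(y, u, U_e):
        """delta* = integral of (1 - u/U_e) dy across the layer,
        evaluated with the trapezoidal rule on the wall-normal grid."""
        f = 1.0 - u / U_e
        return float(0.5 * np.sum((f[1:] + f[:-1]) * np.diff(y)))

    y = np.linspace(0.0, 0.01, 101)       # wall-normal grid (m)
    U_e = 50.0                            # edge velocity (m/s)
    u = U_e * np.tanh(4.0 * y / y[-1])    # invented velocity profile
    print(displacement_thickness(y, u, U_e))
    ```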

  9. A City Manager Looks at Trends Affecting Public Libraries.

    ERIC Educational Resources Information Center

    Kemp, Roger L.

    1999-01-01

    Highlights some important conditions, both present and future, which will have an impact on public libraries. Discusses holding down expenses, including user fees, alternative funding sources, and private cosponsorship of programs; increasing productivity; use of computers and new technologies; staff development and internal marketing; improving…

  10. Computer Conferencing and Electronic Mail.

    ERIC Educational Resources Information Center

    Kaye, Tony

    This paper discusses a number of problems associated with distance education methods used in adult education and training fields, including limited opportunities for dialogue and group interaction among students and between students and tutors; the expense of updating and modifying mass-produced print and audiovisual materials; and the relative…

  11. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  12. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  13. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  14. 26 CFR 1.460-1 - Long-term contracts.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... attributable to designing the satellite and developing computer software using the PCM. Example 7. Non-long... customer has title to, control over, or bears the risk of loss from, the property manufactured or... as design and engineering costs, other than expenses attributable to bidding and negotiating...

  15. Maps and Map Learning in Social Studies

    ERIC Educational Resources Information Center

    Bednarz, Sarah Witham; Acheson, Gillian; Bednarz, Robert S.

    2006-01-01

    The importance of maps and other graphic representations has become more important to geography and geographers. This is due to the development and widespread diffusion of geographic (spatial) technologies. As computers and silicon chips have become more capable and less expensive, geographic information systems (GIS), global positioning satellite…

  16. Multi-Protocol LAN Design and Implementation: A Case Study.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1995-01-01

    Reports on the installation of a local area network (LAN) at East Carolina University. Topics include designing the network; computer labs and electronic mail; Internet connectivity; LAN expenses; and recommendations on planning, equipment, administration, and training. A glossary of networking terms is also provided. (AEF)

  17. Content range and precision of a computer adaptive test of upper extremity function for children with cerebral palsy.

    PubMed

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George; Mulcahey, M J

    2011-02-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized measures: Pediatric Outcomes Data Collection Instrument and Functional Independence Measure for Children. The UE CAT correlated strongly with the upper extremity component of these measures and had greater precision when describing individual functional ability. The UE item bank has wider range with items populating the lower end of the ability spectrum. This new UE item bank and CAT have the capability to quickly assess children of all ages and abilities with good precision and, most importantly, with items that are meaningful and appropriate for their age and level of physical function.

  18. Mapping snow depth return levels: smooth spatial modeling versus station interpolation

    NASA Astrophysics Data System (ADS)

    Blanchet, J.; Lehning, M.

    2010-12-01

    For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first one is based on the spatial interpolation of pointwise extremal distributions (the so-called Generalized Extreme Value distribution, GEV henceforth) computed at station locations. The second one is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation. Their regional variability can be revealed by removing the altitudinal dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
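
    A minimal sketch of the pointwise (first) approach at a single station, using scipy's GEV implementation; the synthetic maxima stand in for one station's 43 winters of data, and note that scipy's shape parameter c is the negative of the usual GEV ξ:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # Synthetic annual maximum snow depths (cm), standing in for one
    # station's record of winter maxima.
    rng = np.random.default_rng(0)
    maxima = genextreme.rvs(c=-0.1, loc=120, scale=30, size=43,
                            random_state=rng)

    # Pointwise approach: fit a GEV at the station, then read off the
    # T-year return level, the depth exceeded with probability 1/T
    # in any given winter.
    c, loc, scale = genextreme.fit(maxima)
    for T in (10, 50, 100):
        print(T, round(genextreme.isf(1.0 / T, c, loc, scale), 1))
    ```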

  19. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  20. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing.

    PubMed

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-03-06

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user's home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered.

  1. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing

    PubMed Central

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-01-01

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305

  2. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
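
    The ABC idea itself fits in a few lines. This sketch uses plain rejection sampling with a fixed tolerance and a toy Poisson simulator, whereas cosmoabc's Population Monte Carlo variant adapts the proposal distribution and tolerance across iterations:

    ```python
    import numpy as np

    def simulator(theta, n=200, rng=None):
        """Forward model producing a mock catalog given parameter theta
        (a toy Poisson stand-in for, e.g., cluster number counts)."""
        if rng is None:
            rng = np.random.default_rng()
        return rng.poisson(theta, size=n)

    def distance(obs, sim):
        """User-supplied distance between observed and synthetic catalogs."""
        return abs(obs.mean() - sim.mean())

    rng = np.random.default_rng(42)
    observed = simulator(5.0, rng=rng)

    # Rejection ABC: keep parameter draws whose mock data land close to
    # the observations; the accepted set approximates the posterior
    # without ever evaluating a likelihood function.
    draws = rng.uniform(0.0, 20.0, size=20000)
    posterior = [t for t in draws
                 if distance(observed, simulator(t, rng=rng)) < 0.2]
    print(np.mean(posterior), np.std(posterior))
    ```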

  3. The development of a computer-assisted instruction system for clinical nursing skills with virtual instruments concepts: A case study for intra-aortic balloon pumping.

    PubMed

    Chang, Ching-I; Yan, Huey-Yeu; Sung, Wen-Hsu; Shen, Shu-Cheng; Chuang, Pao-Yu

    2006-01-01

    The purpose of this research was to develop a computer-aided instruction system for intra-aortic balloon pumping (IABP) skills in clinical nursing, based on virtual instrument (VI) concepts. Computer graphic technologies were incorporated to provide not only static clinical nursing education, but also a simulation of operating an expensive medical instrument implemented with VI techniques. The nursing knowledge content was adapted from current, well-accepted clinical training materials. The VI functions were developed using computer graphics technology with photos of real medical instruments taken by digital camera. We hope the system can provide beginners in nursing education with important teaching assistance.

  4. A Zonal Approach for Prediction of Jet Noise

    NASA Technical Reports Server (NTRS)

    Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.

    1995-01-01

    A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and the computational expense is reduced substantially relative to a fully large-scale simulation.

  5. Geometric modeling of controlled third-class hinged mechanisms with a stand in one extreme position for cyclic automatic machines

    NASA Astrophysics Data System (ADS)

    Khomchenko, V. G.; Varepo, L. G.; Glukhov, V. I.; Krivokhatko, E. A.

    2017-06-01

    A geometric model for the synthesis of third-class lever mechanisms is proposed that allows the motion of the working organ to be remapped onto cyclograms with different predetermined dwell times by changing the length of the auxiliary link and the position of its fixed hinge. It is noted that, with the help of the proposed model and the corresponding geometric constructions, the best uniform Chebyshev approximation can be achieved over the dwell interval.

  6. Gas-Centered Swirl Coaxial Liquid Injector Evaluations

    NASA Technical Reports Server (NTRS)

    Cohn, A. K.; Strakey, P. A.; Talley, D. G.

    2005-01-01

    Development of liquid rocket engines is expensive, and extensive testing at large scales is usually required; verifying engine lifetime in particular demands a large number of tests, while only limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost-effective and could be a necessary (but not sufficient) condition for long engine lifetime, reducing the overall cost and risk of large-scale testing. Goal: determine the knowledge that can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.

  7. Effects of precision demands and mental pressure on muscle activation and hand forces in computer mouse tasks.

    PubMed

    Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap

    2004-02-05

    The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.

  8. Stable time filtering of strongly unstable spatially extended systems

    PubMed Central

    Grote, Marcus J.; Majda, Andrew J.

    2006-01-01

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant–Friedrichs–Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection–diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system. PMID:16682626

  9. Stable time filtering of strongly unstable spatially extended systems.

    PubMed

    Grote, Marcus J; Majda, Andrew J

    2006-05-16

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and with physical instabilities on both large and small scale. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Because ensembles are extremely expensive to generate, one such issue is whether it is possible under appropriate circumstances to take long time steps in an explicit difference scheme and violate the classical Courant-Friedrichs-Lewy (CFL)-stability condition yet obtain stable accurate filtering by using the observations. These issues are explored here both through elementary mathematical theory, which provides simple guidelines, and the detailed study of a prototype model. The prototype model involves an unstable finite difference scheme for a convection-diffusion equation, and it is demonstrated below that appropriate observations can result in stable accurate filtering of this strongly unstable spatially extended system.

  10. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
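
    A sketch of τ-EO applied to MAX-CUT, assuming the standard power-law-over-ranks selection with τ as the single parameter; the fitness definition, parameter values, and test graph are illustrative choices, not those of the paper:

    ```python
    import numpy as np
    import networkx as nx

    def tau_eo_maxcut(G, tau=1.4, steps=5000, seed=0):
        """tau-EO sketch for MAX-CUT: always perturb a poorly performing
        variable, chosen by a power law over the fitness ranking, so
        large 'avalanches' can carry the search out of local optima."""
        rng = np.random.default_rng(seed)
        nodes = list(G)
        side = {v: int(rng.integers(2)) for v in nodes}
        best = sum(side[u] != side[w] for u, w in G.edges)
        best_side = dict(side)
        p = np.arange(1, len(nodes) + 1, dtype=float) ** -tau
        p /= p.sum()
        for _ in range(steps):
            # Fitness of a node: fraction of its edges already cut.
            fit = {v: sum(side[v] != side[w] for w in G[v])
                      / max(G.degree(v), 1) for v in nodes}
            worst_first = sorted(nodes, key=lambda v: fit[v])
            v = worst_first[rng.choice(len(nodes), p=p)]  # rank-biased pick
            side[v] ^= 1                                  # unconditional flip
            cur = sum(side[u] != side[w] for u, w in G.edges)
            if cur > best:
                best, best_side = cur, dict(side)
        return best, best_side

    G = nx.erdos_renyi_graph(60, 0.1, seed=1)
    print(tau_eo_maxcut(G)[0], "edges cut of", G.number_of_edges())
    ```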

  11. E-waste: an assessment of global production and environmental impacts.

    PubMed

    Robinson, Brett H

    2009-12-20

    E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.

  12. Fast solver for large scale eddy current non-destructive evaluation problems

    NASA Astrophysics Data System (ADS)

    Lei, Naiguang

    Eddy current testing plays a very important role in non-destructive evaluations of conducting test samples. Based on Faraday's law, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically conducting test specimen. The eddy currents generate induced magnetic fields that oppose the direction of the inducing magnetic field in accordance with Lenz's law. In the presence of discontinuities in material property or defects in the test specimen, the induced eddy current paths are perturbed and the associated magnetic fields can be detected by coils or magnetic field sensors, such as Hall elements or magneto-resistance sensors. Due to the complexity of test specimens and inspection environments, theoretical simulation models are extremely valuable for studying the basic field/flaw interactions in order to obtain a fuller understanding of non-destructive testing phenomena. Theoretical models of the forward problem are also useful for training and validation of automated defect detection systems, since they generate defect signatures that are expensive to replicate experimentally. In general, modelling methods can be classified into two categories: analytical and numerical. Although analytical approaches offer closed-form solutions, these are generally not obtainable, largely due to the complex sample and defect geometries, especially in three-dimensional space. Numerical modelling has become popular with advances in computer technology and computational methods. However, due to the huge time consumption in the case of large-scale problems, accelerations/fast solvers are needed to enhance numerical models. This dissertation describes a numerical simulation model for eddy current problems using finite element analysis. Validation of the accuracy of this model is demonstrated via comparison with experimental measurements of steam generator tube wall defects. These simulations, which generate two-dimensional raster-scan data, typically take one to two days on a dedicated eight-core PC. A novel direct integral solver for eddy current problems and a GPU-based implementation are also investigated in this research to reduce the computational time.

  13. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
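
    A minimal sketch of the surrogate idea, not the authors' PDD/mCov-SI implementation: fit a small polynomial basis to a handful of expensive-model samples by least squares, then evaluate sensitivity indices cheaply on the surrogate. The model, basis, and sample sizes below are illustrative assumptions, and the indices computed are plain first-order (uncorrelated-input) indices rather than the paper's modified covariance-based indices.

```python
# Minimal sketch (not the authors' PDD/mCov-SI code): fit a quadratic
# polynomial surrogate to samples of a hypothetical expensive model by least
# squares, then estimate first-order variance-based sensitivity indices
# cheaply on the surrogate.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Hypothetical stand-in for a costly simulation with two uncertain inputs.
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

def features(Z):
    # Simple polynomial basis: 1, x1, x2, x1^2, x2^2, x1*x2.
    return np.column_stack([np.ones(len(Z)), Z[:, 0], Z[:, 1],
                            Z[:, 0] ** 2, Z[:, 1] ** 2, Z[:, 0] * Z[:, 1]])

# 1) Sample the uncertain parameters and run the expensive model once per sample.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = expensive_model(X)

# 2) Least-squares fit of the surrogate coefficients.
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surrogate = lambda Z: features(Z) @ coef

# 3) Cheap Monte Carlo on the surrogate: S_i = Var(E[Y | x_i]) / Var(Y).
Z = rng.uniform(-1.0, 1.0, size=(20000, 2))
total_var = surrogate(Z).var()
for i in range(2):
    cond_means = []
    for g in np.linspace(-1.0, 1.0, 50):
        Zc = Z.copy()
        Zc[:, i] = g                       # freeze input i, average over the rest
        cond_means.append(surrogate(Zc).mean())
    print(f"first-order sensitivity of input {i}: {np.var(cond_means) / total_var:.3f}")
```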

  14. Design and implementation of an inexpensive target scanner for the growth of thin films by the laser-ablation process

    NASA Astrophysics Data System (ADS)

    Rao, A. M.; Moodera, J. S.

    1991-04-01

    The design of a target scanner that is inexpensive and easy to construct is described. Our target scanner system does not require an expensive personal computer to raster the laser beam uniformly over the target material, unlike the computer-driven target scanners that are currently being used in the thin-film industry. The main components of our target scanner comprise a bidirectional motor, a two-position switch, and a standard optical mirror mount.

  15. CLOCS (Computer with Low Context-Switching Time) Operating System Reference Documents

    DTIC Science & Technology

    1988-05-06

    system are met. In sum, real-time constraints make programming harder in general, because they add a whole new dimension - the time dimension - to ...be preempted until it allows itself to be. More is Stored; Less is Computed Alan Jay Smith, of Berkeley, has said that any program can be made five...times as swift to run, at the expense of five times the storage space. While his numbers may be questioned, his premise may not: programs can be made

  16. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-07-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  17. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, Xiuying; Deng, Donglin; Yuan, Xinxing; Hou, Panyu; Huang, Yuanyuan; Duan, Luming; Department of Physics, University of Michigan Collaboration; CenterQuantum Information in Tsinghua University Team

    2017-04-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  18. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android

    DTIC Science & Technology

    2015-10-01

    ARL-TN-0681 ● OCT 2015 ● US Army Research Laboratory. Porting Extremely Lightweight Intrusion Detection (ELIDe) to Android, by Ken F Yu and Garret S Payer, Computational and Information Sciences Directorate, ARL.

  19. 11 CFR 9035.1 - Campaign expenditure limitation; compliance and fundraising exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...: (i) Coordinated expenditures under 11 CFR 109.20; (ii) Coordinated communications under 11 CFR 109.21... coordinated communications pursuant to 11 CFR 109.37 that are in-kind contributions received or accepted by... this section, 100% of salary, overhead and computer expenses incurred after a candidate's date of...

  20. 11 CFR 9035.1 - Campaign expenditure limitation; compliance and fundraising exemptions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...: (i) Coordinated expenditures under 11 CFR 109.20; (ii) Coordinated communications under 11 CFR 109.21... coordinated communications pursuant to 11 CFR 109.37 that are in-kind contributions received or accepted by... this section, 100% of salary, overhead and computer expenses incurred after a candidate's date of...

  1. Do Early Outs Work Out? Teacher Early Retirement Incentive Plans.

    ERIC Educational Resources Information Center

    Brown, Herb R.; Repa, J. Theodore

    1993-01-01

    School districts offer teacher early retirement incentive plans (TERIPs) as an opportunity to hire less expensive teachers, reduce fringe benefits costs, and eliminate teaching positions. Discusses reasons for teachers to accept TERIP, and describes a computer model that allows school officials to calculate and compare costs incurred if an…

  2. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  3. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  4. 14 CFR Section 24 - Profit and Loss Elements

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Maintenance Burden” shall reflect a memorandum allocation by each air carrier of the total expenses included... operation personnel in readiness for assignment to an in-flight status. (2) “Maintenance” shall include all... line 5 of this schedule. (f) “Operating Profit (Loss)” shall be computed by subtracting the total...

  5. 26 CFR 54.4980B-5 - COBRA continuation coverage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (for example, because of a divorce), the family deductible may be computed separately for each... the year. The plan provides that upon the divorce of a covered employee, coverage will end immediately... family had accumulated $420 of covered expenses before the divorce, as follows: $70 by each parent, $200...

  6. Reinforce Networking Theory with OPNET Simulation

    ERIC Educational Resources Information Center

    Guo, Jinhua; Xiang, Weidong; Wang, Shengquan

    2007-01-01

    As networking systems have become more complex and expensive, hands-on experiments based on networking simulation have become essential for teaching the key computer networking topics to students. The simulation approach is the most cost effective and highly useful because it provides a virtual environment for an assortment of desirable features…

  7. A DIY Ultrasonic Signal Generator for Sound Experiments

    ERIC Educational Resources Information Center

    Riad, Ihab F.

    2018-01-01

    Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathing, computer numerical control (CNC) machines,…

  8. Teaching Children Thinking

    ERIC Educational Resources Information Center

    Papert, Seymour

    2005-01-01

    The phrase "technology and education" usually means inventing new gadgets to teach the same old stuff in a thinly disguised version of the same old way. Moreover, if the gadgets are computers, the same old teaching becomes incredibly more expensive and biased towards its dullest parts, namely the kind of rote learning in which measurable…

  9. Don't Outsource It. Do It!

    ERIC Educational Resources Information Center

    Nuzzo, David

    1999-01-01

    Discusses outsourcing in library technical-services departments and how to make the department more cost-effective to limit the need for outsourcing as a less expensive alternative. Topics include experiences at State University of New York at Buffalo; efficient use of computers for in-house programs; and staff participation. (LRW)

  10. Recording Computer-Based Demonstrations and Board Work

    ERIC Educational Resources Information Center

    Spencer, Neil H.

    2010-01-01

    This article describes how a demonstration of statistical (or other) software can be recorded without expensive video equipment and saved as a presentation to be displayed with software such as Microsoft PowerPoint. Work carried out on a tablet PC, for example, can also be recorded in this fashion.

  11. Common Sense Wordworking III: Desktop Publishing and Desktop Typesetting.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1987-01-01

    Describes current desktop publishing packages available for microcomputers and discusses the disadvantages, especially in cost, for most personal computer users. Also described is a less expensive alternative technology--desktop typesetting--which meets the requirements of users who do not need elaborate techniques for combining text and graphics.…

  12. CRITTERS! A Realistic Simulation for Teaching Evolutionary Biology

    ERIC Educational Resources Information Center

    Latham, Luke G., II; Scully, Erik P.

    2008-01-01

    Evolutionary processes can be studied in nature and in the laboratory, but time and financial constraints result in few opportunities for undergraduate and high school students to explore the agents of genetic change in populations. One alternative to time consuming and expensive teaching laboratories is the use of computer simulations. We…

  13. Learning Hierarchical Skills for Game Agents from Video of Human Behavior

    DTIC Science & Technology

    2009-01-01

    intelligent agents for computer games is an important aspect of game development. However, traditional methods are expensive, and the resulting agents...Constructing autonomous agents is an essential task in game development. In this paper, we outlined a system that analyzes preprocessed video footage of

  14. Processing Polarity: How the Ungrammatical Intrudes on the Grammatical

    ERIC Educational Resources Information Center

    Vasishth, Shravan; Brussow, Sven; Lewis, Richard L.; Drenhaus, Heiner

    2008-01-01

    A central question in online human sentence comprehension is, "How are linguistic relations established between different parts of a sentence?" Previous work has shown that this dependency resolution process can be computationally expensive, but the underlying reasons for this are still unclear. This article argues that dependency…

  15. Low Cost Alternatives to Commercial Lab Kits for Physics Experiments

    ERIC Educational Resources Information Center

    Kodejška, C.; De Nunzio, G.; Kubinek, R.; Ríha, J.

    2015-01-01

    Conducting experiments in physics using modern measuring techniques, and particularly those utilizing computers, is often much more attractive to students than conducting experiments conventionally. However, professional kits are still very expensive for many schools in the Czech Republic. The basic equipment for one student workplace…

  16. Long-Range Budget Planning in Private Colleges and Universities

    ERIC Educational Resources Information Center

    Hopkins, David S. P.; Massy, William F.

    1977-01-01

    Computer models have greatly assisted budget planners in privately financed institutions to identify and analyze major financial problems. The implementation of such a model at Stanford University is described that considers student aid expenses, indirect cost recovery, endowments, price elasticity of enrollment, and student/faculty ratios.…

  17. Extreme Temperature Operation of a 10 MHz Silicon Oscillator Type STCL1100

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L.; Hammoud, Ahmad

    2008-01-01

    The performance of STMicroelectronics 10 MHz silicon oscillator was evaluated under exposure to extreme temperatures. The oscillator was characterized in terms of its output frequency stability, output signal rise and fall times, duty cycle, and supply current. The effects of thermal cycling and re-start capability at extreme low and high temperatures were also investigated. The silicon oscillator chip operated well with good stability in its output frequency over the temperature region of -50 C to +130 C, a range that by far exceeded its recommended specified boundaries of -20 C to +85 C. In addition, this chip, which is a low-cost oscillator designed for use in applications where great accuracy is not required, continued to function at cryogenic temperatures as low as -195 C, but at the expense of a drop in its output frequency. The STCL1100 silicon oscillator was also able to re-start at both -195 C and +130 C, and it exhibited no change in performance due to the thermal cycling. In addition, no physical damage was observed in the packaging material due to extreme temperature exposure and thermal cycling. Therefore, it can be concluded that this device could potentially be used in space exploration missions under extreme temperature conditions in microprocessor and other applications where tight clock accuracy is not critical. Beyond this screening evaluation, however, additional testing is required to fully establish the reliability of these devices and to determine their suitability for long-term use.

  18. Retrospective Review of Air Transportation Use for Upper Extremity Amputations at a Level-1 Trauma Center.

    PubMed

    Grantham, W Jeffrey; To, Philip; Watson, Jeffry T; Brywczynski, Jeremy; Lee, Donald H

    2016-08-01

    Air transportation to tertiary care centers of patients with upper extremity amputations has been utilized in hopes of reducing the time to potential replantation; however, this mode of transportation is expensive and not all patients will undergo replantation. The purpose of this study is to review the appropriateness and cost of air transportation in upper extremity amputations. Consecutive patients transported by aircraft with upper extremity amputations in a 7-year period at a level-1 trauma center were retrospectively reviewed. The distance traveled was recorded, along with the times of the injury, referral, transportation duration, arrival, and start of the operation. The results of the transfer were defined as replantation or revision amputation. Overall, 47 patients were identified, with 43 patients going to the operating room but only 14 patients (30%) undergoing replantation. Patients arrived at the tertiary hand surgery center with a mean time of 182.3 minutes following the injury, which includes 105.2 minutes of transportation time. The average distance traveled was 105.4 miles (range, 22-353 miles). The time before surgery of those who underwent replantation was 154.6 minutes. The average cost of transportation was $20,482. Air transportation for isolated upper extremity amputations is costly and is not usually the determining factor for replantation. The type of injury and patients' expectations often dictate the outcome, and these may be better determined at the time of referral with use of telecommunication photos, discussion with a hand surgeon, and patient counseling. Level of evidence: III.

  19. CASL VMA Milestone Report FY16 (L3:VMA.VUQ.P13.08): Westinghouse Mixing with STAR-CCM+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilkey, Lindsay Noelle

    2016-09-30

    STAR-CCM+ (STAR) is a high-resolution computational fluid dynamics (CFD) code developed by CD-adapco. STAR includes validated physics models and a full suite of turbulence models, including ones from the k-ε and k-ω families. STAR is being extended to handle two-phase flows, but the current focus of the software is single-phase flow. STAR can use imported meshes or its built-in meshing software to create computational domains for CFD. Since the solvers generally require a fine mesh for good computational results, the meshes used with STAR tend to number in the millions of cells, with that number growing with simulation and geometry complexity. The time required to model the flow of a full 5x5 Mixing Vane Grid Assembly (5x5MVG) in the current STAR configuration is on the order of hours, and can be very computationally expensive. COBRA-TF (CTF) is a low-resolution subchannel code that can be trained using high fidelity data from STAR. CTF does not have turbulence models and instead uses a turbulent mixing coefficient β. With a properly calibrated β, CTF can be used as a low-computational-cost alternative to expensive full CFD calculations performed with STAR. During the Hi2Lo work with CTF and STAR, STAR-CCM+ will be used to calibrate β and to provide high-resolution results that can be used in place of, and in addition to, experimental results to reduce the uncertainty in the CTF results.
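
    The Hi2Lo calibration idea can be sketched as a simple inverse problem: choose the scalar mixing coefficient β so that the cheap model best reproduces the high-fidelity data in a least-squares sense. The snippet below is a hypothetical stand-in (cheap_model is not CTF, and the "CFD data" are synthetic); it only illustrates the calibration loop.

```python
# Hypothetical sketch of the Hi2Lo idea: calibrate a scalar turbulent mixing
# coefficient beta in a cheap model so that it reproduces high-fidelity CFD
# "truth" data in a least-squares sense.  cheap_model() is a stand-in, not
# CTF, and the "CFD data" below are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

z = np.linspace(0.0, 1.0, 40)            # normalized axial positions

def cheap_model(beta, z):
    # Stand-in low-order model: mixing relaxes an initial temperature
    # difference between neighboring subchannels at a rate set by beta.
    return np.exp(-beta * z)

# Pretend this profile came from a fine-mesh CFD run (here: synthetic data).
beta_true = 3.2
cfd_data = cheap_model(beta_true, z) + np.random.default_rng(1).normal(0.0, 0.01, z.size)

def misfit(beta):
    return np.sum((cheap_model(beta, z) - cfd_data) ** 2)

result = minimize_scalar(misfit, bounds=(0.1, 10.0), method="bounded")
print(f"calibrated beta = {result.x:.2f}")   # close to 3.2
```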

  20. Extreme cyclone events in the Arctic during wintertime: Variability and Trends

    NASA Astrophysics Data System (ADS)

    Rinke, Annette; Maturilli, Marion; Graham, Robert; Matthes, Heidrun; Handorf, Doerthe; Cohen, Lana; Hudson, Stephen; Moore, John

    2017-04-01

    Extreme cyclone events are of significant interest as they can transport much heat, moisture, and momentum poleward. Associated impacts are warming and sea-ice breakup. Recently, several examples of such extreme weather events occurred in winter (e.g. during the N-ICE2015 campaign north of Svalbard and the storm Frank in the North Atlantic at the end of December 2015). With Arctic amplification and the associated reduced sea-ice cover and warmer sea surface temperatures, an increased occurrence of extreme cyclone events is a plausible scenario. We calculate the spatial patterns, changes and trends of the number of extreme cyclone events in the Arctic based on ERA-Interim six-hourly sea level pressure (SLP) data for winter (November-February) 1979-2015. Further, we analyze the SLP data from the Ny Alesund station for the same 37-year period. We define an extreme cyclone event by an extremely low central pressure (SLP below 985 hPa, which is the 5th percentile of the Ny Alesund/N-ICE2015 SLP data) and a deepening of at least 6 hPa/6 hours. Areas of highest frequency of occurrence of extreme cyclones are south/southeast of Greenland (corresponding to the Icelandic low), between Norway and Svalbard, and in the Barents/Kara Seas. The time series of the number of extreme cyclone events for Ny Alesund/N-ICE shows considerable interannual variability. The trend is not consistent through the winter: we detect an increase in early winter and a slight decrease in late winter. The former is due to the increased occurrence of longer events at the expense of short events. Furthermore, the difference patterns of the frequency of events for months following Septembers with high and low Arctic sea-ice extent ("low minus high sea ice") conform with the change patterns of extreme cyclone numbers (frequency of events "2000-2015 minus 1979-1994") and with the trend patterns. This indicates that the changes in extreme cyclone occurrence in early winter are associated with sea-ice changes (regional feedback). In contrast, different mechanisms via large-scale circulation changes/teleconnections seem to play a role in late winter.
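
    The stated detection criteria (central SLP below 985 hPa together with a deepening of at least 6 hPa over 6 hours) are straightforward to apply to a 6-hourly SLP series, as in the sketch below; the series here is synthetic rather than ERA-Interim or Ny Alesund data.

```python
# Sketch of the stated detection criteria applied to a 6-hourly SLP record:
# an event requires central SLP below 985 hPa together with a deepening of at
# least 6 hPa over the preceding 6 hours.  The series below is synthetic,
# standing in for ERA-Interim or station data.
import numpy as np

rng = np.random.default_rng(2)
slp = 1005.0 + np.cumsum(rng.normal(0.0, 2.5, size=500))   # hPa, 6-hourly
slp = np.clip(slp, 950.0, 1040.0)

deepening = np.diff(slp, prepend=slp[0])                   # hPa per 6 h (negative = deepening)
is_event = (slp < 985.0) & (deepening <= -6.0)

print("6-hourly steps meeting the extreme cyclone criteria:", int(is_event.sum()))
```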

  1. Optimized photonic gauge of extreme high vacuum with Petawatt lasers

    NASA Astrophysics Data System (ADS)

    Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor

    2014-03-01

    One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.

  2. Multidisciplinary propulsion simulation using the numerical propulsion system simulator (NPSS)

    NASA Technical Reports Server (NTRS)

    Claus, Russel W.

    1994-01-01

    Implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributions to the high cost is the need to perform many large scale system tests. The traditional design analysis procedure decomposes the engine into isolated components and focuses attention on each single physical discipline (e.g., fluid dynamics or structural dynamics). Consequently, the interactions that naturally occur between components and disciplines can be masked by the limited interactions that occur between individuals or teams doing the design and must be uncovered during expensive engine testing. This overview will discuss a cooperative effort of NASA, industry, and universities to integrate disciplines, components, and high performance computing into a Numerical Propulsion System Simulator (NPSS).

  3. College students and computers: assessment of usage patterns and musculoskeletal discomfort.

    PubMed

    Noack-Cooper, Karen L; Sommerich, Carolyn M; Mirka, Gary A

    2009-01-01

    A limited number of studies have focused on computer-use-related MSDs in college students, though risk factor exposure may be similar to that of workers who use computers. This study examined computer use patterns of college students, and made comparisons to a group of previously studied computer-using professionals. 234 students completed a web-based questionnaire concerning computer use habits and the physical discomfort that respondents specifically associated with computer use. As a group, students reported their computer use to be at least 'Somewhat likely' 18 out of 24 h/day, compared to 12 h for the professionals. Students reported more uninterrupted work behaviours than the professionals. Younger graduate students reported 33.7 average weekly computing hours, similar to hours reported by younger professionals. Students generally reported more frequent upper extremity discomfort than the professionals. Frequent assumption of awkward postures was associated with frequent discomfort. The findings signal a need for intervention, including training and education, prior to entry into the workforce. Students are future workers, and their increasing exposure to computers before entering the workforce may mean that they enter already injured or avoid their chosen profession because of upper extremity MSDs.

  4. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  5. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  6. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  7. Symmetrically private information retrieval based on blind quantum computing

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Yu, Jianping; Wang, Ping; Xu, Lingling

    2015-05-01

    Universal blind quantum computation (UBQC) is a new secure quantum computing protocol which allows a user Alice who does not have any sophisticated quantum technology to delegate her computing to a server Bob without leaking any privacy. Using the features of UBQC, we propose a protocol to achieve symmetrically private information retrieval, which allows a quantum limited Alice to query an item from Bob with a fully fledged quantum computer; meanwhile, the privacy of both parties is preserved. The security of our protocol is based on the assumption that malicious Alice has no quantum computer, which avoids the impossibility proof of Lo. For the honest Alice, she is almost classical and only requires minimal quantum resources to carry out the proposed protocol. Therefore, she does not need any expensive laboratory which can maintain the coherence of complicated quantum experimental setups.

  8. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of that study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  9. Current CFD Practices in Launch Vehicle Applications

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2012-01-01

    The quest for sustained space exploration will require the development of advanced launch vehicles, and efficient and reliable operating systems. Development of launch vehicles via a test-fail-fix approach is very expensive and time consuming. For decision making, modeling and simulation (M&S) has played increasingly important roles in many aspects of launch vehicle development. It is therefore essential to develop and maintain the most advanced M&S capability. More specifically, computational fluid dynamics (CFD) has been providing critical data for developing launch vehicles, complementing expensive testing. During the past three decades CFD capability has increased remarkably along with advances in computer hardware and computing technology. However, most of the fundamental CFD capability in launch vehicle applications is derived from past advances. Specific gaps in the solution procedures are being filled primarily through "piggy-backed" efforts on various projects while solving today's problems. Therefore, some of the advanced capabilities are not readily available for various new tasks, and mission-support problems are often analyzed using ad hoc approaches. The current report is intended to present our view on the state of the art (SOA) in CFD and its shortcomings in support of space transport vehicle development. Best practices in solving current issues will be discussed using examples from ascending launch vehicles. Some of the pacing items will be discussed in conjunction with these examples.

  10. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
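
    Recovering a 3-D point from two calibrated cameras is commonly done with linear (DLT) triangulation; the sketch below shows that standard textbook technique and is not the Gesture Therapy implementation (the projection matrices and the point are toy values).

```python
# Generic linear (DLT) triangulation from two calibrated cameras, a standard
# way to recover a 3-D point from two image observations.  This is a sketch
# of the textbook technique, not the Gesture Therapy implementation; the
# projection matrices and the point below are toy values.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """P1, P2: 3x4 camera projection matrices; pt1, pt2: (x, y) image coords."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.vstack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize to (X, Y, Z)

# Toy setup: two identical cameras with a 10 cm baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
point = np.array([0.05, 0.02, 0.5, 1.0])                     # true 3-D point (homogeneous)
project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
print(triangulate(P1, P2, project(P1, point), project(P2, point)))  # ~ [0.05, 0.02, 0.5]
```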

  11. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
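
    The MBAR estimator at the heart of such reweighting can be written as a short self-consistent iteration over reduced free energies; the pymbar package provides a robust production implementation, while the toy sketch below (synthetic harmonic-oscillator data, plain NumPy) only shows the structure of the equations.

```python
# Toy sketch of the MBAR self-consistent equations (pymbar provides a robust
# production implementation; this only shows the structure).
# u_kn[k, n] = reduced potential of sample n evaluated in state k;
# N_k[k]     = number of samples drawn from state k.
import numpy as np

def mbar_free_energies(u_kn, N_k, tol=1e-10, max_iter=10000):
    K, N = u_kn.shape
    f = np.zeros(K)                                   # reduced free energies
    for _ in range(max_iter):
        # log of the mixture denominator  sum_k N_k * exp(f_k - u_k(x_n))
        log_denom = np.logaddexp.reduce(np.log(N_k)[:, None] + f[:, None] - u_kn, axis=0)
        f_new = -np.logaddexp.reduce(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]                             # fix the arbitrary offset
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

# Synthetic example: two unit-width harmonic states with offset minima.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 500),        # samples from state 0
                    rng.normal(1.0, 1.0, 500)])       # samples from state 1
u_kn = np.vstack([0.5 * (x - 0.0) ** 2, 0.5 * (x - 1.0) ** 2])
print(mbar_free_energies(u_kn, np.array([500, 500])))  # ~ [0, 0]: equal-width wells
```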

  12. The immediate economic impact of maternal deaths on rural Chinese households.

    PubMed

    Ye, Fang; Wang, Haijun; Huntington, Dale; Zhou, Hong; Li, Yan; You, Fengzhi; Li, Jinhua; Cui, Wenlong; Yao, Meiling; Wang, Yan

    2012-01-01

    To identify the immediate economic impact of maternal death on rural Chinese households. Results are reported from a study that matched 195 households that had suffered a maternal death to 384 households that experienced a childbirth without maternal death in rural areas of three provinces in China, using a quantitative questionnaire to compare direct and indirect costs between the two groups. The direct costs of a maternal death were significantly higher than the costs of a childbirth without a maternal death (US$4,119 vs. $370, p<0.001). More than 40% of the direct costs were attributed to funeral expenses. Hospitalization and emergency care expenses were the largest proportion of non-funeral direct costs and were higher in households with maternal death than in the comparison group (US$2,248 vs. $305, p<0.001). To cover most of the high direct costs, 44.1% of affected households utilized compensation from hospitals, and the remaining affected households (55.9%) relied on borrowing money or taking loans as the major source of funds to offset direct costs. The median economic burden of the direct (and non-reimbursed) costs of a maternal death was quite high, at 37.0% of the household's annual income, which was approximately 4 times the threshold for an expense to be considered catastrophic. The immediate direct costs of maternal deaths are extremely catastrophic for the rural Chinese households in the three provinces studied.

  13. The future of scientific workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Peterka, Tom; Altintas, Ilkay

    Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

  14. Child Poverty: Definition and Measurement.

    PubMed

    Short, Kathleen S

    2016-04-01

    This article provides a discussion of what we mean when we refer to 'child poverty.' Many images come to mind when we discuss child poverty, but when we try to measure and quantify the extent of child poverty, we often use a very narrow concept. In this article a variety of poverty measures that are used in the United States are described and some of the differences between those measures are illustrated. In this article 3 measures are explored in detail: a relative measure of poverty that is used more often in an international context, the official US poverty measure, and a new supplemental poverty measure (SPM). The new measure differs from the other 2 because it takes into account noncash benefits that are provided to poor families. These include nutrition assistance such as food stamps, subsidized housing, and home energy assistance. The SPM also takes account of necessary expenses that families face, such as taxes and expenses related to work and health care. Comparing estimates for 2012, the SPM showed lower poverty rates for children than the other 2 measures. Because noncash benefits help those in extreme poverty, there were also lower percentages of children in extreme poverty with resources below half the SPM threshold. These results suggest that 2 important measures of poverty, the relative measure used in international comparisons, and the official poverty measure, are not able to gauge the effect of government programs on the alleviation of poverty, and the SPM illustrates that noncash benefits do help families meet their basic needs. Published by Elsevier Inc.

  15. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper use either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models; these run on clusters that provide high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  16. Simple, inexpensive computerized rodent activity meters.

    PubMed

    Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M

    1995-10-01

    We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.

  17. Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2010-01-01

    This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.

  18. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  20. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  1. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
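
    The cost contrast in the abstract (central finite differences need two extra full simulations per parameter, whereas the adjoint approach needs at most one extra simulation in total) can be made concrete with a small sketch. The solve() function below is a hypothetical stand-in for an expensive FDTD run, not the authors' solver.

```python
# Cost contrast sketched in code: central finite differences need two extra
# solves per parameter, whereas an adjoint approach needs at most one extra
# solve in total.  solve() is a hypothetical stand-in for an expensive FDTD
# simulation returning a scalar response.
import numpy as np

def solve(params):
    return np.sin(params[0]) * np.exp(-params[1]) + params[2] ** 2

params = np.array([0.4, 0.2, 0.1])
h = 1e-6

fd_grad = np.zeros_like(params)
for i in range(len(params)):                 # 2 * len(params) expensive solves
    dp = np.zeros_like(params)
    dp[i] = h
    fd_grad[i] = (solve(params + dp) - solve(params - dp)) / (2.0 * h)

print("central-difference gradient:", fd_grad)
print("extra solves used:", 2 * len(params), "(an adjoint method would use at most 1)")
```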

  2. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second layer of electronic structure method to recover all the missing long-range interactions in the parent large molecule. Overall, the work featured here dramatically decreases the computational expense and empowers the execution of very accurate ab initio calculations (gold-standard CCSD(T)) on large molecules and thereby facilitates sophisticated electronic structure applications to a wide range of important chemical problems.

  3. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
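
    The accuracy metric used here, a root-mean-square difference of ground-level concentrations against a finer-resolution run taken as "truth", is simple to compute once the two fields are available; the sketch below uses synthetic arrays in place of actual GEOS-Chem output.

```python
# Sketch of the accuracy metric described above: the root-mean-square
# difference of ground-level concentrations between a coarse-operator-duration
# run and a finer-resolution run taken as "truth".  The fields are synthetic.
import numpy as np

rng = np.random.default_rng(4)
truth = rng.uniform(10.0, 40.0, size=(91, 144))         # e.g. ozone (ppb) on a 2 x 2.5 degree grid
coarse_run = truth + rng.normal(0.0, 1.5, size=truth.shape)

rmsd = np.sqrt(np.mean((coarse_run - truth) ** 2))
print(f"RMS difference vs. fine-resolution truth: {rmsd:.2f} ppb")
```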

  4. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage the chemical transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.

  5. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose, but these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
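
    A minimal sketch of the greedy, minimax flavor of the design step described above, assuming an ensemble of observation-sensitivity (Jacobian) matrices drawn from posterior parameter samples; the array shapes and names are illustrative, not the authors' implementation.

        import numpy as np

        def greedy_minimax_design(J_samples, n_pick):
            # J_samples: (n_samples, n_candidates, n_params) sensitivities.
            # Greedily add the candidate observation that maximizes the
            # worst-case (over parameter samples) D-optimality log-determinant.
            n_params = J_samples.shape[2]
            chosen = []
            for _ in range(n_pick):
                best, best_score = None, -np.inf
                for j in range(J_samples.shape[1]):
                    if j in chosen:
                        continue
                    rows = chosen + [j]
                    score = min(
                        np.linalg.slogdet(J[rows].T @ J[rows]
                                          + 1e-9 * np.eye(n_params))[1]
                        for J in J_samples)
                    if score > best_score:
                        best, best_score = j, score
                chosen.append(best)
            return chosen

        rng = np.random.default_rng(1)
        picks = greedy_minimax_design(rng.standard_normal((5, 20, 3)), n_pick=4)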

  6. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in application.
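
    One standard reading of that definition, assuming upper-tail truncation at the p-th quantile, is the conditional mean E[X | X > x_p] = mu + sigma * phi(z_p) / (1 - Phi(z_p)), i.e., the inverse Mills ratio; the paper's exact truncation convention is an assumption here.

        import numpy as np
        from scipy.stats import norm

        def extreme_mean(mu, sigma, p):
            # mean of N(mu, sigma^2) beyond its p-th quantile (inverse Mills ratio)
            z = norm.ppf(p)
            return mu + sigma * norm.pdf(z) / (1.0 - norm.cdf(z))

        extreme_mean(0.0, 1.0, 0.95)  # ~2.06 for a standard normal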

  7. Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing

    ERIC Educational Resources Information Center

    Loh, Matthew

    2013-01-01

    The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural…

  8. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    PubMed

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained traction among scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding the interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, the protocol for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.
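
    For readers new to the area, the integrator at the heart of nearly all MD codes is velocity Verlet; the sketch below is a generic single-particle illustration, not tied to any of the packages the review surveys.

        import numpy as np

        def velocity_verlet(x, v, force, mass, dt, n_steps):
            # symplectic velocity-Verlet integration: half-kick, drift, half-kick
            f = force(x)
            for _ in range(n_steps):
                v_half = v + 0.5 * dt * f / mass
                x = x + dt * v_half
                f = force(x)
                v = v_half + 0.5 * dt * f / mass
            return x, v

        # example: one particle in a harmonic well
        x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               force=lambda x: -x, mass=1.0, dt=0.01, n_steps=1000)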

  9. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures.

    PubMed

    Zhan, Yijian; Meschke, Günther

    2017-07-08

    The effective analysis of the nonlinear behavior of cement-based engineering structures demands not only physically reliable models but also computationally efficient algorithms. Based on a continuum interface element formulation that is suitable for capturing complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures, progressively disintegrating the initial finite element mesh and adding degenerated solid elements into the interfacial gaps. Compared with an implementation in which the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows the failure behavior of plain and fiber-reinforced concrete structures to be simulated with remarkably reduced computational expense.
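
    The adaptive step can be pictured as a simple criterion-driven insertion loop. The sketch below is schematic, with hypothetical data structures (a face-to-traction map standing in for the finite element mesh); only faces that exceed an assumed tensile strength are disintegrated.

        f_t = 3.0  # MPa, assumed tensile strength criterion

        tractions = {0: 1.2, 1: 3.8, 2: 0.4, 3: 5.1}  # face id -> normal traction (MPa)
        interface_elements = set()                     # faces already disintegrated

        def adapt(tractions, interface_elements, f_t):
            # insert a zero-thickness interface element (in a real mesh:
            # duplicate nodes) wherever the cracking criterion is newly exceeded
            for fid, t in tractions.items():
                if fid not in interface_elements and t > f_t:
                    interface_elements.add(fid)
            return interface_elements

        adapt(tractions, interface_elements, f_t)      # -> {1, 3}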

  10. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue in successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are in demand. The available codes for computing photonic bands are also applied to PhC waveguides; they are reliable but not very efficient, a shortcoming that is even more pronounced for dispersive materials. We present a method based on higher-order finite elements with curved cells, which makes it possible to solve for the band structure while taking the dispersiveness of the materials directly into account. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave vectors k. For this method, we demonstrate high efficiency for the computation of guided PhC waveguide modes through a convergence analysis.
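
    The abstract's key move is recasting the dispersive band-structure problem as an eigenproblem that is linear in the complex wave vector k. One common route (an assumption here, sketched with random stand-in matrices rather than the paper's finite element matrices) is companion linearization of a quadratic pencil (A0 + k A1 + k^2 A2) x = 0.

        import numpy as np
        from scipy.linalg import eig

        def solve_k_eigenproblem(A0, A1, A2):
            # linearize (A0 + k*A1 + k^2*A2) x = 0 via y = k*x, then solve
            # the generalized linear eigenproblem L [x; y] = k M [x; y]
            n = A0.shape[0]
            I, Z = np.eye(n), np.zeros((n, n))
            L = np.block([[Z, I], [-A0, -A1]])
            M = np.block([[I, Z], [Z, A2]])
            k, V = eig(L, M)
            return k, V[:n]

        rng = np.random.default_rng(0)
        A0, A1, A2 = (rng.standard_normal((3, 3)) for _ in range(3))
        k, modes = solve_k_eigenproblem(A0, A1, A2)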

  11. Adaptive Crack Modeling with Interface Solid Elements for Plain and Fiber Reinforced Concrete Structures

    PubMed Central

    Zhan, Yijian

    2017-01-01

    The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130

  12. Enabling Earth Science: The Facilities and People of the NCCS

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The NCCS's mass data storage system allows scientists to store and manage the vast amounts of data generated by these computations, and its high-speed network connections allow the data to be accessed quickly from the NCCS archives. Some NCCS users perform studies that are directly related to their ability to run computationally expensive and data-intensive simulations. Because the number and type of questions scientists research often are limited by computing power, the NCCS continually pursues the latest technologies in computing, mass storage, and networking. Just as important as the processors, tapes, and routers of the NCCS are the personnel who administer this hardware, create and manage accounts, maintain security, and assist the scientists, often working one on one with them.

  13. Comment

    NASA Astrophysics Data System (ADS)

    2001-07-01

    Good teaching isn't a hardware problem Stuart Robertson, a physics teacher by training, now works to ensure that teachers are fully trained to use Information and Communication Technology (ICT) and that all Scottish students leave school competent with the basics of using computers. He addressed the Stirling meeting of physics teachers at the end of May. So, how do governments measure progress with ICT? They measure the numbers of schools with full internet access, the proportion of teachers with e-mail, the numbers of computers in classrooms and so on. One of England's most successful state schools (by exam results) boasts 26 interactive whiteboards, and in the UK there seems to be a feeling that lots of hardware = good school. Teaching isn't that simple. We don't need expensive research to know that just using a computer won't necessarily make teaching better. Robertson knows this and advises: don't be driven by technology—be driven by what you can do with it. Good teaching has always been about using the resources at hand, and it still is. Our aim at Physics Education is to support the teaching of physics by reviewing and discussing new teaching tools—hardware and software (see Reviews). That's not to say that we must all be using expensive electronic boxes of tricks to reinforce every concept. We don't need computers to teach physics. I really doubt that my teachers, back in the 1970s, would have taught me much more physics if we had had computers in our lab. In this issue of Physics Education we have examples of some very straightforward demonstrations and experiments—with no computer involvement whatsoever. But we also have some computer-interfaced activities and some computer-based investigations. We recognize that some institutions have an erratic electricity supply and few, if any, computers. Others are being driven to use as much electronic gadgetry as possible, following the mistaken assumption that this is, in itself, educationally better. Other schools and colleges are exploring electronic learning through the internet and virtual labs (see Steve Mellema's use of IT in his Lecture for the 21st Century). We aim to provide useful material for everybody at and in between the extremes. But some words of caution, sounded by Robertson in Stirling: today we might find that our classes are motivated and interested when we use computers, but how long will the excitement last? If every lesson faces children with computer screens, will they soon get bored and demotivated? Individual learning, through worksheets, was a great success when it was developed in the 70s, but when every lesson faced a child with yet another worksheet, students were turned off. It became known as 'death by a thousand worksheets'. Let's not abuse computers in the same way. Physics for the beach and the igloo Physics is about being cool, as we are always trying to tell our students! In this 'summer' issue we have two papers which allow us to demonstrate this practically. I should also like to remind readers that Physics Education is available online (www.iop.org/Journals/pe) in addition to the paper version. The electronic version has the advantages of hotlinks to websites, search facilities and the ability to download teaching materials. My guess is that we haven't begun to explore the possibilities of the electronic journal as a teaching resource for teachers. If it can be stored electronically, we can include it as a multimedia clip: pictures, worksheets, spreadsheets, videos, sounds...
But there are also many advantages of paper—convenience and permanence being just two. Physics Education is a worthwhile publication and it feels like that in your hand. Having a journal like this, to put in my bag, or stack on my bookshelf, still feels good to me. And judging by readers' comments, you agree. IOPP will, no doubt, support both formats for a long time to come. So, this summer, enjoy the format you are reading, read Physics Education on screen or on the beach, reflect on your teaching, your students' learning and remind yourself that physics really can be cool. Editor: Kerry Parker

  14. Current capabilities for simulating the extreme distortion of thin structures subjected to severe impacts

    NASA Technical Reports Server (NTRS)

    Key, Samuel W.

    1993-01-01

    The explicit transient dynamics technology in use today for simulating the impact and subsequent transient dynamic response of a structure has its origins in the 'hydrocodes' dating back to the late 1940's. The growth in capability in explicit transient dynamics technology parallels the growth in speed and size of digital computers. Computer software for simulating the explicit transient dynamic response of a structure is characterized by algorithms that use a large number of small steps. In explicit transient dynamics software there is a significant emphasis on speed and simplicity. The finite element technology used to generate the spatial discretization of a structure is based on a compromise between completeness of the representation for the physical processes modelled and speed in execution. That is, since it is expected in every calculation that the deformation will be finite and the material will be strained beyond the elastic range, the geometry and the associated gradient operators must be reconstructed, as well as complex stress-strain models evaluated at every time step. As a result, finite elements derived for explicit transient dynamics software use the simplest and barest constructions possible for computational efficiency while retaining an essential representation of the physical behavior. The best example of this technology is the four-node bending quadrilateral derived by Belytschko, Lin and Tsay. Today, the speed, memory capacity and availability of computer hardware allows a number of the previously used algorithms to be 'improved.' That is, it is possible with today's computing hardware to modify many of the standard algorithms to improve their representation of the physical process at the expense of added complexity and computational effort. The purpose is to review a number of these algorithms and identify the improvements possible. In many instances, both the older, faster version of the algorithm and the improved and somewhat slower version of the algorithm are found implemented together in software. Specifically, the following seven algorithmic items are examined: the invariant time derivatives of stress used in material models expressed in rate form; incremental objectivity and strain used in the numerical integration of the material models; the use of one-point element integration versus mean quadrature; shell elements used to represent the behavior of thin structural components; beam elements based on stress-resultant plasticity versus cross-section integration; the fidelity of elastic-plastic material models in their representation of ductile metals; and the use of Courant subcycling to reduce computational effort.
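
    The core of such codes is a central-difference update with a lumped (diagonal) mass matrix, so each of the many small steps needs no system solve. Below is a generic single-degree-of-freedom sketch of that update, not any particular hydrocode; the internal-force callback stands in for the element-level evaluation the abstract discusses.

        import numpy as np

        def explicit_step(u, v, a, f_ext, f_int, m, dt):
            # half-kick, drift, recompute internal force, half-kick;
            # diagonal mass makes the acceleration update a cheap division
            v_half = v + 0.5 * dt * a
            u_new = u + dt * v_half
            a_new = (f_ext - f_int(u_new)) / m
            v_new = v_half + 0.5 * dt * a_new
            return u_new, v_new, a_new

        # example: linear spring, f_int(u) = k*u
        u, v, a = 0.0, 1.0, 0.0
        for _ in range(100):
            u, v, a = explicit_step(u, v, a, 0.0, lambda x: 4.0 * x, 1.0, 0.01)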

  15. Fault-Tolerant, Radiation-Hard DSP

    NASA Technical Reports Server (NTRS)

    Czajkowski, David

    2011-01-01

    Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables the use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or on triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and, in some cases, significant speed. The benefits of COTS high-performance signal processing include a significant increase in onboard science data processing, enabling orders-of-magnitude reduction in required communication bandwidth for science data return, orders-of-magnitude improvement in onboard mission planning and critical decision making, and the ability to rapidly respond to changing mission environments, thus enabling opportunistic science and orders-of-magnitude reduction in the cost of mission operations through reduction of required staff. Additional benefits of COTS-based, high-performance signal processing include the ability to leverage considerable commercial and academic investments in advanced computing tools, techniques, and infrastructure, and the familiarity of the science and IT community with these computing environments.
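
    The voting idea behind TTMR is easy to state in code. The toy sketch below is purely schematic: on a VLIW DSP the three executions would be scheduled on separate functional units and separated in time, not written as plain function calls.

        def ttmr(op, x):
            # execute the same operation three times and majority-vote the results
            r1, r2, r3 = op(x), op(x), op(x)
            if r1 == r2 or r1 == r3:
                return r1          # r1 agrees with at least one other copy
            return r2              # r1 took the upset; r2 == r3 by assumption

        ttmr(lambda x: x * x, 7)   # -> 49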

  16. Crunching Knowledge: The Coming Environment for the Information Specialist.

    ERIC Educational Resources Information Center

    Nelson, Milo

    The adjustment of librarians to technological change has been difficult because they have been too close observers of the present at the expense of daydreaming about society's likely future. The brisk pace of business, industry, and Wall Street has been accelerated even more by developments in information technology and computer communications. A…

  17. Efficacy and Utility of Computer-Assisted Cognitive Behavioural Therapy for Anxiety Disorders

    ERIC Educational Resources Information Center

    Przeworski, Amy; Newman, Michelle G.

    2006-01-01

    Despite the efficacy of cognitive behavioural treatment for anxiety disorders, more than 70% of individuals with anxiety disorders go untreated every year. This is partially due to obstacles to treatment including limited access to mental health services for rural residents, the expense of treatment and the inconvenience of attending weekly…

  18. Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…

  19. 19 CFR 10.710 - Value-content requirement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., character, or use, which is then used in Jordan in the production or manufacture of a new or different... production or manufacture of a new or different article of commerce that is imported into the United States... determined by computing the sum of: (A) All expenses incurred in the growth, production, or manufacture of...

  20. A Simple, Low-Cost, Data-Logging Pendulum Built from a Computer Mouse

    ERIC Educational Resources Information Center

    Gintautas, Vadas; Hubler, Alfred

    2009-01-01

    Lessons and homework problems involving a pendulum are often a big part of introductory physics classes and laboratory courses from high school to undergraduate levels. Although laboratory equipment for pendulum experiments is commercially available, it is often expensive and may not be affordable for teachers on fixed budgets, particularly in…

  1. 26 CFR 1.863-3 - Allocation and apportionment of income from certain sales of inventory.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... income from sources within and without the United States determined under the 50/50 method. Research and... Possession Purchase Sales—(A) Business activity method. Gross income from Possession Purchase Sales is... from Possession Purchase Sales computed under the business activity method, the amounts of expenses...

  2. 30 CFR 206.353 - How do I determine transmission deductions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Depreciation under paragraphs (g) and (h) of this section and a return on undepreciated capital investment under paragraphs (g) and (i) of this section or (iv) A return on the capital investment in the..., are not allowable expenses. (g) To compute costs associated with capital investment, a lessee may use...

  3. 30 CFR 206.354 - How do I determine generating deductions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Depreciation under paragraphs (g) and (h) of this section and a return on undepreciated capital investment under paragraphs (g) and (i) of this section; or (iv) A return on capital investment in the power plant... allowable expenses. (g) To compute costs associated with capital investment, a lessee may use either...

  4. An Authoring System for Creating Computer-Based Role-Performance Trainers.

    ERIC Educational Resources Information Center

    Guralnick, David; Kass, Alex

    This paper describes a multimedia authoring system called MOPed-II. Like other authoring systems, MOPed-II reduces the time and expense of producing end-user applications by eliminating much of the programming effort they require. However, MOPed-II reflects an approach to authoring tools for educational multimedia which is different from most…

  5. Cost Effective Computer-Assisted Legal Research, or When Two Are Better Than One.

    ERIC Educational Resources Information Center

    Griffith, Cary

    1986-01-01

    An analysis of pricing policies and costs of LEXIS and WESTLAW indicates that it is less expensive to subscribe to both using a PC microcomputer rather than a dedicated terminal. Rules for when to use each database are essential to lowering the costs of online legal research. (EM)

  6. Hardware for Hard-Up Schools?

    ERIC Educational Resources Information Center

    St. John, Stuart A.

    2012-01-01

    The purpose of this work was to investigate ways in which everyday computers can be used in schools to fulfil several of the roles of more expensive, specialized laboratory equipment for teaching and learning purposes. The brief adopted was to keep things as straightforward as possible so that any school science department with a few basic tools…

  7. 12 CFR 563.170 - Examinations and audits; appraisals; establishment and maintenance of records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... any time, by the Office, with appraisals when deemed advisable, in accordance with general policies from time to time established by the Office. The costs, as computed by the Office, of any examinations made by it, including office analysis, overhead, per diem, travel expense, other supervision by the...

  8. Budgeting for Quality and Survival in the 21st Century--Guidelines for Directors.

    ERIC Educational Resources Information Center

    Whitehead, R. Ann

    2003-01-01

    Offers practical guidelines for directors of child care centers on creating a budget and managing the center's finances. Suggests ways to establish priorities, establish a tuition rate, compute projected monthly enrollment and income, budget variable and fixed expenses, create the final budget, and monitor financial statements. (JPB)

  9. 26 CFR 1.50B-4 - Partnerships.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Partnerships. 1.50B-4 Section 1.50B-4 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Rules for Computing Credit for Expenses of Work Incentive Programs § 1.50B-4 Partnerships. (a) General rule—(1) In general...

  10. A glacier runoff extension to the Precipitation Runoff Modeling System

    Treesearch

    A. E. Van Beusekom; R. J. Viger

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...

  11. 24 CFR 990.165 - Computation of project expense level (PEL).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) Ownership type (profit, non-profit, or limited dividend); and (10) Geographic. (c) Cost adjustments. HUD... ceiling; (3) Application of a four percent reduction for any PEL calculated over $325 PUM, with the reduction limited so that a PEL will not be reduced to less than $325; and (4) The reduction of audit costs...
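
    The four-percent reduction described above is a small, mechanical computation; a minimal sketch, assuming the rule exactly as quoted (PELs above $325 PUM are reduced by four percent, but never below $325):

        def adjusted_pel(pel):
            # apply the 4% reduction, floored so the PEL never drops below $325 PUM
            if pel > 325.0:
                return max(0.96 * pel, 325.0)
            return pel

        adjusted_pel(400.0)  # -> 384.0
        adjusted_pel(330.0)  # -> 325.0 (the floor binds)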

  12. DYNER: A DYNamic ClustER for Education and Research

    ERIC Educational Resources Information Center

    Kehagias, Dimitris; Grivas, Michael; Mamalis, Basilis; Pantziou, Grammati

    2006-01-01

    Purpose: The purpose of this paper is to evaluate the use of an inexpensive dynamic computing resource, consisting of a Beowulf-class cluster and a NoW, as an educational and research infrastructure. Design/methodology/approach: Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, provide…

  13. 37 CFR 385.23 - Royalty rates and subscriber-based royalty floors for specific types of services.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND... DIGITAL PHONORECORDS Limited Offerings, Mixed Service Bundles, Music Bundles, Paid Locker Services and... expensed for the rights to make the relevant permanent digital downloads and ringtones. (b) Computation of...

  14. Simulating the fate of fall- and spring-applied poultry litter nitrogen in corn production

    USDA-ARS?s Scientific Manuscript database

    Monitoring the fate of N derived from manures applied to fertilize crops is difficult, time-consuming, and relatively expensive. But computer simulation models can help in understanding the interactions among various N processes in the soil-plant system and in determining the fate of applied N. The RZWQM2 was ...

  15. State-of-the-art methods for testing materials outdoors

    Treesearch

    R. Sam Williams

    2004-01-01

    In recent years, computers, sensors, microelectronics, and communication technologies have made it possible to automate the way materials are tested in the field. It is now possible to purchase monitoring equipment to measure weather and materials properties. The measurement of materials response often requires innovative approaches and added expense, but the...

  16. Introduction to Parallel Computing

    DTIC Science & Technology

    1992-05-01

    Instruction Stream, Multiple Data Stream Machines... 2.4 Networks of Machines... independent memory units and connecting them to the processors by an interconnection network. Many different interconnection schemes have been considered, and... connected to the same processor at the same time. Crossbar switching networks are still too expensive to be practical for connecting large numbers of

  17. Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Stanford, Bret K.

    2017-01-01

    Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced-order model combined with successive transformations, which allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the structural design of the Common Research Model wing. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
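
    At its core, the problem is mass minimization subject to a flutter constraint evaluated through the aerodynamic model. A generic sketch under stated assumptions (all functions are hypothetical stand-ins; in the paper the constraint would come from the unsteady aerodynamic reduced-order model, not a toy surrogate):

        import numpy as np
        from scipy.optimize import minimize, NonlinearConstraint

        def wing_mass(x):
            return np.sum(x)                 # x: panel thicknesses (toy)

        def flutter_q(x):
            return 40.0 + 20.0 * np.mean(x)  # toy: stiffer design -> later flutter

        q_required = 50.0                     # dynamic pressure the design must clear
        con = NonlinearConstraint(flutter_q, q_required, np.inf)
        res = minimize(wing_mass, x0=np.full(4, 1.0), method="trust-constr",
                       constraints=[con], bounds=[(0.1, 2.0)] * 4)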

  18. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models that cannot be dealt with by a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
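
    A minimal stand-in for the treed-GP idea: a shallow decision tree partitions the input space and an independent scikit-learn GP is fit in each leaf, so a bifurcation can sit on a leaf boundary. The cited work uses a Bayesian treed GP, so this sketch is only illustrative.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.gaussian_process import GaussianProcessRegressor

        X = np.linspace(0, 1, 200).reshape(-1, 1)
        y = np.where(X[:, 0] < 0.5,                      # jump (bifurcation) at x = 0.5
                     np.sin(10 * X[:, 0]),
                     2.0 + np.sin(10 * X[:, 0]))

        tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)
        leaves = tree.apply(X)
        gps = {leaf: GaussianProcessRegressor().fit(X[leaves == leaf],
                                                    y[leaves == leaf])
               for leaf in np.unique(leaves)}

        def predict(X_new):
            # route each point to the GP of the leaf it falls in
            ids = tree.apply(X_new)
            return np.concatenate([gps[i].predict(x.reshape(1, -1))
                                   for i, x in zip(ids, X_new)])

        predict(np.array([[0.25], [0.75]]))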

  19. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion, fitting simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
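
    A Gaussian chirplet atom is described by a handful of parameters (center time, width, carrier frequency, chirp rate), which is what makes interpolation in parameter space tractable. A sketch of one common parameterization and a single projection step; the paper's exact dictionary and decomposition settings are assumptions here.

        import numpy as np

        def chirplet(t, tc, sigma, omega, c):
            # Gaussian envelope at tc, carrier frequency omega, chirp rate c
            tau = t - tc
            return (np.exp(-tau**2 / (2.0 * sigma**2))
                    * np.exp(1j * (omega * tau + 0.5 * c * tau**2)))

        t = np.linspace(0.0, 1e-5, 1000)                    # 10 us window
        g = chirplet(t, 5e-6, 1e-6, 2 * np.pi * 5e6, 0.0)   # 5 MHz atom
        g /= np.linalg.norm(g)
        signal = np.real(chirplet(t, 5e-6, 1e-6, 2 * np.pi * 5e6, 0.0))
        coeff = np.vdot(g, signal)   # one matching-pursuit-style projection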

  20. [Direct medical costs of hospital treatment of fractures of the upper extremity of the femur].

    PubMed

    El Ayoubi, Abdelghani; Bouhelo, Kevin Parfait Bienvenu; Chafik, Hachem; Nasri, Mohammed; El Idrissi, Mohammed; Shimi, Mohammed; El Ibrahimi, Abdelhalim; Elmrini, Abdelmajid

    2017-01-01

    Fractures of the upper extremity of the femur are serious because of their morbidity and their social and/or economic consequences. They have been the subject of several studies in the world literature concerning their hospital treatment, evolution and prevention. The increase in the incidence of this pathology seems unavoidable due to population ageing and lengthening life expectancy; it poses a real long-term public health problem whose importance will be further increased by the need to control health care costs. The results of this study show that the average age of onset of fracture of the proximal extremity of the femur is 68.13 ± 16.9 years, with a male predominance and a sex ratio of 1.14. In our study, pertrochanteric fractures represented 69.4% of cases. The direct medical costs of the hospital treatment of fractures of the upper extremity of the femur at the Hassan II University Hospital were £387,714.38 for 222 cases, with an average cost of £1,757.40, including the costs of patients' hospital stays, which represented the majority of expenses (77% of total costs). It is desirable to raise staff awareness of the costs of consumables in order to reduce treatment costs and to adopt cost-oriented behaviour.
