Sample records for minimal computational effort

  1. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
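
    The min-max formulation described above can be made concrete with a small linear-programming sketch (not the authors' implementation): a scalar t bounds every normalized deflection |u_i|/u_i,max, and t is minimized subject to the effectiveness equation B u = d. The matrix B, the command d_cmd, the limits u_max, and the use of SciPy's linprog in place of a hand-written simplex routine are all assumptions made for illustration.

    ```python
    # Illustrative sketch of normalized l-infinity ("resource balancing") control
    # allocation posed as a linear program.  B, d_cmd, and u_max are invented
    # values; the paper's own simplex formulation may differ in detail.
    import numpy as np
    from scipy.optimize import linprog

    B = np.array([[1.0, 0.8, -0.5, 0.3],     # control effectiveness matrix (assumed)
                  [0.2, -0.6, 0.9, 0.4]])
    d_cmd = np.array([0.7, -0.3])            # desired moment command (assumed)
    u_max = np.array([1.0, 0.5, 0.8, 1.2])   # actuator deflection limits (assumed)

    m, n = B.shape
    c = np.zeros(n + 1)                      # decision vector x = [u_1..u_n, t]
    c[-1] = 1.0                              # minimize t

    # |u_i| <= t * u_max_i  becomes two one-sided inequalities per actuator.
    A_ub = np.vstack([np.hstack([np.eye(n), -u_max[:, None]]),
                      np.hstack([-np.eye(n), -u_max[:, None]])])
    b_ub = np.zeros(2 * n)

    # Exact tracking B u = d_cmd (the l1 tracking-error term is omitted here).
    A_eq = np.hstack([B, np.zeros((m, 1))])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d_cmd,
                  bounds=[(None, None)] * n + [(0, None)])
    print("deflections:", res.x[:n])
    print("max normalized deflection:", res.x[-1])
    ```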

  2. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  3. Automated Lumber Processing

    Treesearch

    Powsiri Klinkhachorn; J. Moody; Philip A. Araman

    1995-01-01

    For the past few decades, researchers have devoted time and effort to apply automation and modern computer technologies towards improving the productivity of traditional industries. To be competitive, one must streamline operations and minimize production costs, while maintaining an acceptable margin of profit. This paper describes the effort of one such endeavor...

  4. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
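
    A minimal sketch of a direct, segment-averaged cross-bispectrum estimate follows, assuming a simple FFT-based estimator of B(k1, k2) = E[X(k1) Y(k2) Z*(k1+k2)]. The segment averaging stands in for the paper's rectangular averaging regions, the symmetry reductions are not reproduced, and the helper name cross_bispectrum and the segment length are illustrative choices.

    ```python
    # Minimal direct cross-bispectrum estimate by segment averaging (illustrative
    # only; the paper's rectangular averaging regions and symmetry relationships
    # are not reproduced here).
    import numpy as np

    def cross_bispectrum(x, y, z, seg_len=256):
        """Estimate B(k1, k2) = E[ X(k1) Y(k2) conj(Z(k1+k2)) ] over segments."""
        n_seg = min(len(x), len(y), len(z)) // seg_len
        half = seg_len // 2
        B = np.zeros((half, half), dtype=complex)
        for s in range(n_seg):
            sl = slice(s * seg_len, (s + 1) * seg_len)
            X, Y, Z = (np.fft.fft(v[sl]) for v in (x, y, z))
            for k1 in range(half):
                for k2 in range(half - k1):        # keep k1 + k2 inside the band
                    B[k1, k2] += X[k1] * Y[k2] * np.conj(Z[k1 + k2])
        return B / n_seg

    # Example with an exponentially distributed series, as in the paper's test case.
    rng = np.random.default_rng(0)
    e = rng.exponential(size=8192)
    B_hat = cross_bispectrum(e, e, e)   # auto-bispectrum as a special case
    ```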

  5. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
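
    For orientation only, the sketch below enumerates minimal link-failure cut sets of a tiny example network by brute force; the patented search algorithm generates them far more efficiently, directly from the connectivity diagram. The example topology and the connectivity check are assumptions.

    ```python
    # Brute-force enumeration of minimal (all-terminal) link cut sets on a small
    # example network.  Purely illustrative; the patented search algorithm avoids
    # this combinatorial enumeration.
    from itertools import combinations

    nodes = {1, 2, 3, 4}
    links = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]   # assumed example topology

    def connected(active_links):
        """All-terminal connectivity check using only the surviving links."""
        seen, stack = {1}, [1]
        while stack:
            u = stack.pop()
            for a, b in active_links:
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
                elif b == u and a not in seen:
                    seen.add(a)
                    stack.append(a)
        return seen == nodes

    minimal_cuts = []
    for k in range(1, len(links) + 1):
        for cut in combinations(links, k):
            if any(set(c) <= set(cut) for c in minimal_cuts):
                continue                  # already contains a smaller cut set
            if not connected([l for l in links if l not in cut]):
                minimal_cuts.append(cut)

    print(minimal_cuts)
    ```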

  6. A Study of Cooperative, Networking, and Computer Activities in Southwestern Libraries.

    ERIC Educational Resources Information Center

    Corbin, John

    The Southwestern Library Association (SWLA) conducted an inventory and study of the SWLA libraries in cooperative, network, and computer activities to collect data for use in planning future activities and in minimizing duplication of efforts. Questionnaires were mailed to 2,060 academic, public, and special libraries in the six SWLA states.…

  7. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collections of networked computers.) Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  8. Measuring Room Area or Volume Electronically

    NASA Technical Reports Server (NTRS)

    Kavaya, M. J.

    1987-01-01

    Area- and volume-measuring instrument hand-held or mounted on tripod. Instrument rapidly measures distances to walls, ceiling, or floor at many viewing angles and automatically computes area or volume of room. Results obtained rapidly with minimal effort.

  9. Computer Simulations: Inelegant Mathematics and Worse Social Science?

    ERIC Educational Resources Information Center

    Alker, Hayward R., Jr.

    1974-01-01

    Achievements, limitations, and difficulties of social science simulation efforts are discussed with particular reference to three examples. The pedagogical use of complementary developmental, philosophical, mathematical, and scientific approaches is advocated to minimize potential abuses of social simulation research. (LS)

  10. A neuronal model of a global workspace in effortful cognitive tasks.

    PubMed

    Dehaene, S; Kerszberg, M; Changeux, J P

    1998-11-24

    A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.

  11. Blading Design for Axial Turbomachines

    DTIC Science & Technology

    1989-05-01

    three-dimensional, viscous computation systems appear to have a long development period ahead, in which fluid shear stress modeling and computation time ... and n directions and T is the shear stress. As a consequence the solution time is longer than for integral methods, dependent largely on the accuracy of ... distributions over airfoils is an adaptation of thin plate deflection theory from stress analysis. At the same time, it minimizes designer effort

  12. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that don’t have representations which can be easily evaluated while minimizing the amount of computational resources. With the advances during the last twenty years of large scale computing centers, researchers have had the ability to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.

  13. Standardization and Optimization of Computed Tomography Protocols to Achieve Low-Dose

    PubMed Central

    Chin, Cynthia; Cody, Dianna D.; Gupta, Rajiv; Hess, Christopher P.; Kalra, Mannudeep K.; Kofler, James M.; Krishnam, Mayil S.; Einstein, Andrew J.

    2014-01-01

    The increase in radiation exposure due to CT scans has been of growing concern in recent years. CT scanners differ in their capabilities and various indications require unique protocols, but there remains room for standardization and optimization. In this paper we summarize approaches to reduce dose, as discussed in lectures comprising the first session of the 2013 UCSF Virtual Symposium on Radiation Safety in Computed Tomography. The experience of scanning at low dose in different body regions, for both diagnostic and interventional CT procedures, is addressed. An essential primary step is justifying the medical need for each scan. General guiding principles for reducing dose include tailoring a scan to a patient, minimizing scan length, use of tube current modulation and minimizing tube current, minimizing tube potential, iterative reconstruction, and periodic review of CT studies. Organized efforts for standardization have been spearheaded by professional societies such as the American Association of Physicists in Medicine. Finally, all team members should demonstrate an awareness of the importance of minimizing dose. PMID:24589403

  14. A Selective Role for Dopamine in Learning to Maximize Reward But Not to Minimize Effort: Evidence from Patients with Parkinson's Disease.

    PubMed

    Skvortsova, Vasilisa; Degos, Bertrand; Welter, Marie-Laure; Vidailhet, Marie; Pessiglione, Mathias

    2017-06-21

    Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of available options such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning the action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients ( n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients ( n = 20) using a simplified version of the task. According to Bayesian model selection, the best account for medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent from dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior. SIGNIFICANCE STATEMENT Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. Moreover, anti-parkinsonian medication restored the ability to maximize reward, but had no effect on effort minimization. This dissociation suggests that the brain has evolved separate, domain-specific systems for instrumental learning. These results help to disambiguate the motivational role of prodopaminergic medications: they amplify the impact of reward without affecting the integration of effort cost. Copyright © 2017 the authors 0270-6474/17/376087-11$15.00/0.
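
    A toy sketch of the model class favoured by the abstract's Bayesian comparison follows: a Q-learning agent in which a medication-dependent gain scales reward magnitude but leaves effort costs untouched. The task structure, all parameter values, and the run_agent helper are invented for illustration and are not the authors' fitted model.

    ```python
    # Toy Q-learning model in which dopaminergic state amplifies reward magnitude
    # but not effort cost -- the model class favoured by the abstract's Bayesian
    # comparison.  All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    def run_agent(reward_gain, n_trials=200, alpha=0.2, beta=3.0):
        Q = np.zeros(2)                       # two options
        p_good = np.array([0.8, 0.2])         # P(reward) per option (assumed)
        effort = np.array([0.0, 0.5])         # physical effort cost per option (assumed)
        correct = 0
        for _ in range(n_trials):
            p_choice = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice
            a = rng.choice(2, p=p_choice)
            r = float(rng.random() < p_good[a])
            outcome = reward_gain * r - effort[a]  # gain scales reward, not effort
            Q[a] += alpha * (outcome - Q[a])       # standard delta-rule update
            correct += (a == 0)
        return correct / n_trials

    print("ON  medication (gain=1.0):", run_agent(reward_gain=1.0))
    print("OFF medication (gain=0.4):", run_agent(reward_gain=0.4))
    ```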

  15. AstroGrid: Taverna in the Virtual Observatory .

    NASA Astrophysics Data System (ADS)

    Benson, K. M.; Walton, N. A.

    This paper reports on the implementation of the Taverna workbench by AstroGrid, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analyses over public and private data and requires no more computational resources than a desktop computer. Some integration issues and future work are discussed in this article.

  16. Quadratic Programming for Allocating Control Effort

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2005-01-01

    A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
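
    The quadratic cost described here, weighted control residuals plus weighted control effort under actuator bounds, can be posed as a bounded least-squares problem. The sketch below uses SciPy's lsq_linear purely as a stand-in and is not the NASA C implementation; B, d, the weights, and the limits are assumed values.

    ```python
    # Sketch of a quadratic-programming-style allocation: weighted control
    # residuals plus weighted control effort, with actuator bounds, posed as a
    # bounded least-squares problem.  Not the NASA C program; values are assumed.
    import numpy as np
    from scipy.optimize import lsq_linear

    B = np.array([[1.0, 0.8, -0.5],
                  [0.2, -0.6, 0.9]])         # control effectiveness (assumed)
    d = np.array([0.4, -0.2])                # commanded accelerations (assumed)
    w_res, w_eff = 10.0, 1.0                 # residual vs. effort weighting (assumed)

    # minimize || w_res*(B u - d) ||^2 + || w_eff * u ||^2   s.t.  |u_i| <= 1
    A = np.vstack([w_res * B, w_eff * np.eye(3)])
    b = np.concatenate([w_res * d, np.zeros(3)])
    res = lsq_linear(A, b, bounds=(-np.ones(3), np.ones(3)))
    print("allocated commands:", res.x)
    ```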

  17. Space shuttle low cost/risk avionics study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    All work breakdown structure elements containing any avionics related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipments to MIL quality standards, basing redundancy on cost effective analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.

  18. DoD's Efforts to Consolidate Data Centers Need Improvement

    DTIC Science & Technology

    2016-03-29

    Consolidation Initiative, February 26, 2010. Green IT minimizes the negative environmental impact of IT operations by ensuring that computers and computer-related ... objectives for consolidating data centers. DoD’s objectives were to: reduce cost; reduce environmental impact; improve efficiency and service levels ... number of DoD data centers ... information in DCIM, the DoD CIO did not confirm whether those changes would impact DoD’s

  19. Many Masses on One Stroke: Economic Computation of Quark Propagators

    NASA Astrophysics Data System (ADS)

    Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus

    The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ-D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive to BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.
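
    The multi-mass trick rests on the shift-invariance of Krylov subspaces: every hopping parameter κ yields the same matrix D shifted by a multiple of the identity, so a single Lanczos/QMR recurrence serves all masses. A sketch of the relation in notation matching the abstract (the QMR recurrence details are omitted):

    ```latex
    % Shifted-system structure behind the "many masses in one stroke" trick.
    \[
      \frac{M(\kappa)}{\kappa} \;=\; \frac{1}{\kappa}\,\mathbb{1} \;-\; D ,
      \qquad
      \mathcal{K}_m\!\Big(\tfrac{1}{\kappa}\mathbb{1} - D,\; b\Big)
      \;=\; \mathcal{K}_m(D,\, b) \quad \text{for every } \kappa ,
    \]
    % so one iteration history built from D and the source b yields propagators
    % for a whole trajectory of kappa values, at the price of storing the extra
    % solution vectors.
    ```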

  20. Going Green: The Power of the Individual

    ERIC Educational Resources Information Center

    Neugebauer, Adam

    2009-01-01

    In this article, the author describes wasteful office practices he encountered at work, such as minimal recycling efforts, computer equipment left on at night, kitchens stocked with disposable items, and others. What really bothered him was that no matter how passionate he was about trying to reduce his own "footprint" at work, it wasn't…

  1. An automated data collection system for a Charpy impact tester

    NASA Technical Reports Server (NTRS)

    Weigman, Bernard J.; Spiegel, F. Xavier

    1993-01-01

    A method for automated data collection has been developed for a Charpy impact tester. A potentiometer is connected to the pivot point of the hammer and measures the angular displacement of the hammer. This data is collected with a computer and, through appropriate software, accurately records the energy absorbed by the specimen. The device can be easily calibrated with minimal effort.

  2. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface of the exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [J. Chem. Theory Comput. 2016, 12, 787]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  3. A Comprehensive Review of Existing Risk Assessment Models in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Amini, Ahmad; Jamil, Norziana

    2018-05-01

    Cloud computing is a popular paradigm in information technology and computing as it offers numerous advantages in terms of economical saving and minimal management effort. Although elasticity and flexibility bring tremendous benefits, they still raise many information security issues due to the unique characteristics that allow ubiquitous computing. Therefore, the vulnerabilities and threats in cloud computing have to be identified, and a proper risk assessment mechanism has to be in place for better cloud computing management. Various quantitative and qualitative risk assessment models have been proposed but, to our knowledge, none of them is suitable for the cloud computing environment. In this paper, we compare and analyse the strengths and weaknesses of existing risk assessment models. We then propose a new risk assessment model that sufficiently addresses all the characteristics of cloud computing, which are not addressed in the existing models.

  4. Simulation of the Dropping Mercury Electrode by Orthogonal Collocation.

    DTIC Science & Technology

    1982-08-18

    … transport to a dropping mercury electrode … Accurate values for the concentration profiles and current are obtained with minimal computational effort … offered by Koutecky (14), which is corrected for spherical diffusion. Results accurate to 0.4% of Koutecky’s calculated values …

  5. The Multiple-Minima Problem in Protein Folding

    NASA Astrophysics Data System (ADS)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern-recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.

  6. Toward a Rational and Mechanistic Account of Mental Effort.

    PubMed

    Shenhav, Amitai; Musslick, Sebastian; Lieder, Falk; Kool, Wouter; Griffiths, Thomas L; Cohen, Jonathan D; Botvinick, Matthew M

    2017-07-25

    In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.

  7. Minimal-effort planning of active alignment processes for beam-shaping optics

    NASA Astrophysics Data System (ADS)

    Haag, Sebastian; Schranner, Matthias; Müller, Tobias; Zontar, Daniel; Schlette, Christian; Losch, Daniel; Brecher, Christian; Roßmann, Jürgen

    2015-03-01

    In science and industry, the alignment of beam-shaping optics is usually a manual procedure. Many industrial applications utilizing beam-shaping optical systems require more scalable production solutions and therefore effort has been invested in research regarding the automation of optics assembly. In previous works, the authors and other researchers have proven the feasibility of automated alignment of beam-shaping optics such as collimation lenses or homogenization optics. Nevertheless, the planning efforts as well as additional knowledge from the fields of automation and control required for such alignment processes are immense. This paper presents a novel approach of planning active alignment processes of beam-shaping optics with the focus of minimizing the planning efforts for active alignment. The approach utilizes optical simulation and the genetic programming paradigm from computer science for automatically extracting features from a simulated data basis with a high correlation coefficient regarding the individual degrees of freedom of alignment. The strategy is capable of finding active alignment strategies that can be executed by an automated assembly system. The paper presents a tool making the algorithm available to end-users and it discusses the results of planning the active alignment of the well-known assembly of a fast-axis collimator. The paper concludes with an outlook on the transferability to other use cases such as application specific intensity distributions which will benefit from reduced planning efforts.

  8. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
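
    The abstract does not define the normality index, so the sketch below uses one common departure-from-normality measure, the Frobenius norm of the commutator A Aᴴ − Aᴴ A, purely as an assumed stand-in; the paper's actual index may differ.

    ```python
    # One common departure-from-normality measure: the Frobenius norm of the
    # commutator A A^H - A^H A (zero iff A is normal).  The paper's precise index
    # is not specified in the abstract, so this is only an assumed stand-in.
    import numpy as np

    def normality_index(A):
        A = np.asarray(A, dtype=complex)
        return np.linalg.norm(A @ A.conj().T - A.conj().T @ A, ord='fro')

    print(normality_index(np.diag([1.0, 2.0, 3.0])))      # normal matrix -> 0.0
    print(normality_index(np.array([[1.0, 5.0],
                                    [0.0, 2.0]])))        # non-normal -> > 0
    ```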

  9. Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J. S.; Pollicove, H.

    The project combined the research and development efforts of LLNL and the University of Rochester Center for Manufacturing Optics (COM), to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high quality surfaces that minimize non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.

  10. Optimal feedback control of turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz

    1993-01-01

    Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional' which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional some time shortly in the future based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.

  11. Parallelized modelling and solution scheme for hierarchically scaled simulations

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1995-01-01

    This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to finding large reductions in memory, communications, and computational effort associated with a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.

  12. Biologically inspired robots elicit a robust fear response in zebrafish

    NASA Astrophysics Data System (ADS)

    Ladu, Fabrizio; Bartolini, Tiziana; Panitz, Sarah G.; Butail, Sachit; Macrì, Simone; Porfiri, Maurizio

    2015-03-01

    We investigate the behavioral response of zebrafish to three fear-evoking stimuli. In a binary choice test, zebrafish are exposed to a live allopatric predator, a biologically-inspired robot, and a computer-animated image of the live predator. A target tracking algorithm is developed to score zebrafish behavior. Unlike computer-animated images, the robotic and live predator elicit a robust avoidance response. Importantly, the robotic stimulus elicits more consistent inter-individual responses than the live predator. Results from this effort are expected to aid in hypothesis-driven studies on zebrafish fear response, by offering a valuable approach to maximize data-throughput and minimize animal subjects.

  13. Application of quadratic optimization to supersonic inlet control.

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1972-01-01

    This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake, the inlet, of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.

  14. Image-based ranging and guidance for rotorcraft

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1991-01-01

    This report documents the research carried out under NASA Cooperative Agreement No. NCC2-575 during the period Oct. 1988 - Dec. 1991. Primary emphasis of this effort was on the development of vision-based navigation methods for the rotorcraft nap-of-the-earth flight regime. A family of field-based ranging algorithms was developed during this research period. These ranging schemes are capable of handling both stereo and motion image sequences, and permit both translational and rotational camera motion. The algorithms require minimal computational effort and appear to be implementable in real time. A series of papers were presented on these ranging schemes, some of which are included in this report. A small part of the research effort was expended on synthesizing a rotorcraft guidance law that directly uses the vision-based ranging data. This work is discussed in the last section.

  15. Virtual reality neurosurgery: a simulator blueprint.

    PubMed

    Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J

    2004-04-01

    This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.

  16. Adaptive allocation of decisionmaking responsibility between human and computer in multitask situations

    NASA Technical Reports Server (NTRS)

    Chu, Y.-Y.; Rouse, W. B.

    1979-01-01

    As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
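
    A time-stepped sketch of such a threshold policy is given below: tasks arrive at random, the human serves the queue, and the computer backup is engaged only while the queue length exceeds a threshold, trading mean waiting against the fraction of time the computer holds responsibility. All rates, threshold values, and the Bernoulli service model are invented for illustration and are not the paper's queueing formulation.

    ```python
    # Time-stepped sketch of a threshold policy for adaptive human/computer task
    # allocation: the computer backup is engaged while the queue exceeds a
    # threshold.  Rates, thresholds, and the service model are assumed.
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(threshold, p_arrival=0.5, p_human=0.4, p_computer=0.4, steps=100_000):
        queue, waiting, computer_on = 0, 0, 0
        for _ in range(steps):
            queue += rng.random() < p_arrival          # a new task may arrive
            if queue and rng.random() < p_human:       # human completes a task
                queue -= 1
            if queue > threshold:                      # engage the computer backup
                computer_on += 1
                if rng.random() < p_computer:
                    queue -= 1
            waiting += queue                           # tally queue length over time
        return waiting / steps, computer_on / steps

    for thr in (0, 2, 5):
        mean_q, busy = simulate(thr)
        print(f"threshold={thr}: mean queue={mean_q:.2f}, computer engaged {busy:.0%}")
    ```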

  17. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  18. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.

  19. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on-the-fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf-sup-stable Stokes discretizations and mixed finite element discretizations.
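
    As a minimal illustration of the stencil idea, applying a 5-point Laplacian on the fly preserves the grid structure and stores only the stencil weights, never an assembled matrix. The sketch below is generic NumPy and is not the hypre or DUNE stencil interface.

    ```python
    # Minimal on-the-fly 5-point Laplacian stencil (2-D Poisson operator): the
    # matrix is never stored, only the stencil weights.  Illustrative only; this
    # is not the hypre or DUNE stencil interface.
    import numpy as np

    def apply_laplacian(u, h=1.0):
        """Apply the 5-point stencil to the interior of a 2-D grid function."""
        out = np.zeros_like(u)
        out[1:-1, 1:-1] = (4.0 * u[1:-1, 1:-1]
                           - u[:-2, 1:-1] - u[2:, 1:-1]
                           - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
        return out

    u = np.fromfunction(lambda i, j: np.sin(0.1 * i) * np.cos(0.1 * j), (64, 64))
    print(apply_laplacian(u).shape)
    ```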

  20. Behavior-based multi-robot collaboration for autonomous construction tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.

  1. Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.

  2. Ant colony system algorithm for the optimization of beer fermentation control.

    PubMed

    Xiao, Jie; Zhou, Ze-Kui; Zhang, Guang-Xin

    2004-12-01

    Beer fermentation is a dynamic process that must be guided along a temperature profile to obtain the desired results. The ant colony system algorithm was applied to optimize the kinetic model of this process. During a fixed period of fermentation time, a series of different temperature profiles of the mixture were constructed, and an optimal one was then chosen. The optimal temperature profile maximized the final ethanol production and minimized the byproduct concentration and spoilage risk. The satisfactory results obtained did not require much computational effort.

  3. Assessment of Chair-side Computer-Aided Design and Computer-Aided Manufacturing Restorations: A Review of the Literature

    PubMed Central

    Baroudi, Kusai; Ibraheem, Shukran Nasser

    2015-01-01

    Background: This paper aimed to evaluate the application of computer-aided design and computer-aided manufacturing (CAD-CAM) technology and the factors that affect the survival of restorations. Materials and Methods: A thorough literature search using PubMed, Medline, Embase, Science Direct, Wiley Online Library and Grey literature was performed from the year 2004 up to June 2014. Only relevant research was considered. Results: The use of chair-side CAD/CAM systems is promising in all dental branches in terms of minimizing the time and effort made by dentists, technicians and patients for restoring and maintaining patient oral function and aesthetics, while providing a high quality outcome. Conclusion: The way of producing and placing restorations made with the chair-side CAD/CAM (CEREC and E4D) devices is better than that of restorations made by conventional laboratory procedures. PMID:25954082

  4. Alignment theory of parallel-beam computed tomography image reconstruction for elastic-type objects using virtual focusing method.

    PubMed

    Jun, Kyungtaek; Kim, Dongwook

    2018-01-01

    X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution to reconstructing the projection image set obtained from an object with specific elastic motions (periodically, regularly, and elliptically expanded or contracted specimens) is proposed. To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.

  5. Image registration assessment in radiotherapy image guidance based on control chart monitoring.

    PubMed

    Xia, Wenyao; Breen, Stephen L

    2018-04-01

    Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators need to take great effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort taken by the operator. According to the control chart plotted by daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. Therefore, the proposed method can provide a clear guideline for the operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results demonstrate that by using control charts from a clinical database of 10 patients undergoing prostate radiotherapy, the proposed method can quickly identify out-of-control signals and find special cause of out-of-control registration events.
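
    A Shewhart-style individuals chart on daily registration scores, with limits at the baseline mean plus or minus three standard deviations, is sketched below. The abstract does not state the exact chart type or scoring metric, so the synthetic scores and the 3-sigma rule are assumptions.

    ```python
    # Shewhart-style individuals chart on daily registration scores: flag any day
    # outside mean +/- 3 sigma of an in-control baseline.  The paper's exact chart
    # type and scoring metric are not stated; scores here are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    baseline = rng.normal(loc=0.85, scale=0.03, size=20)          # in-control scores
    daily = np.concatenate([rng.normal(0.85, 0.03, 15), [0.62]])  # one poor registration

    center = baseline.mean()
    sigma = baseline.std(ddof=1)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    for day, score in enumerate(daily, start=1):
        if not (lcl <= score <= ucl):
            print(f"day {day}: score {score:.2f} out of control "
                  f"(limits {lcl:.2f}-{ucl:.2f})")
    ```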

  6. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  7. Image-guided interventions and computer-integrated therapy: Quo vadis?

    PubMed

    Peters, Terry M; Linte, Cristian A

    2016-10-01

    Significant efforts have been dedicated to minimizing invasiveness associated with surgical interventions, most of which have been possible thanks to the developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, other than some isolated examples of successful implementations, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a large-scale overview of the state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may potentially advance the field. Copyright © 2016. Published by Elsevier B.V.

  8. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, K; Kagadis, G; Xing, L

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: (1) Understand basic concepts of cloud computing. (2) Understand how cloud computing could be used for medical imaging applications. (3) Understand how cloud computing could be employed for radiotherapy research. (4) Understand how clinical radiotherapy software applications would function in the cloud.

  9. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.

  10. An Integrated Unix-based CAD System for the Design and Testing of Custom VLSI Chips

    NASA Technical Reports Server (NTRS)

    Deutsch, L. J.

    1985-01-01

    A computer aided design (CAD) system that is being used at the Jet Propulsion Laboratory for the design of custom and semicustom very large scale integrated (VLSI) chips is described. The system consists of a Digital Equipment Corporation VAX computer with the UNIX operating system and a collection of software tools for the layout, simulation, and verification of microcircuits. Most of these tools were written by the academic community and are, therefore, available to JPL at little or no cost. Some small pieces of software have been written in-house in order to make all the tools interact with each other with a minimal amount of effort on the part of the designer.

  11. Fan noise prediction assessment

    NASA Technical Reports Server (NTRS)

    Bent, Paul H.

    1995-01-01

    This report is an evaluation of two techniques for predicting the fan noise radiation from engine nacelles. The first is a relatively computational intensive finite element technique. The code is named ARC, an abbreviation of Acoustic Radiation Code, and was developed by Eversman. This is actually a suite of software that first generates a grid around the nacelle, then solves for the potential flowfield, and finally solves the acoustic radiation problem. The second approach is an analytical technique requiring minimal computational effort. This is termed the cutoff ratio technique and was developed by Rice. Details of the duct geometry, such as the hub-to-tip ratio and Mach number of the flow in the duct, and modal content of the duct noise are required for proper prediction.

  12. META-GLARE: a shell for CIG systems.

    PubMed

    Bottrighi, Alessio; Rubrichi, Stefania; Terenziani, Paolo

    2015-01-01

    In the last twenty years, many different approaches to deal with Computer-Interpretable clinical Guidelines (CIGs) have been developed, each one proposing its own representation formalism (mostly based on the Task-Network Model) and execution engine. We propose META-GLARE, a shell for easily defining new CIG systems. Using META-GLARE, CIG system designers can easily define their own systems (basically by defining their representation language), with a minimal programming effort. META-GLARE is thus a flexible and powerful vehicle for research about CIGs, since it supports easy and fast prototyping of new CIG systems.

  13. Multicore job scheduling in the Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Forti, A.; Pérez-Calero Yzquierdo, A.; Hartmann, T.; Alef, M.; Lahiff, A.; Templon, J.; Dal Pra, S.; Gila, M.; Skipsey, S.; Acosta-Silva, C.; Filipcic, A.; Walker, R.; Walker, C. J.; Traynor, D.; Gadrat, S.

    2015-12-01

    After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force has been created in order to coordinate the joint effort from experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexities in the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments and the knowledge gained from scale tests of the different proposed job submission strategies.

  14. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  15. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
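    For context, the criterion referred to above is the standard Wahba loss; a commonly quoted form (with unit reference vectors \hat{r}_i, measured body-frame directions \hat{b}_i, non-negative weights w_i, and attitude matrix A) is

      L(A) = \tfrac{1}{2} \sum_i w_i \,\lVert \hat{b}_i - A\,\hat{r}_i \rVert^2 ,

    and QUEST-type algorithms minimize L(A) over proper rotation matrices, usually parameterized by the attitude quaternion.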

  16. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  17. Impact of scaffold rigidity on the design and evolution of an artificial Diels-Alderase

    PubMed Central

    Preiswerk, Nathalie; Beck, Tobias; Schulz, Jessica D.; Milovník, Peter; Mayer, Clemens; Siegel, Justin B.; Baker, David; Hilvert, Donald

    2014-01-01

    By combining targeted mutagenesis, computational refinement, and directed evolution, a modestly active, computationally designed Diels-Alderase was converted into the most proficient biocatalyst for [4+2] cycloadditions known. The high stereoselectivity and minimal product inhibition of the evolved enzyme enabled preparative scale synthesis of a single product diastereomer. X-ray crystallography of the enzyme–product complex shows that the molecular changes introduced over the course of optimization, including addition of a lid structure, gradually reshaped the pocket for more effective substrate preorganization and transition state stabilization. The good overall agreement between the experimental structure and the original design model with respect to the orientations of both the bound product and the catalytic side chains contrasts with other computationally designed enzymes. Because design accuracy appears to correlate with scaffold rigidity, improved control over backbone conformation will likely be the key to future efforts to design more efficient enzymes for diverse chemical reactions. PMID:24847076

  18. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  19. Engineered Structured Sorbents for the Adsorption of Carbon Dioxide and Water Vapor from Manned Spacecraft Atmospheres: Applications and Modeling 2007/2008

    NASA Technical Reports Server (NTRS)

    Knox, James C.; Howard, David F.; Perry, Jay L.

    2007-01-01

    In NASA's Vision for Space Exploration, humans will once again travel beyond the confines of earth's gravity, this time to remain there for extended periods. These forays will place unprecedented demands on launch systems. They must not only blast out of earth's gravity well as during the Apollo moon missions, but also launch the supplies needed to sustain a larger crew over much longer periods. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. This paper describes efforts to improve on typical packed beds of sorbent pellets by making use of structured sorbents and alternate bed configurations to improve system efficiency and reliability. The development efforts described offer a complementary approach combining testing of subscale systems and multiphysics computer simulations to characterize the regenerative heating substrates and evaluate engineered structured sorbent geometries. Mass transfer, heat transfer, and fluid dynamics are included in the transient simulations.

  20. Mechanical effort predicts the selection of ankle over hip strategies in nonstepping postural responses

    PubMed Central

    Jonkers, Ilse; De Schutter, Joris; De Groote, Friedl

    2016-01-01

    Experimental studies have shown that a continuum of ankle and hip strategies is used to restore posture following an external perturbation. Postural responses can be modeled by feedback control with feedback gains that optimize a specific objective. On the one hand, feedback gains that minimize effort have been used to predict muscle activity during perturbed standing. On the other hand, hip and ankle strategies have been predicted by minimizing postural instability and deviation from upright posture. It remains unclear, however, whether and how effort minimization influences the selection of a specific postural response. We hypothesize that the relative importance of minimizing mechanical work vs. postural instability influences the strategy used to restore upright posture. This hypothesis was investigated based on experiments and predictive simulations of the postural response following a backward support surface translation. Peak hip flexion angle was significantly correlated with three experimentally determined measures of effort, i.e., mechanical work, mean muscle activity and metabolic energy. Furthermore, a continuum of ankle and hip strategies was predicted in simulation when changing the relative importance of minimizing mechanical work and postural instability, with increased weighting of mechanical work resulting in an ankle strategy. In conclusion, the combination of experimental measurements and predictive simulations of the postural response to a backward support surface translation showed that the trade-off between effort and postural instability minimization can explain the selection of a specific postural response in the continuum of potential ankle and hip strategies. PMID:27489362
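    Schematically (this rendering is illustrative, not the authors' exact cost function), the simulated trade-off can be read as minimizing a weighted objective of the form

      J(\alpha) = \alpha\, W_{\mathrm{mech}} + (1-\alpha)\, C_{\mathrm{instab}}, \qquad 0 \le \alpha \le 1,

    where W_mech stands for mechanical work and C_instab for the postural-instability term; sweeping the weight \alpha traces out a continuum of responses, with larger \alpha (more weight on mechanical work) favouring the ankle strategy reported above.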

  1. Titian: Data Provenance Support in Spark

    PubMed Central

    Interlandi, Matteo; Shah, Kshitij; Tetali, Sai Deep; Gulzar, Muhammad Ali; Yoo, Seunghyun; Kim, Miryung; Millstein, Todd; Condie, Tyson

    2015-01-01

    Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today’s DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance—tracking data through transformations—in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds—orders-of-magnitude faster than alternative solutions—while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time. PMID:26726305

  2. Waste minimization/pollution prevention study of high-priority waste streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogle, R.B.

    1994-03-01

    Although waste minimization has been practiced by the Metals and Ceramics (M&C) Division in the past, the effort has not been uniform or formalized. To establish the groundwork for continuous improvement, the Division Director initiated a more formalized waste minimization and pollution prevention program. Formalization of the division's pollution prevention efforts in fiscal year (FY) 1993 was initiated by a more concerted effort to determine the status of waste generation from division activities. The goal for this effort was to reduce or minimize the wastes identified as having the greatest impact on human health, the environment, and costs. Two broad categories of division wastes were identified as solid/liquid wastes and those relating to energy use (primarily electricity and steam). This report presents information on the nonradioactive solid and liquid wastes generated by division activities. More specifically, the information presented was generated by teams of M&C staff members empowered by the Division Director to study specific waste streams.

  3. Computer Based Procedures for Field Workers - FY16 Research Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to commercialize INL’s CBP system.

  4. Microarthroscopy System With Image Processing Technology Developed for Minimally Invasive Surgery

    NASA Technical Reports Server (NTRS)

    Steele, Gynelle C.

    2001-01-01

    In a joint effort, NASA, Micro Medical Devices, and the Cleveland Clinic have developed a microarthroscopy system with digital image processing. This system consists of a disposable endoscope the size of a needle that is aimed at expanding the use of minimally invasive surgery on the knee, ankle, and other small joints. This device not only allows surgeons to make smaller incisions (by improving the clarity and brightness of images), but it gives them a better view of the injured area to make more accurate diagnoses. Because of its small size, the endoscope helps reduce physical trauma and speeds patient recovery. The faster recovery rate also makes the system cost effective for patients. The digital image processing software used with the device was originally developed by the NASA Glenn Research Center to conduct computer simulations of satellite positioning in space. It was later modified to reflect lessons learned in enhancing photographic images in support of the Center's microgravity program. Glenn's Photovoltaic Branch and Graphics and Visualization Lab (G-VIS) computer programmers and software developers enhanced and sped up graphic imaging for this application. Mary Vickerman at Glenn developed algorithms that enabled Micro Medical Devices to eliminate interference and improve the images.

  5. Complex space monofilar approximation of diffraction currents on a conducting half plane

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    Simple approximation of the diffraction surface currents on a conducting half plane, due to an incoming plane wave, is obtained with a monofilar line current in complex space. When compared to an approximating current at the edge, the diffraction pattern is seen to improve by an order of magnitude for a minimal increase in computational effort. Thus, the inconvenient Fresnel integral functions can be avoided for quick calculations of diffracted fields, and the accuracy is good in directions other than along the half plane. The method can be applied to general problems involving planar metal edges.

  6. Simulation of High-Beta Plasma Confinement

    NASA Astrophysics Data System (ADS)

    Font, Gabriel; Welch, Dale; Mitchell, Robert; McGuire, Thomas

    2017-10-01

    The Lockheed Martin Compact Fusion Reactor concept utilizes magnetic cusps to confine the plasma. In order to minimize losses through the axial and ring cusps, the plasma is pushed to a high-beta state. Simulations were made of the plasma and magnetic field system in an effort to quantify particle confinement times and plasma behavior characteristics. Computations are carried out with LSP using implicit PIC methods. Simulations of different sub-scale geometries at high-Beta fusion conditions are used to determine particle loss scaling with reactor size, plasma conditions, and gyro radii. ©2017 Lockheed Martin Corporation. All Rights Reserved.

  7. Testing of materials for passive thermal control of space suits

    NASA Technical Reports Server (NTRS)

    Squire, Bernadette

    1988-01-01

    An effort is underway to determine the coating material of choice for the AX-5 prototype hard space suit. Samples of 6061 aluminum have been coated with one of 10 selected metal coatings, and subjected to corrosion, abrasion, and thermal testing. Changes in reflectance after exposure are documented. Plated gold exhibited minimal degradation of optical properties. A computer model is used in evaluating coating thermal performance in the EVA environment. The model is verified with an experiment designed to measure the heat transfer characteristics of coated space suit parts in a thermal vacuum chamber. Details of this experiment are presented.

  8. Accounting and Accountability for Distributed and Grid Systems

    NASA Technical Reports Server (NTRS)

    Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.

    2001-01-01

    While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.

  9. DEGAS: sharing and tracking target compound ideas with external collaborators.

    PubMed

    Lee, Man-Ling; Aliagas, Ignacio; Dotson, Jennafer; Feng, Jianwen A; Gobbi, Alberto; Heffron, Timothy

    2012-02-27

    To minimize the risk of failure in clinical trials, drug discovery teams must propose active and selective clinical candidates with good physicochemical properties. An additional challenge is that today drug discovery is often conducted by teams at different geographical locations. To improve the collaborative decision making on which compounds to synthesize, we have implemented DEGAS, an application which enables scientists from Genentech and from collaborating external partners to instantly access the same data. DEGAS was implemented to ensure that only the best target compounds are made and that they are made without duplicate effort. Physicochemical properties and DMPK model predictions are computed for each compound to allow the team to make informed decisions when prioritizing. The synthesis progress can be easily tracked. While developing DEGAS, ease of use was a particular goal in order to minimize the difficulty of training and supporting remote users.

  10. Latest trends in craniomaxillofacial surgical instrumentation.

    PubMed

    Yim, Michael; Demke, Joshua

    2012-08-01

    To review the past year's literature regarding recent innovations in surgical instrumentation for craniomaxillofacial surgery. Current advances in surgical instrumentation have led to many improvements in the field, allowing greater visualization and precision both before and during procedures. One of the common goals is to achieve excellent outcomes with minimal complications, while at the same time minimizing invasiveness of surgery. Highlighted innovations include greater capacities for acquisition of data, leading to improved imaging modalities and expansion of computer-assisted surgical techniques; continued developments in biomaterials used in various reconstructions; and novel uses of bone cutting and bone fixation instrumentation. Technology in the field of craniomaxillofacial surgery is developing rapidly, leading to novel instrumentation being utilized across a broad spectrum of areas. Published data have been encouraging to date, indicating an ever increasing adaptation of these innovations in clinical practice. Future efforts need to focus on cost-benefit analysis and constructing larger-scale studies to better understand effectiveness and patient outcomes.

  11. 7 CFR 3430.36 - Procedures to minimize or eliminate duplication of effort.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Procedures to minimize or eliminate duplication of effort. 3430.36 Section 3430.36 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA FEDERAL ASSISTANCE...

  12. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  13. Usability analysis of 2D graphics software for designing technical clothing.

    PubMed

    Teodoroski, Rita de Cassia Clark; Espíndola, Edilene Zilma; Silva, Enéias; Moro, Antônio Renato Pereira; Pereira, Vera Lucia D V

    2012-01-01

    With the advent of technology, the computer became a working tool increasingly present in companies. Its purpose is to increase production and reduce the errors inherent in manual production. The aim of this study was to analyze the usability of 2D graphics software used by a professional to create clothing designs during his work. The movements of the mouse, keyboard and graphical tools were monitored in real time by the software Camtasia 7® installed on the user's computer. To register the use of the mouse and keyboard, we used auxiliary software called MouseMeter®, which quantifies the number of presses of the right, middle and left mouse buttons and of the keyboard, as well as the distance traveled in meters by the cursor on the screen. Data were collected in periods of 15 minutes, 1 hour and 8 hours, consecutively. The results showed that the job is repetitive and demands high physical effort, which can lead to repetitive strain injuries. Thus, to minimize operator effort and thereby enhance the usability of the examined tool, it becomes imperative to replace the mouse with a device called a tablet, which also offers an electronic pen and a drawing platform for design development.

  14. Star Identification Without Attitude Knowledge: Testing with X-Ray Timing Experiment Data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor

    1997-01-01

    As the budget for the scientific exploration of space shrinks, the need for more autonomous spacecraft increases. For a spacecraft with a star tracker, the ability to determine attitude autonomously from a lost-in-space state requires the capability to identify the stars in the field of view of the tracker. Although there have been efforts to produce autonomous star trackers which perform this function internally, many programs cannot afford these sensors. The author previously presented a method for identifying stars without a priori attitude knowledge, specifically targeted for onboard computers as it minimizes the necessary computer storage. The method has previously been tested with simulated data. This paper provides results of star identification without a priori attitude knowledge using flight data from two 8 by 8 degree charge-coupled device star trackers onboard the X-Ray Timing Experiment.

  15. Computational methods for yeast prion curing curves.

    PubMed

    Ridout, Martin S

    2008-10-01

    If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
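    As a hedged illustration of the transform-inversion approach described above (a generic numerical sketch in Python, not the author's Matlab code), the snippet below applies the classical Gaver-Stehfest inversion; the test transform F(s) = 1/(s + 1), whose exact inverse is exp(-t), stands in for a model-specific curing-curve transform.

      import math

      def stehfest_coefficients(n):
          """Gaver-Stehfest weights V_k for an even number of terms n."""
          assert n % 2 == 0, "n must be even"
          half = n // 2
          v = []
          for k in range(1, n + 1):
              s = 0.0
              for j in range((k + 1) // 2, min(k, half) + 1):
                  s += (j ** half * math.factorial(2 * j)
                        / (math.factorial(half - j) * math.factorial(j)
                           * math.factorial(j - 1) * math.factorial(k - j)
                           * math.factorial(2 * j - k)))
              v.append((-1) ** (k + half) * s)
          return v

      def invert_laplace(F, t, n=14):
          """Approximate f(t) from its Laplace transform F(s)."""
          a = math.log(2.0) / t
          v = stehfest_coefficients(n)
          return a * sum(v[k - 1] * F(k * a) for k in range(1, n + 1))

      # Sanity check: F(s) = 1/(s + 1) inverts to f(t) = exp(-t).
      for t in (0.5, 1.0, 2.0):
          print(t, invert_laplace(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))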

  16. Off-Body Boundary-Layer Measurement Techniques Development for Supersonic Low-Disturbance Flows

    NASA Technical Reports Server (NTRS)

    Owens, Lewis R.; Kegerise, Michael A.; Wilkinson, Stephen P.

    2011-01-01

    Investigations were performed to develop accurate boundary-layer measurement techniques in a Mach 3.5 laminar boundary layer on a 7° half-angle cone at 0° angle of attack. A discussion of the measurement challenges is presented as well as how each was addressed. A computational study was performed to minimize the probe aerodynamic interference effects, resulting in improved pitot and hot-wire probe designs. Probe calibration and positioning processes were also developed with the goal of reducing the measurement uncertainties from 10% levels to less than 5% levels. Efforts were made to define the experimental boundary conditions for the cone flow so comparisons could be made with a set of companion computational simulations. The development status of the mean and dynamic boundary-layer flow measurements for a nominally sharp cone in a low-disturbance supersonic flow is presented.

  17. Dynamic Resource Allocation in Disaster Response: Tradeoffs in Wildfire Suppression

    PubMed Central

    Petrovic, Nada; Alderson, David L.; Carlson, Jean M.

    2012-01-01

    Challenges associated with the allocation of limited resources to mitigate the impact of natural disasters inspire fundamentally new theoretical questions for dynamic decision making in coupled human and natural systems. Wildfires are one of several types of disaster phenomena, including oil spills and disease epidemics, where (1) the disaster evolves on the same timescale as the response effort, and (2) delays in response can lead to increased disaster severity and thus greater demand for resources. We introduce a minimal stochastic process to represent wildfire progression that nonetheless accurately captures the heavy tailed statistical distribution of fire sizes observed in nature. We then couple this model for fire spread to a series of response models that isolate fundamental tradeoffs both in the strength and timing of response and also in division of limited resources across multiple competing suppression efforts. Using this framework, we compute optimal strategies for decision making scenarios that arise in fire response policy. PMID:22514605

  18. Exploratory procedures with carbon nanotube-based sensors for propellant degradation determinations

    NASA Astrophysics Data System (ADS)

    Ruffin, Paul B.; Edwards, Eugene; Brantley, Christina; McDonald, Brian

    2010-04-01

    Exploratory research is conducted at the US Army Aviation & Missile Research, Development, and Engineering Center (AMRDEC) in order to perform assessments of the degradation of solid propellant used in rocket motors. Efforts are made to discontinue and/or minimize destructive methods and utilize nondestructive techniques to assure the quality and reliability of the weaponry's propulsion system. Collaborative efforts were successfully made between AMRDEC and NASA-Ames for potential add-on configurations to a previously designed sensor that AMRDEC plans to use for preliminary detection of off-gassing. Evaluations were made in order to use the design as the introductory component for the determination of the shelf-life degradation rate of rocket motors. Previous and subsequent sensor designs utilize functionalized single-walled carbon nano-tubes (SWCNTs) as the key sensing element. On-going research is conducted to consider key changes that can be implemented (for the existing sensor design) such that a complete wireless sensor system design can be realized. The results should provide a cost-saving and timely approach to enhancing the Army's ability to develop methodologies for measuring weaponry off-gassing and simultaneously detecting explosives. Expectations are for the resulting sensors to enhance the warfighters' ability to simultaneously detect a greater variety of analytes. Outlined in this paper are the preliminary results that have been accomplished for this research. The behavior of the SWCNT sensor at storage temperatures is outlined, along with the initial sensor response to propellant-related analytes. Preparatory computer-based programming routines and computer-controlled instrumentation scenarios have been developed in order to subsequently minimize subjective interpretation of test results and provide a means for obtaining data that is reasonable and repetitively quantitative. Typical laboratory evaluation methods are likewise presented, and program limitations/barriers are outlined.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobel, R.

    TRUMP is a general finite difference computer program for the solution of transient and steady state heat transfer problems. It is a very general program capable of solving heat transfer problems in one, two or three dimensions for plane, cylindrical or spherical geometry. Because of the variety of possible geometries, the effort required to describe the geometry can be large. GIFT was written to minimize this effort for one-dimensional heat flow problems. After describing the inner and outer boundaries of a region made of a single material along with the modes of heat transfer which thermally connect different regions, GIFT will calculate all the geometric data (BLOCK 04) and thermal network data (BLOCK 05) required by TRUMP for one-dimensional problems. The heat transfer between layers (or shells) of a material may be by conduction or radiation; also, an interface resistance between layers can be specified. Convection between layers can be accounted for by use of an effective thermal conductivity in which the convection effect is included or by a thermal conductance coefficient. GIFT was written for the Sigma 7 computer, a small digital computer with a versatile graphic display system. This system makes it possible to input the desired data in a question and answer mode and to see both the input and the output displayed on a screen in front of the user at all times. (auth)
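    To give a flavour of the one-dimensional thermal-network problems that GIFT prepares for TRUMP, the sketch below advances a small chain of nodes with an explicit finite-difference update; it is a generic Python illustration with made-up capacitances, conductances, and boundary treatment, not the TRUMP/GIFT input format.

      import numpy as np

      # Illustrative 1-D thermal network: five layers in series.
      n = 5
      C = np.full(n, 500.0)        # nodal heat capacities [J/K] (illustrative)
      G = np.full(n - 1, 20.0)     # conductances between adjacent nodes [W/K]
      G[2] = 5.0                   # a smaller value mimics an interface resistance
      T = np.full(n, 20.0)         # initial temperatures [degC]
      T_left = 100.0               # fixed temperature driving node 0 (illustrative)
      G_bound = 40.0               # boundary conductance to node 0 [W/K]

      dt, t_end = 1.0, 600.0       # explicit time step and total time [s]
      for _ in range(int(t_end / dt)):
          q = np.zeros(n)
          q[0] += G_bound * (T_left - T[0])   # heat entering from the boundary
          q[:-1] += G * (T[1:] - T[:-1])      # flow from each right-hand neighbour
          q[1:] += G * (T[:-1] - T[1:])       # flow from each left-hand neighbour
          T += dt * q / C                     # forward-Euler temperature update

      print(np.round(T, 2))                   # temperatures after 10 minutes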

  20. New Way to Break Down Barriers to Higher Education: Build "Financial Capabilities"

    ERIC Educational Resources Information Center

    Savage, Sarah; Graves, Erin M.

    2015-01-01

    Community colleges have traditionally responded to the financial needs of their students by removing or minimizing financial barriers to attending. Efforts to make community college tuition free fit with this philosophy, but where efforts to minimize or remove financial barriers to attending community college fall short is in empowering students…

  1. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  2. Instrumentino: An Open-Source Software for Scientific Instruments.

    PubMed

    Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C

    2015-01-01

    Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.

  3. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms are replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  4. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied on abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
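    As a hedged, self-contained sketch of the general idea (tuning the parameters of a differential equation so that its simulated behaviour matches a target), the Python example below fits the stiffness and damping of a simple oscillator with a derivative-free optimizer; the model, loss, and optimizer choice are illustrative and not the authors' setup.

      import numpy as np
      from scipy.optimize import minimize

      def simulate(params, dt=0.01, steps=1000):
          """Simple explicit integration of a damped oscillator x'' = -k*x - c*x'."""
          k, c = params
          x, v = 1.0, 0.0
          traj = np.empty(steps)
          for i in range(steps):
              v += dt * (-k * x - c * v)
              x += dt * v
              traj[i] = x
          return traj

      # Target behaviour produced by "true" but nominally unknown parameters.
      target = simulate((4.0, 0.5))

      def loss(params):
          """Mean squared deviation between simulated and target trajectories."""
          return np.mean((simulate(params) - target) ** 2)

      result = minimize(loss, x0=np.array([1.0, 0.1]), method="Nelder-Mead")
      print(result.x)   # should approach (4.0, 0.5)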

  5. Load emphasizes muscle effort minimization during selection of arm movement direction

    PubMed Central

    2012-01-01

    Background Directional preferences during center-out horizontal shoulder-elbow movements were previously established for both the dominant and non-dominant arm with the use of a free-stroke drawing task that required random selection of movement directions. While the preferred directions were mirror-symmetrical in both arms, they were attributed to a tendency specific for the dominant arm to simplify control of interaction torque by actively accelerating one joint and producing largely passive motion at the other joint. No conclusive evidence has been obtained in support of muscle effort minimization as a contributing factor to the directional preferences. Here, we tested whether distal load changes directional preferences, making the influence of muscle effort minimization on the selection of movement direction more apparent. Methods The free-stroke drawing task was performed by the dominant and non-dominant arm with no load and with 0.454 kg load at the wrist. Motion of each arm was limited to rotation of the shoulder and elbow in the horizontal plane. Directional histograms of strokes produced by the fingertip were calculated to assess directional preferences in each arm and load condition. Possible causes for directional preferences were further investigated by studying optimization across directions of a number of cost functions. Results Preferences in both arms to move in the diagonal directions were revealed. The previously suggested tendency to actively accelerate one joint and produce passive motion at the other joint was supported in both arms and load conditions. However, the load increased the tendency to produce strokes in the transverse diagonal directions (perpendicular to the forearm orientation) in both arms. Increases in required muscle effort caused by the load suggested that the higher frequency of movements in the transverse directions represented increased influence of muscle effort minimization on the selection of movement direction. This interpretation was supported by cost function optimization results. Conclusions While without load, the contribution of muscle effort minimization was minor, and therefore, not apparent, the load revealed this contribution by enhancing it. Unlike control of interaction torque, the revealed tendency to minimize muscle effort was independent of arm dominance. PMID:23035925

  6. Recurrence formulas for fully exponentially correlated four-body wave functions

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.

    2009-03-01

    Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances r_ij, multiplied by an exponential containing an arbitrary linear combination of all the r_ij. These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.

  7. Development of a Pamphlet Targeting Computer Workstation Ergonomics

    NASA Technical Reports Server (NTRS)

    Faraci, Jennifer S.

    1997-01-01

    With the increased use of computers throughout Goddard Space Flight Center, the Industrial Hygiene Office (IHO) has observed a growing trend in the number of health complaints attributed to poor computer workstation setup. A majority of the complaints has centered around musculoskeletal symptoms, including numbness, pain, and tingling in the upper extremities, shoulders, and neck. Eye strain and headaches have also been reported. In some cases, these symptoms can lead to chronic conditions such as repetitive strain injuries (RSI's). In an effort to prevent or minimize the frequency of these symptoms among the GSFC population, the IHO conducts individual ergonomic workstation evaluations and ergonomics training classes upon request. Because of the extensive number of computer workstations at GSFC, and the limited amount of manpower which the Industrial Hygiene staff could reasonably allocate to conduct workstation evaluations and employee training, a pamphlet was developed with a two-fold purpose: (1) to educate the GSFC population about the importance of ergonomically-correct computer workstation setup and the potential effects of a poorly configured workstation; and (2) to enable employees to perform a general assessment of their own workstations and make any necessary modifications for proper setup.

  8. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  9. Development of Carbon Dioxide Removal Systems for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Knox, James C.; Trinh, Diep; Gostowski, Rudy; King, Eric; Mattox, Emily M.; Watson, David; Thomas, John

    2012-01-01

    "NASA's Advanced Exploration Systems (AES) program is pioneering new approaches for rapidly developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit" (NASA 2012). These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must not only blast out of earth's gravity well as during the Apollo moon missions, but also launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by seeking more robust pelletized sorbents, evaluating structured sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach, which is then implemented in a full-scale integrated atmosphere revitalization test. This paper describes the carbon dioxide (CO2) removal hardware design and sorbent screening and characterization effort in support of the Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project within the AES program. A companion paper discusses development of atmosphere revitalization models and simulations for this project.

  10. Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Knox, James C.; Kittredge, Kenneth; Xoker, Robert F.; Cummings, Ramona; Gomez, Carlos F.

    2012-01-01

    "NASA's Advanced Exploration Systems (AES) program is pioneering new approaches for rapidly developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit" (NASA 2012). These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must not only blast out of earth's gravity well as during the Apollo moon missions, but also launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach, which is then implemented in a full-scale integrated atmosphere revitalization test. This paper describes the development of atmosphere revitalization models and simulations. A companion paper discusses the hardware design and sorbent screening and characterization effort in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  11. Technology transfer into the solid propulsion industry

    NASA Technical Reports Server (NTRS)

    Campbell, Ralph L.; Thomson, Lawrence J.

    1995-01-01

    This paper is a survey of the waste minimization efforts of industries outside of aerospace for possible applications in the manufacture of solid rocket motors (SRM) for NASA. The Redesigned Solid Rocket Motor (RSRM) manufacturing plan was used as the model for processes involved in the production of an SRM. A literature search was conducted to determine the recycling, waste minimization, and waste treatment methods used in the commercial sector that might find application in SRM production. Manufacturers, trade organizations, and professional associations were also contacted. Waste minimization efforts for current processes and replacement technologies, which might reduce the amount or severity of the wastes generated in SRM production, were investigated. An overview of the results of this effort is presented in this paper.

  12. Multi-heuristic dynamic task allocation using genetic algorithms in a heterogeneous distributed system

    PubMed Central

    Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.

    2010-01-01

    We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
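    The sketch below is a minimal, hedged illustration of the core idea (a genetic algorithm searching task-to-processor mappings to minimize the makespan on heterogeneous processors); the population size, rates, and random cost matrix are illustrative, and the eight heuristic seeds and preemptive remapping of the real system are omitted.

      import random

      random.seed(1)
      n_tasks, n_procs = 40, 6
      # cost[t][p]: execution time of task t on heterogeneous processor p (illustrative).
      cost = [[random.uniform(1.0, 10.0) for _ in range(n_procs)] for _ in range(n_tasks)]

      def makespan(mapping):
          """Total execution time = finish time of the most loaded processor."""
          load = [0.0] * n_procs
          for t, p in enumerate(mapping):
              load[p] += cost[t][p]
          return max(load)

      def evolve(pop_size=60, generations=200, mutation_rate=0.05):
          pop = [[random.randrange(n_procs) for _ in range(n_tasks)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=makespan)
              elite = pop[: pop_size // 2]               # keep the better half
              children = []
              while len(elite) + len(children) < pop_size:
                  a, b = random.sample(elite, 2)
                  cut = random.randrange(1, n_tasks)      # one-point crossover
                  child = a[:cut] + b[cut:]
                  for t in range(n_tasks):                # random re-assignment mutation
                      if random.random() < mutation_rate:
                          child[t] = random.randrange(n_procs)
                  children.append(child)
              pop = elite + children
          return min(pop, key=makespan)

      best = evolve()
      print("best makespan:", round(makespan(best), 2))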

  13. A survey of keystroke dynamics biometrics.

    PubMed

    Teh, Pin Shen; Teoh, Andrew Beng Jin; Yue, Shigang

    2013-01-01

    Research on keystroke dynamics biometrics has been increasing, especially in the last decade. The main motivation behind this effort is due to the fact that keystroke dynamics biometrics is economical and can be easily integrated into the existing computer security systems with minimal alteration and user intervention. Numerous studies have been conducted in terms of data acquisition devices, feature representations, classification methods, experimental protocols, and evaluations. However, an up-to-date extensive survey and evaluation is not yet available. The objective of this paper is to provide an insightful survey and comparison on keystroke dynamics biometrics research performed throughout the last three decades, as well as offering suggestions and possible future research directions.

  14. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and the transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts include reducing the difficulty of finding and using data, reducing processing costs, and minimizing incompatibility between data sources. Artificial Intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  15. True logarithmic amplification of frequency clock in SS-OCT for calibration

    PubMed Central

    Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.

    2011-01-01

    With swept source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach that uses a true logarithmic amplifier to precondition the clock signal, in an effort to minimize noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize the calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036

  16. Minimizing Dispersion in FDTD Methods with CFL Limit Extension

    NASA Astrophysics Data System (ADS)

    Sun, Chen

    The CFL extension of FDTD methods is receiving considerable attention as a way to reduce computational effort and save simulation time. One of the major issues in CFL extension methods is increased dispersion. We formulate a decomposition of the FDTD equations to study the behaviour of this dispersion. A compensation scheme to reduce the dispersion under CFL extension is proposed. We further study CFL extension in an FDTD subgridding case, where we improve the accuracy by acting only on the FDTD equations of the fine grid. Numerical results confirm the efficiency of the proposed method for minimising dispersion.
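
    To make the dispersion penalty concrete, the short sketch below evaluates the standard 1-D FDTD numerical-dispersion relation sin(wdt/2) = S sin(kdx/2), where S = c dt/dx is the Courant number. It illustrates the quantity being minimized, not the paper's compensation scheme, and the grid values are invented.

    ```python
    # Illustrative sketch (not the paper's method): the standard 1-D FDTD
    # numerical-dispersion relation sin(w*dt/2) = S*sin(k*dx/2), with Courant
    # number S = c*dt/dx, evaluated for a wave resolved with `ppw` points per
    # wavelength.  The grid values below are invented for illustration.
    import numpy as np

    c = 3.0e8                      # wave speed (m/s)
    dx = 1.0e-3                    # spatial step (m)
    ppw = 20                       # points per wavelength
    k = 2 * np.pi / (ppw * dx)     # wavenumber of the resolved wave

    for S in (0.5, 0.9, 0.99):     # Courant numbers within the conventional limit
        dt = S * dx / c
        # Solve the dispersion relation for the numerical angular frequency w.
        w = (2.0 / dt) * np.arcsin(S * np.sin(k * dx / 2.0))
        v_num = w / k              # numerical phase velocity
        err = (v_num - c) / c
        print(f"S={S:4.2f}  phase-velocity error = {err * 100:+.4f} %")
    ```

    In 1-D the error shrinks as S approaches the conventional CFL limit; schemes that extend the time step beyond that limit typically reintroduce phase-velocity error of this kind, which is the quantity a compensation scheme such as the one proposed here targets.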

  17. Markerless laser registration in image-guided oral and maxillofacial surgery.

    PubMed

    Marmulla, Rüdiger; Lüth, Tim; Mühling, Joachim; Hassfeld, Stefan

    2004-07-01

    The use of registration markers in computer-assisted surgery entails high logistical costs and effort. Markerless patient registration using laser scan surface registration techniques is a new and challenging method. The present study was performed to evaluate the clinical accuracy of finding defined target points within the surgical site after markerless patient registration in image-guided oral and maxillofacial surgery. Twenty consecutive patients with different cranial diseases were scheduled for computer-assisted surgery. Data set alignment between the surgical site and the computed tomography (CT) data set was performed by markerless laser scan surface registration of the patient's face. Rigidly attached intraoral registration markers were used as target points, which had to be detected by an infrared pointer. The Surgical Segment Navigator SSN++ was used for all procedures. SSN++ is an investigative product based on the SSN system that had previously been developed by the presenting authors with the support of Carl Zeiss (Oberkochen, Germany). SSN++ is connected to a Polaris infrared camera (Northern Digital, Waterloo, Ontario, Canada) and to a Minolta VI 900 3D digitizer (Tokyo, Japan) for high-resolution laser scanning. Minimal differences in shape between the laser scan surface and the surface generated from the CT data set could be detected. Nevertheless, high-resolution laser scanning of the skin surface allows for precise patient registration (mean deviation 1.1 mm, maximum deviation 1.8 mm). Radiation load, logistical costs, and effort arising from the planning of computer-assisted surgery of the head can be reduced because native (markerless) CT data sets can be used for laser scan-based surface registration.

  18. Level-2 perspectives computed quickly and spontaneously: Evidence from eight- to 9.5-year-old children.

    PubMed

    Elekes, Fruzsina; Varga, Máté; Király, Ildikó

    2017-11-01

    It has been widely assumed that computing how a scene looks from another perspective (level-2 perspective taking, PT) is an effortful process, as opposed to the automatic capacity of tracking visual access to objects (level-1 PT). Recently, adults have been found to compute both forms of visual perspectives in a quick but context-sensitive way, indicating that the two functions share more features than previously assumed. However, the developmental literature still shows a dissociation between automatic level-1 and effortful level-2 PT. In the current paper, we report an experiment showing that in a minimally social situation, participating in a number verification task with an adult confederate, eight- to 9.5-year-old children demonstrate similar online level-2 PT capacities as adults. Future studies need to address whether online PT shows selectivity in children as well and develop paradigms that are adequate to test preschoolers' online level-2 PT abilities. Statement of Contribution. What is already known on this subject? Adults can access how objects appear to others (level-2 perspective) spontaneously and online. Online level-1, but not level-2, perspective taking (PT) has been documented in school-aged children. What does the present study add? Eight- to 9.5-year-olds performed a number verification task with a confederate who had the same task. Children showed similar perspective interference as adults, indicating spontaneous level-2 PT. Not only agent-object relations but also object appearances are computed online by eight- to 9.5-year-olds. © 2017 The British Psychological Society.

  19. A boundary element alternating method for two-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Krishnamurthy, T.

    1992-01-01

    A boundary element alternating method, denoted herein as BEAM, is presented for two-dimensional fracture problems. This is an iterative method that alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For edge-crack problems, a technique utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, BEAM is applied to a variety of two-dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.

  20. Beam orbit simulation in the central region of the RIKEN AVF cyclotron

    NASA Astrophysics Data System (ADS)

    Toprek, Dragan; Goto, Akira; Yano, Yasushige

    1999-04-01

    This paper describes the modified design of the central region for the h=2 mode of acceleration in the RIKEN AVF cyclotron. We made a small modification to the electrode shape in the central region to optimize the beam transmission. The central region is equipped with an axial injection system; a spiral-type inflector is used for axial injection. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The magnetic field was measured. The geometry of the central region has been tested with orbit computations carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region are studied using the programs CASINO and CYCLONE, respectively. We have also made an effort to minimize the inflector fringe field effects using the RELAX3D program.

  1. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in the dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image within a small number of iterations and with an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
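
    A toy sketch of the combination the abstract describes, under stated assumptions: a penalized weighted least-squares objective is minimized by coordinate descent, with each sweep using only one ordered subset of the measurements to approximate the full data sums. The system matrix, weights, and penalty below are invented stand-ins, not the authors' CT geometry or regularizer.

    ```python
    # Toy sketch (assumptions, not the authors' reconstruction code): penalized
    # weighted least squares  f(x) = sum_i w_i (a_i.x - b_i)^2 + beta*||x||^2
    # minimized by coordinate descent, where each sweep uses only an ordered
    # subset of the data rows to approximate the full sums.
    import numpy as np

    rng = np.random.default_rng(0)
    n_meas, n_vox = 400, 50
    A = rng.normal(size=(n_meas, n_vox))          # stand-in for the system matrix
    x_true = rng.normal(size=n_vox)
    b = A @ x_true + 0.05 * rng.normal(size=n_meas)
    w = rng.uniform(0.5, 1.5, size=n_meas)        # statistical weights
    beta = 0.1                                     # quadratic penalty strength

    n_subsets = 8
    subsets = [np.arange(s, n_meas, n_subsets) for s in range(n_subsets)]

    x = np.zeros(n_vox)                            # constant (zero) initial image
    for it in range(10):                           # a few iterations, as in the abstract
        for S in subsets:
            As, bs, ws = A[S], b[S], w[S]
            scale = n_meas / len(S)                # rescale subset sums to full data
            r = bs - As @ x                        # subset residual
            for j in range(n_vox):
                num = scale * np.sum(ws * As[:, j] * (r + As[:, j] * x[j]))
                den = scale * np.sum(ws * As[:, j] ** 2) + beta
                x_new = num / den                  # exact 1-D minimizer for voxel j
                r -= As[:, j] * (x_new - x[j])     # keep the residual consistent
                x[j] = x_new

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```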

  2. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  3. To do it or to let an automatic tool do it? The priority of control over effort.

    PubMed

    Osiurak, François; Wagner, Clara; Djerbi, Sara; Navarro, Jordan

    2013-01-01

    The aim of the present study is to provide experimental data relevant to the issue of what leads humans to use automatic tools. Two answers can be offered. The first is that humans strive to minimize physical and/or cognitive effort (principle of least effort). The second is that humans tend to keep their perceived control over the environment (principle of more control). These two factors certainly play a role, but the question raised here is what people give priority to in situations wherein both manual and automatic actions take the same time: minimizing effort or keeping perceived control. To answer that question, we built four experiments in which participants were confronted with a recurring choice between performing a task manually (physical effort) or in a semi-automatic way (cognitive effort) versus using an automatic tool that completes the task for them (no effort). In this latter condition, participants were required to follow the progression of the automatic tool step by step. Our results showed that participants favored the manual or semi-automatic condition over the automatic condition. However, when they were offered the opportunity to perform recreational tasks in parallel, the shift toward the manual condition disappeared. The findings support the idea that people give priority to keeping control over minimizing effort.

  4. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  5. Use of Mental Health Care and Unmet Needs for Health Care Among Lesbian and Bisexual Chinese-, Korean-, and Vietnamese-American Women.

    PubMed

    Hahm, Hyeouk Chris; Lee, Jieha; Chiao, Christine; Valentine, Anne; Lê Cook, Benjamin

    2016-12-01

    This study examined associations between sexual orientation of Asian-American women and receipt of mental health care and unmet need for health care. Computer-assisted self-interviews were conducted with 701 unmarried Chinese-, Korean-, and Vietnamese-American women ages 18 to 35. Multivariate regression models examined whether lesbian and bisexual participants differed from exclusively heterosexual participants in use of mental health care and unmet need for health care. After the analyses controlled for mental health status and other covariates, lesbian and bisexual women were more likely than exclusively heterosexual women to have received any past-year mental health services and reported a greater unmet need for health care. Sexual-minority women were no more likely to have received minimally adequate care. Given the high rates of mental health problems among Asian-American sexual-minority women, efforts are needed to identify and overcome barriers to receipt of adequate mental health care and minimize unmet health care needs.

  6. A Representation of Effort in Decision-Making and Motor Control.

    PubMed

    Shadmehr, Reza; Huang, Helen J; Ahmed, Alaa A

    2016-07-25

    Given two rewarding stimuli, animals tend to choose the more rewarding (or less effortful) option. However, they also move faster toward that stimulus [1-5]. This suggests that reward and effort not only affect decision-making, they also influence motor control [6, 7]. How does the brain compute the effort requirements of a task? Here, we considered data acquired during walking, reaching, flying, or isometric force production. In analyzing the decision-making and motor-control behaviors of various animals, we considered the possibility that the brain may estimate effort objectively, via the metabolic energy consumed to produce the action. We measured the energetic cost of reaching and found that, like walking, it was convex in time, with a global minimum, implying that there existed a movement speed that minimized effort. However, reward made it worthwhile to be energetically inefficient. Using a framework in which utility of an action depended on reward and energetic cost, both discounted in time, we found that it was possible to account for a body of data in which animals were free to choose how to move (reach slow or fast), as well as what to do (walk or fly, produce force F1 or F2). We suggest that some forms of decision-making and motor control may share a common utility in which the brain represents the effort associated with performing an action objectively via its metabolic energy cost and then, like reward, temporally discounts it as a function of movement duration. Copyright © 2016 Elsevier Ltd. All rights reserved.
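
    A hedged sketch of the kind of temporally discounted utility the abstract describes (the exact functional form and symbols used in the paper may differ): with reward $\alpha$, metabolic energy $e(T)$ for a movement of duration $T$, and discount rate $\gamma$,

    $$U(T) = \frac{\alpha - e(T)}{1 + \gamma T}, \qquad T^{*} = \arg\max_{T} U(T),$$

    so that a larger reward $\alpha$ shifts the optimal duration $T^{*}$ toward faster, energetically costlier movements, consistent with the observation that animals move faster toward more rewarding stimuli.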

  7. Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management

    NASA Astrophysics Data System (ADS)

    Hendrix, Val; Benjamin, Doug; Yao, Yushu

    2012-12-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators and network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. The current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate the deployment scripts used by puppet to configure each machine to act in its designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.

  8. Optimizing R with SparkR on a commodity cluster for biomedical research.

    PubMed

    Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan

    2016-12-01

    Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymous data, which is often not possible for preliminary or occasional tasks. We explored the possibility of building a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS calculation in R on a single desktop computer, a Message Passing Interface (MPI) cluster, and a SparkR cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  9. A Survey of Keystroke Dynamics Biometrics

    PubMed Central

    Yue, Shigang

    2013-01-01

    Research on keystroke dynamics biometrics has been increasing, especially in the last decade. The main motivation behind this effort is that keystroke dynamics biometrics is economical and can be easily integrated into existing computer security systems with minimal alteration and user intervention. Numerous studies have been conducted in terms of data acquisition devices, feature representations, classification methods, experimental protocols, and evaluations. However, an up-to-date extensive survey and evaluation is not yet available. The objective of this paper is to provide an insightful survey and comparison of keystroke dynamics biometrics research performed throughout the last three decades, as well as to offer suggestions and possible future research directions. PMID:24298216

  10. Model predictive control for spacecraft rendezvous in elliptical orbit

    NASA Astrophysics Data System (ADS)

    Li, Peng; Zhu, Zheng H.

    2018-05-01

    This paper studies the control of spacecraft rendezvous with attitude-stable or spinning targets in an elliptical orbit. The linearized Tschauner-Hempel equation is used to describe the motion of the spacecraft, and the problem is formulated with model predictive control. The control objective is to maximize control accuracy and smoothness simultaneously, avoiding unexpected changes or overshoot of the trajectory for a safe rendezvous. This is achieved by minimizing the weighted sum of control errors and control increments. The effects of the two horizons (control and prediction horizons) in the model predictive control are examined in terms of fuel consumption, rendezvous time and computational effort. The numerical results show that the proposed control strategy is effective.
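
    The cost structure described above can be illustrated with a minimal sketch: a finite-horizon quadratic objective over tracking errors and control increments for a generic linear system, solved in closed form. The dynamics, weights, and horizon below are invented toy values, not the Tschauner-Hempel model or the paper's tuning.

    ```python
    # Minimal sketch (assumed dynamics, not the paper's Tschauner-Hempel model):
    # finite-horizon model predictive control that minimizes a weighted sum of
    # tracking errors and control increments for a linear system x_{k+1}=Ax_k+Bu_k.
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
    B = np.array([[0.005], [0.1]])
    x0 = np.array([1.0, 0.0])                # initial relative state
    x_ref = np.array([0.0, 0.0])             # rendezvous target (origin)
    N = 20                                   # prediction/control horizon
    q, r = 1.0, 10.0                         # weights: tracking error vs. control increment

    # Stack the predictions x_k = A^k x0 + sum_j A^(k-1-j) B u_j into X = F x0 + G U.
    n, m = A.shape[0], B.shape[1]
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B

    # Penalize q*||X - X_ref||^2 + r*||D U||^2, where D forms control increments.
    X_ref = np.tile(x_ref, N)
    D = np.eye(N * m) - np.eye(N * m, k=-m)   # u_k - u_{k-1}
    H = q * G.T @ G + r * D.T @ D
    g = q * G.T @ (F @ x0 - X_ref)
    U = np.linalg.solve(H, -g)                # unconstrained quadratic minimum

    print("first control input:", U[:m])
    ```

    In a receding-horizon implementation only the first input would be applied before the problem is re-solved at the next step; the trade-off the paper studies appears here as the choice of N, q and r.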

  11. Computers and the Environment: Minimizing the Carbon Footprint

    ERIC Educational Resources Information Center

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  12. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate the surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTMs) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate top-of-the-atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look-up table interpolation can reduce the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. To reduce the computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and fewer RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
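
    A hedged sketch of the emulation idea, not the authors' pipeline: a small neural network is trained on samples of (input parameters, simulated radiance) and then replaces the expensive model at prediction time. The `rtm` function below is a cheap analytic stand-in for VLIDORT, and the parameter ranges are invented; a smarter sampling design than uniform random draws could be substituted where noted.

    ```python
    # Hedged sketch: training a small neural network to emulate an expensive
    # radiative transfer model.  The "rtm" function is a toy stand-in, and the
    # input parameters are invented for illustration.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    def rtm(params):
        """Stand-in for a TOA radiance calculation: params = (sza, vza, albedo)."""
        sza, vza, albedo = params
        mu0, mu = np.cos(np.radians(sza)), np.cos(np.radians(vza))
        return albedo * mu0 / (mu0 + mu) + 0.05 * mu0   # toy analytic 'radiance'

    # Sample the parameter space (a smarter design could replace uniform draws).
    X = np.column_stack([rng.uniform(0, 70, 5000),      # solar zenith angle
                         rng.uniform(0, 60, 5000),      # viewing zenith angle
                         rng.uniform(0, 1, 5000)])      # surface albedo
    y = np.array([rtm(p) for p in X])

    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(scaler.transform(X), y)

    X_test = np.column_stack([rng.uniform(0, 70, 500),
                              rng.uniform(0, 60, 500),
                              rng.uniform(0, 1, 500)])
    y_test = np.array([rtm(p) for p in X_test])
    pred = net.predict(scaler.transform(X_test))
    print("max emulation error:", np.max(np.abs(pred - y_test)))
    ```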

  13. Design and analysis of advanced flight planning concepts

    NASA Technical Reports Server (NTRS)

    Sorensen, John A.

    1987-01-01

    The objectives of this continuing effort are to develop and evaluate new algorithms and advanced concepts for flight management and flight planning. This includes the minimization of fuel or direct operating costs, the integration of the airborne flight management and ground-based flight planning processes, and the enhancement of future traffic management systems design. Flight management (FMS) concepts are for on-board profile computation and steering of transport aircraft in the vertical plane between a city pair and along a given horizontal path. Flight planning (FPS) concepts are for the pre-flight ground based computation of the three-dimensional reference trajectory that connects the city pair and specifies the horizontal path, fuel load, and weather profiles for initializing the FMS. As part of these objectives, a new computer program called EFPLAN has been developed and utilized to study advanced flight planning concepts. EFPLAN represents an experimental version of an FPS. It has been developed to generate reference flight plans compatible as input to an FMS and to provide various options for flight planning research. This report describes EFPLAN and the associated research conducted in its development.

  14. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  15. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  16. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  17. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  18. 5 CFR 630.310 - Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...

  19. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background: Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results: In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time when compared to existing methods for finding minimal reaction sets. Conclusions: Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
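
    A toy illustration only, far simpler than the paper's genome-scale graph/optimization method: reaction subsets are enumerated in increasing size, and every smallest subset that still lets the network produce a target metabolite from the supplied substrate is kept. The network, metabolite names, and the notion of "producibility" below are invented for illustration.

    ```python
    # Toy illustration: enumerate reaction subsets in increasing size and keep all
    # minimum-cardinality sets that still produce the target from the seed.
    from itertools import combinations

    # reaction name -> (substrate metabolites, product metabolites); invented network
    reactions = {
        "R1": ({"glc"}, {"g6p"}),
        "R2": ({"g6p"}, {"f6p"}),
        "R3": ({"f6p"}, {"pyr"}),
        "R4": ({"g6p"}, {"dhap"}),   # alternative branch of equal length
        "R5": ({"dhap"}, {"pyr"}),
        "R6": ({"pyr"}, {"biomass"}),
    }
    seed, target = {"glc"}, "biomass"

    def producible(active):
        """Metabolite closure reachable from the seed using only `active` reactions."""
        pool = set(seed)
        changed = True
        while changed:
            changed = False
            for r in active:
                subs, prods = reactions[r]
                if subs <= pool and not prods <= pool:
                    pool |= prods
                    changed = True
        return target in pool

    minimal_sets, best = [], None
    for k in range(1, len(reactions) + 1):
        if best is not None and k > best:
            break                          # no need to look at larger subsets
        for combo in combinations(reactions, k):
            if producible(set(combo)):
                minimal_sets.append(set(combo))
                best = k

    print("all minimal reaction sets:", minimal_sets)
    ```

    For this toy network the two equally small routes through g6p are both recovered; brute-force enumeration obviously does not scale, which is the gap the paper's graph-theoretic recursion addresses.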

  20. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
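
    A minimal sketch of the same idea, under stated assumptions: the total Gibbs free energy is minimized subject to element (material) balance constraints. The species list, dimensionless standard chemical potentials, and element matrix below are invented round numbers, and the solver is a generic constrained optimizer rather than the program described in the record.

    ```python
    # Hedged sketch: equilibrium composition by minimizing total Gibbs free energy
    # subject to element balance constraints.  Numbers are illustrative only.
    import numpy as np
    from scipy.optimize import minimize

    species = ["CO", "CO2", "O2"]
    mu0_RT = np.array([-47.0, -94.0, 0.0])     # dimensionless mu_i0 / RT (illustrative)
    # Rows: elements C, O; columns: species.
    E = np.array([[1, 1, 0],                   # carbon atoms per molecule
                  [1, 2, 2]])                  # oxygen atoms per molecule
    n0 = np.array([1.0, 0.0, 0.5])             # initial moles: 1 CO + 0.5 O2
    b = E @ n0                                 # conserved element totals

    def gibbs(n):
        n = np.maximum(n, 1e-12)               # keep the logarithms finite
        return np.sum(n * (mu0_RT + np.log(n / n.sum())))

    cons = {"type": "eq", "fun": lambda n: E @ n - b}
    res = minimize(gibbs, n0 + 0.1, method="SLSQP",
                   bounds=[(1e-12, None)] * len(species), constraints=cons)

    for name, moles in zip(species, res.x):
        print(f"{name:4s} {moles:.4f} mol")
    ```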

  1. Technology Assessment for Future MILSATCOM Systems; An Update of the EHF Bands

    DTIC Science & Technology

    1980-10-01

    converging these efforts, the MSO has prepared a "Technology Development Program Plan" (TDPP). The TDPP defines a coordinated approach to the R&D...required to insure the availability of the technology necessary to support future systems. Some of the objectives of the TDPP are: to minimize...and TDPP have illuminated the need for technology development efforts directed toward minimizing the cost-risk and schedule-risk, and insuring the

  2. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SHEET METAL COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cente...

  3. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  4. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S.Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-sized manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Ce...

  5. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CORN SYRUP AND CORN STARCH

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  6. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CUTTING AND WELDING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot program to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  7. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SILICON-CONTROLLED RECTIFIERS AND SCHOTTKY RECTIFIERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers Waste Minimization Assessment Ce...

  8. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  9. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  10. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  11. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
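
    A hedged sketch of the mapping idea, not the original NASA tool: simulated annealing over assignments of elements to processors, with an invented cost that penalizes load imbalance plus a communication term for neighbouring elements placed on different processors (a 1-D element chain stands in for a real mesh).

    ```python
    # Toy simulated-annealing mapping of finite elements to processors; the cost
    # model, element chain, and annealing schedule are invented for illustration.
    import math
    import random

    random.seed(1)
    n_elem, n_proc = 60, 4
    work = [random.uniform(0.5, 1.5) for _ in range(n_elem)]   # per-element cost
    assign = [random.randrange(n_proc) for _ in range(n_elem)]

    def cost(a):
        load = [0.0] * n_proc
        for e, p in enumerate(a):
            load[p] += work[e]
        imbalance = max(load) - min(load)
        # communication term: neighbouring elements split across processors
        cut = sum(1 for e in range(n_elem - 1) if a[e] != a[e + 1])
        return imbalance + 0.1 * cut

    T, current = 1.0, cost(assign)
    for step in range(20000):
        e = random.randrange(n_elem)
        old = assign[e]
        assign[e] = random.randrange(n_proc)   # propose moving one element
        new = cost(assign)
        if new <= current or random.random() < math.exp((current - new) / T):
            current = new                      # accept the move
        else:
            assign[e] = old                    # reject and restore
        T *= 0.9995                            # cool the temperature

    print("final cost:", round(current, 3))
    ```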

  12. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
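
    A minimal sketch of approach (a) above, under stated assumptions: the strain energy E(u) = 1/2 u^T K u - f^T u is minimized iteratively (here with conjugate gradients), so the stiffness matrix is touched only through matrix-vector products and no factorization fill-in occurs. The small tridiagonal stiffness-like matrix and load are invented for illustration.

    ```python
    # Hedged sketch: minimizing E(u) = 1/2 u^T K u - f^T u without factorizing K.
    # The minimizer satisfies K u = f, which conjugate gradients solves using only
    # matrix-vector products (attractive for concurrent processing).
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    n = 200                                    # free degrees of freedom
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    f = np.zeros(n)
    f[-1] = 1.0                                # point load on the last DOF

    u, info = cg(K, f)                         # iterative energy minimization
    assert info == 0

    energy = 0.5 * u @ (K @ u) - f @ u
    print("displacement at loaded DOF:", u[-1], " strain energy:", energy)
    ```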

  13. Minimizing damage to a propped fracture by controlled flowback procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, B.M.; Holditch, S.A.; Whitehead, W.S.

    1988-06-01

    Severe fracture-conductivity damage can result from proppant crushing and/or proppant flowback into the wellbore. Such damage is often concentrated near the wellbore and can directly affect postfracture performance. Most of the time severe fracture-conductivity damage can be minimized by choosing the correct type of proppant for a particular well. In many cases, however, this is not enough. To minimize excessive crushing or to prevent proppant flowback, it is also necessary to control carefully the flowback of the well after the treatment. Specific procedures can be followed to minimize severe fracture-conductivity damage. These procedures involve controlling the rates at which load fluids are recovered and maximizing backpressure against the formation. These procedures require much more time and effort than is normally spent on postfracture cleanup; however, the efforts could result in better performance.

  14. Using a voice to put a name to a face: the psycholinguistics of proper name comprehension.

    PubMed

    Barr, Dale J; Jackson, Laura; Phillips, Isobel

    2014-02-01

    We propose that hearing a proper name (e.g., Kevin) in a particular voice serves as a compound memory cue that directly activates representations of a mutually known target person, often permitting reference resolution without any complex computation of shared knowledge. In a referential communication study, pairs of friends played a communication game, in which we monitored the eyes of one friend (the addressee) while he or she sought to identify the target person, in a set of four photos, on the basis of a name spoken aloud. When the name was spoken by a friend, addressees rapidly identified the target person, and this facilitation was independent of whether the friend was articulating a message he or she had designed versus one from a third party with whom the target person was not shared. Our findings suggest that the comprehension system takes advantage of regularities in the environment to minimize effortful computation about who knows what.

  15. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  16. Multiphysics Thermal-Fluid Design Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials and Component Development

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    The objective of this effort is to perform design analyses for a non-nuclear hot-hydrogen materials tester, as a first step towards developing efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, convective, and thermal radiative heat transfers. The goals of the design analyses are to maintain maximum hot-hydrogen jet impingement energy and to minimize chamber wall heating. The results of analyses on three test fixture configurations and the rationale for final selection are presented. The interrogation of physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  17. Real time target allocation in cooperative unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kudleppanavar, Ganesh

    The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest-neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and Xbee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
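
    A hedged sketch of the allocation idea only, not the thesis implementation: each new target is matched to its nearest existing target with a k-d tree, and is assigned to the UAV covering that neighbourhood provided the UAV still has enough stored energy, with a fallback to the other vehicles otherwise. Positions, energies, and the distance-proportional energy cost are invented.

    ```python
    # Toy nearest-neighbour target allocation with a simple stored-energy check.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    uav_energy = np.array([100.0, 60.0, 30.0])          # remaining energy per UAV
    assigned_targets = rng.uniform(0, 10, size=(9, 2))  # targets already allocated
    owner = np.repeat([0, 1, 2], 3)                     # UAV owning each of them
    tree = cKDTree(assigned_targets)

    def allocate(new_target, cost_per_unit_distance=1.0):
        """Return the UAV index assigned to new_target, or None if none can afford it."""
        global tree, assigned_targets, owner
        dist, idx = tree.query(new_target)               # nearest existing target
        candidates = [owner[idx]] + [u for u in range(len(uav_energy)) if u != owner[idx]]
        for u in candidates:                             # prefer the local UAV, then others
            cost = cost_per_unit_distance * dist         # invented energy model
            if uav_energy[u] >= cost:
                uav_energy[u] -= cost
                assigned_targets = np.vstack([assigned_targets, new_target])
                owner = np.append(owner, u)
                tree = cKDTree(assigned_targets)          # rebuild (fine for small sets)
                return u
        return None

    print("new target assigned to UAV", allocate(np.array([5.0, 5.0])))
    ```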

  18. Computational Image Analysis Reveals Intrinsic Multigenerational Differences between Anterior and Posterior Cerebral Cortex Neural Progenitor Cells

    PubMed Central

    Winter, Mark R.; Liu, Mo; Monteleone, David; Melunis, Justin; Hershberg, Uri; Goderie, Susan K.; Temple, Sally; Cohen, Andrew R.

    2015-01-01

    Summary Time-lapse microscopy can capture patterns of development through multiple divisions for an entire clone of proliferating cells. Images are taken every few minutes over many days, generating data too vast to process completely by hand. Computational analysis of this data can benefit from occasional human guidance. Here we combine improved automated algorithms with minimized human validation to produce fully corrected segmentation, tracking, and lineaging results with dramatic reduction in effort. A web-based viewer provides access to data and results. The improved approach allows efficient analysis of large numbers of clones. Using this method, we studied populations of progenitor cells derived from the anterior and posterior embryonic mouse cerebral cortex, each growing in a standardized culture environment. Progenitors from the anterior cortex were smaller, less motile, and produced smaller clones compared to those from the posterior cortex, demonstrating cell-intrinsic differences that may contribute to the areal organization of the cerebral cortex. PMID:26344906

  19. An Innovative Infrastructure with a Universal Geo-Spatiotemporal Data Representation Supporting Cost-Effective Integration of Diverse Earth Science Data

    NASA Technical Reports Server (NTRS)

    Rilee, Michael Lee; Kuo, Kwo-Sen

    2017-01-01

    The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.

  20. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  1. A Survey of Memristive Threshold Logic Circuits.

    PubMed

    Maan, Akshay Kumar; Jayadevi, Deepthi Anirudhan; James, Alex Pappachen

    2017-08-01

    In this paper, we review different memristive threshold logic (MTL) circuits that are inspired by the synaptic action of the flow of neurotransmitters in the biological brain. The brain-like generalization ability and the area minimization of these threshold logic circuits aim toward crossing Moore's law boundaries at the device, circuit, and system levels. Fast switching memory, signal processing, control systems, programmable logic, image processing, reconfigurable computing, and pattern recognition are identified as some of the potential applications of MTL systems. The physical realization of nanoscale devices with memristive behavior from materials such as TiO2, ferroelectrics, silicon, and polymers has accelerated research effort in these application areas, inspiring the scientific community to pursue the design of high-speed, low-cost, low-power, and high-density neuromorphic architectures.
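
    For readers unfamiliar with the primitive, the sketch below shows the threshold-logic operation these circuits realize in hardware: an output that fires when the weighted sum of inputs reaches a threshold. It is a behavioral illustration only; the weights and the majority-gate example are chosen by hand and do not model memristor physics.

    ```python
    # Behavioral sketch of a threshold-logic gate: fire when the weighted sum of
    # inputs reaches the threshold theta.
    def threshold_gate(inputs, weights, theta):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

    # A 3-input majority gate (a common threshold-logic building block):
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                out = threshold_gate((a, b, c), (1, 1, 1), 2)
                print(a, b, c, "->", out)
    ```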

  2. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
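
    A minimal sketch of the two steps just described, on a scalar toy problem rather than the report's finite element system: Newton-Raphson iteration on a nonlinear state equation of the form A(x)x = c, followed by direct differentiation of the residual to obtain the sensitivity dx/dp with respect to a parameter p. The functions and numbers below are invented for illustration.

    ```python
    # Toy Newton-Raphson solve of A(x)x = c plus direct-differentiation sensitivity.
    p = 2.0                                    # design parameter (e.g., a conductivity)
    c = 5.0

    def A(x, p):        return p * (1.0 + 0.1 * x)       # state-dependent "matrix"
    def R(x, p):        return A(x, p) * x - c            # residual R(x; p) = 0
    def dR_dx(x, p):    return p * (1.0 + 0.2 * x)        # tangent dR/dx
    def dR_dp(x, p):    return (1.0 + 0.1 * x) * x        # explicit dR/dp

    # Newton-Raphson on the state equation.
    x = 1.0
    for _ in range(20):
        dx = -R(x, p) / dR_dx(x, p)
        x += dx
        if abs(dx) < 1e-12:
            break

    # Direct differentiation: dR/dx * (dx/dp) + dR/dp = 0  =>  dx/dp = -dR/dp / dR/dx
    dxdp = -dR_dp(x, p) / dR_dx(x, p)

    # Finite-difference check of the sensitivity.
    eps, x_eps = 1e-6, 1.0
    for _ in range(50):
        x_eps -= R(x_eps, p + eps) / dR_dx(x_eps, p + eps)
    print("x =", x, " dx/dp =", dxdp, " FD check =", (x_eps - x) / eps)
    ```

    The same tangent operator dR/dx used inside the Newton iteration is reused for the sensitivity solve, which is why the report describes the added coding effort as minimal.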

  3. Portable programming on parallel/networked computers using the Application Portable Parallel Library (APPL)

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Cole, Gary L.; Blech, Richard A.

    1993-01-01

    The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.

  4. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.

  5. Observed differences in upper extremity forces, muscle efforts, postures, velocities and accelerations across computer activities in a field study of office workers.

    PubMed

    Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T

    2012-01-01

    This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.

  6. Evolutionary and molecular foundations of multiple contemporary functions of the nitroreductase superfamily

    PubMed Central

    Akiva, Eyal; Copp, Janine N.; Tokuriki, Nobuhiko; Babbitt, Patricia C.

    2017-01-01

    Insight regarding how diverse enzymatic functions and reactions have evolved from ancestral scaffolds is fundamental to understanding chemical and evolutionary biology, and for the exploitation of enzymes for biotechnology. We undertook an extensive computational analysis using a unique and comprehensive combination of tools that include large-scale phylogenetic reconstruction to determine the sequence, structural, and functional relationships of the functionally diverse flavin mononucleotide-dependent nitroreductase (NTR) superfamily (>24,000 sequences from all domains of life, 54 structures, and >10 enzymatic functions). Our results suggest an evolutionary model in which contemporary subgroups of the superfamily have diverged in a radial manner from a minimal flavin-binding scaffold. We identified the structural design principle for this divergence: Insertions at key positions in the minimal scaffold that, combined with the fixation of key residues, have led to functional specialization. These results will aid future efforts to delineate the emergence of functional diversity in enzyme superfamilies, provide clues for functional inference for superfamily members of unknown function, and facilitate rational redesign of the NTR scaffold. PMID:29078300

  7. First and second order derivatives for optimizing parallel RF excitation waveforms.

    PubMed

    Majewski, Kurt; Ritter, Dieter

    2015-09-01

    For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations. Copyright © 2015 Elsevier Inc. All rights reserved.
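
    The rotation-concatenation view described above can be sketched in a few lines (a simplified illustration, not the authors' optimization code); the gyromagnetic ratio, field samples, and step size below are assumed values, and relaxation is neglected as in the abstract.

        # Illustrative sketch: for the relaxation-free Bloch equation dM/dt = gamma * M x B,
        # a piecewise-constant field B rotates M about the axis of B by -gamma*|B|*dt per
        # step, so the final magnetization is a concatenation of rotations.
        # The field samples below are hypothetical.
        import numpy as np

        def rotation_matrix(axis, angle):
            # Rodrigues' formula for rotation about a unit axis
            ax = axis / np.linalg.norm(axis)
            K = np.array([[0, -ax[2], ax[1]],
                          [ax[2], 0, -ax[0]],
                          [-ax[1], ax[0], 0]])
            return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

        def bloch_piecewise_constant(B_steps, dt, gamma, M0=(0.0, 0.0, 1.0)):
            M = np.asarray(M0, dtype=float)
            for B in B_steps:
                B = np.asarray(B, dtype=float)
                norm = np.linalg.norm(B)
                if norm > 0:
                    M = rotation_matrix(B, -gamma * norm * dt) @ M
            return M

        gamma = 2 * np.pi * 42.58e6            # rad/s/T for protons
        B_steps = [(1e-6, 0.0, 0.0)] * 100     # hypothetical constant B1 field along x
        print(bloch_piecewise_constant(B_steps, dt=1e-5, gamma=gamma))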

  8. First and second order derivatives for optimizing parallel RF excitation waveforms

    NASA Astrophysics Data System (ADS)

    Majewski, Kurt; Ritter, Dieter

    2015-09-01

    For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.

  9. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on the supervisor design for flexible manufacturing systems focuses on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computation burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of minimal set of FBMs to efficiently solve integer linear programming problems while maintaining maximal permissiveness using a vector-covering approach. This paper improves the previous work and achieves the simplest structure with the minimal number of monitors.
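
    The core vector-covering idea, stripped of the Petri-net machinery, amounts to discarding any first-met bad marking that componentwise dominates another, since a constraint forbidding the smaller marking also forbids the larger one. A minimal sketch with hypothetical markings follows; it illustrates only this general reduction, not the paper's full method.

        # Illustrative sketch of the vector-covering reduction: an FBM whose token
        # counts on the relevant places are componentwise >= those of another FBM is
        # redundant, because any constraint forbidding the smaller marking also
        # forbids the larger one. The example markings are hypothetical.
        def covers(m1, m2):
            """True if marking m1 componentwise dominates marking m2."""
            return all(a >= b for a, b in zip(m1, m2)) and m1 != m2

        def minimal_covering_set(fbms):
            return [m for m in fbms
                    if not any(covers(m, other) for other in fbms)]

        fbms = [(1, 0, 2), (1, 1, 2), (0, 2, 1), (1, 2, 2)]
        print(minimal_covering_set(fbms))   # keeps only the markings that dominate no other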

  10. On the design of script languages for neural simulation.

    PubMed

    Brette, Romain

    2012-01-01

    In neural network simulators, models are specified according to a language, either a dedicated one or one based on a general programming language (e.g. Python). There are also ongoing efforts to develop standardized languages, for example NeuroML. When designing these languages, efforts are often focused on expressivity, that is, on maximizing the number of model types that can be described and simulated. I argue that a complementary goal should be to minimize the cognitive effort required on the part of the user to use the language. I try to formalize this notion with the concept of "language entropy", and I propose a few practical guidelines to minimize the entropy of languages for neural simulation.

  11. Minimization of reflection cracks in flexible pavements.

    DOT National Transportation Integrated Search

    1977-01-01

    This report describes the performance of fabrics used under overlays in an effort to minimize longitudinal and alligator cracking in flexible pavements. It is concluded, although the sample size is small, that the treatments will extend the pavement ...

  12. EPA WASTE MINIMIZATION RESEARCH PROGRAM: AN OVERVIEW

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) has established a waste minimization research program within the Office of Research and Development's Risk Reduction Engineering Laboratory which is the primary contact for pollution prevention research efforts concentrating on source ...

  13. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    NASA Astrophysics Data System (ADS)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise-dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making, although low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based image segmentation approach stands out from the rest because of its ability to minimize the cost function while maximizing the flow. This study developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transition between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies individual layer thicknesses, which show statistically insignificant differences when compared to a previous study [for the overall retina: p = 0.17; for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides handling denoising internally, the algorithm is further computationally optimized to restrict segmentation to a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.
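
    A minimal sketch of the general shortest-path idea (not the authors' implementation) is given below: pixels are graph nodes, edge weights are inversely related to the vertical gradient, and Dijkstra's algorithm traces a minimal-weight path from the left to the right image border. The synthetic image and the weighting function are assumptions for illustration only.

        # Minimal sketch of shortest-path boundary tracing: strong dark-to-bright
        # transitions get cheap edges, and Dijkstra finds a left-to-right path that
        # follows them. Illustration only, not the published algorithm.
        import heapq
        import numpy as np

        def trace_boundary(image):
            rows, cols = image.shape
            grad = np.abs(np.gradient(image.astype(float), axis=0))   # vertical gradient
            weight = 1.0 / (grad + 1e-6)                               # low cost on strong edges
            dist = np.full((rows, cols), np.inf)
            prev = {}
            heap = [(weight[r, 0], (r, 0)) for r in range(rows)]       # start anywhere in column 0
            for w, node in heap:
                dist[node] = w
            heapq.heapify(heap)
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if d > dist[r, c]:
                    continue
                if c == cols - 1:                                      # reached the right border
                    path = [(r, c)]
                    while path[-1] in prev:
                        path.append(prev[path[-1]])
                    return path[::-1]
                for dr in (-1, 0, 1):                                  # move one column right
                    nr, nc = r + dr, c + 1
                    if 0 <= nr < rows:
                        nd = d + weight[nr, nc]
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
            return []

        # synthetic image with a bright band: the traced path follows one of its edges
        img = np.zeros((20, 30)); img[8:12, :] = 1.0
        print(trace_boundary(img)[:5])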

  14. A compilation of technology spinoffs from the US Space Shuttle Program

    NASA Technical Reports Server (NTRS)

    Jackson, David Jeff

    1993-01-01

    As the successful transfer of NASA-developed technology is a stated mission of NASA, the documentation of such transfer is vital in support of the program. The purpose of this report is to document technology transfer, i.e. 'spinoffs', from the U.S. Space Shuttle Program to the commercial sector. These spinoffs have their origin in the many scientific and engineering fields associated with the shuttle program and, as such, span many diverse commercial applications. These applications include, but are not limited to, consumer products, medicine, industrial productivity, manufacturing technology, public safety, resources management, materials processing, transportation, energy, computer technology, construction, and environmental applications. To aid in the generation of this technology spinoff list, significant effort was made to establish numerous and complementary sources of information. The primary sources of information used in compiling this list include: the NASA 'Spinoff' publication, NASA Tech Briefs, the Marshall Space Flight Center (MSFC) Technology Utilization (TU) Office, the NASA Center for Aerospace Information (CASI), the NASA COSMIC Software Center, and MSFC laboratory and contractor personnel. A complete listing of resources may be found in the bibliography of this report. Additionally, effort was made to ensure that the obtained information was placed in electronic database form, so that subsequent updating would be feasible with minimal effort.

  15. Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.

    PubMed

    Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A

    1997-01-01

    SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy, and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to develop and apply computer methods to enhance the functional information attached to TREMBL entries.

  16. Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; O'Sullivan, Joseph A.

    2009-02-01

    We present an extension of the development of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
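
    As a simplified stand-in for the penalty described above (not the authors' AM implementation), the sketch below evaluates a log-cosh penalty on the magnitudes of differences of neighboring values of a complex-valued array; the scale parameter delta and the test data are hypothetical.

        # Simplified sketch of a "log-cosh" roughness penalty applied to the magnitude
        # of differences between neighboring values of a complex-valued estimate f.
        # The scale parameter delta and the test array are hypothetical.
        import numpy as np

        def log_cosh_penalty(f, delta=1.0):
            """Sum of delta^2 * log(cosh(|difference| / delta)) over horizontal and
            vertical neighbor pairs; behaves quadratically for small differences and
            linearly for large ones."""
            dx = np.abs(np.diff(f, axis=1))
            dy = np.abs(np.diff(f, axis=0))
            # np.logaddexp avoids overflow: log(cosh(t)) = logaddexp(t, -t) - log(2)
            pen = lambda t: delta**2 * (np.logaddexp(t / delta, -t / delta) - np.log(2.0))
            return pen(dx).sum() + pen(dy).sum()

        rng = np.random.default_rng(0)
        f = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
        print(log_cosh_penalty(f, delta=0.5))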

  17. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise(TM) grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  18. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
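
    The kind of stencil referred to above can be illustrated with a one-dimensional, periodic sketch (an illustration of a fourth-order conservative flux-form difference, not LOKI's implementation); the grid size and test function are assumptions.

        # Sketch of a 4th-order accurate conservative (flux-form) centered difference
        # on a periodic 1-D grid.
        import numpy as np

        def conservative_ddx_4th(f, dx):
            # Face flux F_{i+1/2} = (-f_{i+2} + 7 f_{i+1} + 7 f_i - f_{i-1}) / 12;
            # differencing the fluxes recovers the standard 4th-order central stencil
            # (f_{i-2} - 8 f_{i-1} + 8 f_{i+1} - f_{i+2}) / (12 dx).
            flux = (-np.roll(f, -2) + 7 * np.roll(f, -1) + 7 * f - np.roll(f, 1)) / 12.0
            return (flux - np.roll(flux, 1)) / dx

        n = 64
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        dx = x[1] - x[0]
        err = np.max(np.abs(conservative_ddx_4th(np.sin(x), dx) - np.cos(x)))
        print(err)   # roughly 3e-6 at this resolution, decreasing as dx**4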

  19. Preliminary engineering cost trends for highway projects.

    DOT National Transportation Integrated Search

    2011-10-21

    Preliminary engineering (PE) for a highway project encompasses two efforts: planning to minimize the physical, social, and human environmental impacts of projects and engineering design to deliver the best alternative. PE efforts begin years in advan...

  20. U.S. effort in the development of new crops (Lesquerella, Pennycress and Cuphea)

    USDA-ARS?s Scientific Manuscript database

    The U.S. effort for the development of New Crops is directed toward the development of crops that can be grown in rotation with traditional commodity crops, off-season production and utilization of acreage not currently under cultivation. This effort is intended to have no or minimal impact on crop...

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance are also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application-specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.

  2. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance are also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application-specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538

  3. Novel physical constraints on implementation of computational processes

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Kolchinsky, Artemy

    Non-equilibrium statistical physics permits us to analyze computational processes, i.e., ways to drive a physical system such that its coarse-grained dynamics implements some desired map. It is now known how to implement any such desired computation without dissipating work, and what the minimal (dissipationless) work is that such a computation will require (the so-called "generalized Landauer bound"). We consider how these analyses change if we impose realistic constraints on the computational process. First, we analyze how many degrees of freedom of the system must be controlled, in addition to the ones specifying the information-bearing degrees of freedom, in order to avoid dissipating work during a given computation, when local detailed balance holds. We analyze this issue for deterministic computations, deriving a state-space vs. speed trade-off, and use our results to motivate a measure of the complexity of a computation. Second, we consider computations that are implemented with logic circuits, in which only a small number of degrees of freedom are coupled at a time. We show that the way a computation is implemented using circuits affects its minimal work requirements, and relate these minimal work requirements to information-theoretic measures of complexity.
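
    For orientation (not taken from the abstract itself), in the special case of energy-degenerate logical states held at temperature T, the bound referred to above is commonly written in terms of the drop in Shannon entropy of the information-bearing states,

        \[
          W_{\min} \;=\; k_B T \ln 2 \,\bigl[ H(p_{\mathrm{in}}) - H(p_{\mathrm{out}}) \bigr],
          \qquad
          H(p) = -\sum_x p(x)\,\log_2 p(x),
        \]

    so that erasing one bit (H_in = 1, H_out = 0) costs at least k_B T ln 2, which is Landauer's original result.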

  4. Code for Calculating Regional Seismic Travel Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN

    The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
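
    The event-location use case can be caricatured in a few lines (a toy sketch that does not call the RSTT library): with an assumed constant-velocity forward model, a grid search adjusts the event location to minimize the sum of squared travel-time residuals. Station geometry, velocity, and noise level are hypothetical.

        # Toy illustration of travel-time-residual minimization for event location.
        import numpy as np

        def predicted_times(event_xy, stations, velocity=6.0):
            # straight-ray travel times in a hypothetical constant-velocity medium (km, km/s)
            return np.linalg.norm(stations - event_xy, axis=1) / velocity

        def locate(observed, stations, grid):
            best, best_misfit = None, np.inf
            for xy in grid:
                resid = observed - predicted_times(xy, stations)
                misfit = np.sum(resid**2)
                if misfit < best_misfit:
                    best, best_misfit = xy, misfit
            return best, best_misfit

        stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        true_event = np.array([30.0, 70.0])
        observed = predicted_times(true_event, stations) + np.random.default_rng(1).normal(0, 0.05, 4)
        grid = [np.array([x, y]) for x in range(0, 101, 5) for y in range(0, 101, 5)]
        print(locate(observed, stations, grid))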

  5. Free energy minimization to predict RNA secondary structures and computational RNA design.

    PubMed

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
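
    As a much-simplified stand-in for the energy-minimizing dynamic programs mentioned above, the classic Nussinov recursion below maximizes the number of complementary base pairs rather than minimizing an empirically parameterized free energy, but it shares the same recursive structure over subsequences; the minimum loop length and the example sequence are assumptions.

        # Nussinov dynamic program: maximize complementary base pairs (a simplified
        # stand-in for free-energy minimization) with a minimum hairpin loop length.
        def nussinov_max_pairs(seq, min_loop=3):
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                       # j unpaired
                    for k in range(i, j - min_loop):          # j paired with k
                        if (seq[k], seq[j]) in pairs:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + dp[k + 1][j - 1] + 1)
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov_max_pairs("GGGAAAUCC"))   # 3 pairs for this short hairpin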

  6. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated to the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  7. Collaborative simulations and experiments for a novel yield model of coal devolatilization in oxy-coal combustion conditions

    DOE PAGES

    Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.; ...

    2017-06-03

    Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated to the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.

  8. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
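
    The subset-selection idea can be sketched as follows (a simplified greedy stand-in for the combinatorial search in the paper, using a synthetic covariance rather than estimated sulcal errors): conditioning the unconstrained curves on a candidate constraint set gives a Schur-complement covariance, and constraint curves are chosen to minimize the remaining summed conditional variance.

        # Greedy sketch of constraint-subset selection under a multivariate Gaussian
        # error model. The covariance below is synthetic, not estimated from data.
        import numpy as np

        def conditional_variance(S, constrained):
            idx = np.arange(S.shape[0])
            U = np.setdiff1d(idx, constrained)
            C = np.asarray(constrained)
            if C.size == 0:
                return np.trace(S)
            # Schur complement: covariance of the unconstrained set given the constrained set
            S_cond = S[np.ix_(U, U)] - S[np.ix_(U, C)] @ np.linalg.solve(S[np.ix_(C, C)], S[np.ix_(C, U)])
            return np.trace(S_cond)

        def greedy_constraint_subset(S, n_constraints):
            chosen = []
            for _ in range(n_constraints):
                remaining = [i for i in range(S.shape[0]) if i not in chosen]
                best = min(remaining, key=lambda i: conditional_variance(S, chosen + [i]))
                chosen.append(best)
            return chosen

        rng = np.random.default_rng(2)
        A = rng.standard_normal((10, 10))
        S = A @ A.T + 0.5 * np.eye(10)          # synthetic positive-definite covariance
        print(greedy_constraint_subset(S, n_constraints=3))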

  9. Minimization search method for data inversion

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    A technique has been developed for determining values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables, in contrast with classical minimization methods, for which computation time increases with the third power of the number of variables.

  10. U.S. Effort in the Development of New Crops (Lesquerella, Pennycress, Coriander, and Cuphea)

    USDA-ARS?s Scientific Manuscript database

    The U.S. effort for the development of New Crops is directed toward the advancement of crops that can be grown in rotation with traditional commodity crops, off-season production and utilization of acreage not currently under cultivation. This effort is intended to have no or minimal impact on crop...

  11. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
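
    A univariate illustration of the Bernstein enclosure property underlying this approach (the paper itself treats multivariate polynomials over p-boxes) is sketched below; the polynomial coefficients are hypothetical.

        # A degree-n polynomial on [0, 1] is bounded below and above by the minimum
        # and maximum of its Bernstein coefficients.
        import numpy as np
        from math import comb

        def bernstein_bounds(power_coeffs):
            """power_coeffs[i] is the coefficient of x**i; returns (lower, upper) on [0, 1]."""
            n = len(power_coeffs) - 1
            bern = [sum(comb(k, i) / comb(n, i) * power_coeffs[i] for i in range(k + 1))
                    for k in range(n + 1)]
            return min(bern), max(bern)

        coeffs = [0.5, -2.0, 3.0]                # p(x) = 0.5 - 2 x + 3 x^2
        lo, hi = bernstein_bounds(coeffs)
        x = np.linspace(0, 1, 1001)
        p = np.polyval(coeffs[::-1], x)
        print((lo, hi), (p.min(), p.max()))      # the enclosure contains the true range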

  12. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  13. The da Vinci telerobotic surgical system: the virtual operative field and telepresence surgery.

    PubMed

    Ballantyne, Garth H; Moll, Fred

    2003-12-01

    The United States Department of Defense developed the telepresence surgery concept to meet battlefield demands. The da Vinci telerobotic surgery system evolved from these efforts. In this article, the authors describe the components of the da Vinci system and explain how the surgeon sits at a computer console, views a three-dimensional virtual operative field, and performs the operation by controlling robotic arms that hold the stereoscopic video telescope and surgical instruments that simulate hand motions with seven degrees of freedom. The three-dimensional imaging and handlike motions of the system facilitate advanced minimally invasive thoracic, cardiac, and abdominal procedures. da Vinci has recently released a second generation of telerobots with four arms and will continue to meet the evolving challenges of surgery.

  14. Neuroethics and Disorders of Consciousness: Discerning Brain States in Clinical Practice and Research.

    PubMed

    Fins, Joseph J

    2016-12-01

    Decisions about end-of-life care and participation in clinical research for patients with disorders of consciousness begin with diagnostic discernment. Accurately distinguishing between brain states clarifies clinicians' ethical obligations and responsibilities. Central to this effort is the obligation to provide neuropalliative care for patients in the minimally conscious state who can perceive pain and to restore functional communication through neuroprosthetics, drugs, and rehabilitation to patients with intact but underactivated neural networks. Efforts to bring scientific advances to patients with disorders of consciousness are reviewed, including the investigational use of deep brain stimulation in patients in the minimally conscious state. These efforts help to affirm the civil rights of a population long on the margins. © 2016 American Medical Association. All Rights Reserved.

  15. Using Computational Toxicology to Enable Risk-Based ...

    EPA Pesticide Factsheets

    Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.

  16. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  17. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

    Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. The CernVM virtual machine since version 3 is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  18. A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris

    2008-01-01

    NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material property uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation between single-cell load/deflection data and LS-DYNA predictions showed problems that prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the squared error using genetic optimization. The paper presents the computational methodology along with results obtained using this approach.

  19. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion of the turbine (Rotor 1, Stator 2, and Rotor 2). The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  20. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-discipline cooperation, and the end of 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions into a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. Illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed including trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by the Science Tools corporation, of which the author is a principal.

  1. IMPROVING THE ENVIRONMENTAL PERFORMANCE OF CHEMICAL PROCESSES THROUGH THE USE OF INFORMATION TECHNOLOGY

    EPA Science Inventory

    Efforts are currently underway at the USEPA to develop information technology applications to improve the environmental performance of the chemical process industry. These efforts include the use of genetic algorithms to optimize different process options for minimal environmenta...

  2. Utilization of steel in industrialized highway bridge systems.

    DOT National Transportation Integrated Search

    1974-01-01

    The space frame concept presented in this report represents the results of an effort to minimize on-site construction time while utilizing steel to provide a high quality but competitive type of bridge structure. A necessary part of the effort was th...

  3. Multimodal system for the planning and guidance of bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  4. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
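
    A compact, generic GA sketch of this kind of placement problem is given below; the sensitivity matrix, population sizes, and operators are assumptions for illustration and do not reproduce the paper's formulation or data.

        # Generic GA sketch: individuals are binary sensor-placement vectors, and the
        # fitness penalizes leak pairs whose residual signatures (rows of a
        # hypothetical sensitivity matrix restricted to selected sensors) coincide.
        import numpy as np

        rng = np.random.default_rng(3)
        n_nodes, n_sensors, n_leaks = 12, 3, 12
        sensitivity = (rng.random((n_leaks, n_nodes)) > 0.6).astype(int)   # hypothetical

        def non_isolable_pairs(individual):
            sel = np.flatnonzero(individual)
            signatures = [tuple(row[sel]) for row in sensitivity]
            return sum(signatures[i] == signatures[j]
                       for i in range(n_leaks) for j in range(i + 1, n_leaks))

        def random_individual():
            ind = np.zeros(n_nodes, dtype=int)
            ind[rng.choice(n_nodes, size=n_sensors, replace=False)] = 1
            return ind

        def crossover_and_mutate(a, b):
            child = np.where(rng.random(n_nodes) < 0.5, a, b)   # uniform crossover
            if rng.random() < 0.3:                              # mutation: move one sensor
                ones, zeros = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
                if ones.size and zeros.size:
                    child[rng.choice(ones)] = 0
                    child[rng.choice(zeros)] = 1
            # repair to exactly n_sensors sensors
            while child.sum() > n_sensors:
                child[rng.choice(np.flatnonzero(child))] = 0
            while child.sum() < n_sensors:
                child[rng.choice(np.flatnonzero(child == 0))] = 1
            return child

        population = [random_individual() for _ in range(30)]
        for generation in range(40):
            population.sort(key=non_isolable_pairs)
            parents = population[:10]                           # truncation selection
            population = parents + [crossover_and_mutate(parents[rng.integers(10)],
                                                         parents[rng.integers(10)])
                                    for _ in range(20)]
        best = min(population, key=non_isolable_pairs)
        print(np.flatnonzero(best), non_isolable_pairs(best))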

  5. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  6. STAIRSTEP -- a research-oriented program for undergraduate students at Lamar University

    NASA Astrophysics Data System (ADS)

    Bahrim, Cristian

    2011-03-01

    The relatively low number of undergraduate STEM students in many science disciplines, and in particular in physics, represents a major concern for our faculty and the administration at Lamar University. Therefore, a collaborative effort between several science programs, including computer science, chemistry, geology, mathematics and physics, was set up with the goals of increasing the number of science majors and minimizing attrition. Lamar's Student Advancing through Involvement in Research Student Talent Expansion Program (STAIRSTEP) is an NSF-DUE-sponsored program designed to motivate STEM students to graduate with a science degree from one of these five disciplines by involving them in state-of-the-art research projects and various outreach activities organized on-campus or in road shows at the secondary and high schools. The physics program offers hands-on experience in optics, such as computer-based experiments for studying the diffraction and interference of light incident on nettings or electronic wave packets incident on crystals, with applications in optical imaging, electron microscopy, and crystallography. The impact of the various activities done in STAIRSTEP on our Physics Program will be discussed.

  7. Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV.

    PubMed

    Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa; Bono, Hidemasa

    2012-03-01

    In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but these static contents cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability.

  8. Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV

    PubMed Central

    Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa

    2012-01-01

    In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but these static contents cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability. PMID:21803786

  9. 7 CFR 3430.36 - Procedures to minimize or eliminate duplication of effort.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COOPERATIVE STATE RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE COMPETITIVE AND... may implement appropriate business processes to minimize or eliminate the awarding of CSREES Federal... awards made by other Federal agencies. Business processes may include the review of the Current and...

  10. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)]

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  11. Stereotactic Laser Ablation for Medically Intractable Epilepsy: The Next Generation of Minimally Invasive Epilepsy Surgery

    PubMed Central

    LaRiviere, Michael J.; Gross, Robert E.

    2016-01-01

    Epilepsy is a common, disabling illness that is refractory to medical treatment in approximately one-third of patients, particularly among those with mesial temporal lobe epilepsy. While standard open mesial temporal resection is effective, achieving seizure freedom in most patients, efforts to develop safer, minimally invasive techniques have been underway for over half a century. Stereotactic ablative techniques, in particular, radiofrequency (RF) ablation, were first developed in the 1960s, with refinements in the 1990s with the advent of modern computed tomography and magnetic resonance-based imaging. In the past 5 years, the most recent techniques have used MRI-guided laser interstitial thermotherapy (LITT), the development of which began in the 1980s, saw refinements in MRI thermal imaging through the 1990s, and was initially used primarily for the treatment of intracranial and extracranial tumors. The present review describes the original stereotactic ablation trials, followed by modern imaging-guided RF ablation series for mesial temporal lobe epilepsy. The developments of LITT and MRI thermometry are then discussed. Finally, the two currently available MRI-guided LITT systems are reviewed for their role in the treatment of mesial temporal lobe and other medically refractory epilepsies. PMID:27995127

  12. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals

    PubMed Central

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y.; Miller, Patrick J. O.

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim at a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals. PMID:23857645

  13. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals.

    PubMed

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y; Miller, Patrick J O

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim at a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals.

  14. Finite Element Models for Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Chandra, Umesh

    2012-01-01

    Electron beam freeform fabrication (EBF3) is a member of an emerging class of direct manufacturing processes known as solid freeform fabrication (SFF); another member of the class is the laser deposition process. Successful application of the EBF3 process requires precise control of a number of process parameters, such as the EB power, speed, and metal feed rate, in order to ensure thermal management, achieve good fusion between the substrate and the first layer and between successive layers, minimize part distortion and residual stresses, and control the microstructure of the finished product. This is the only effort thus far that has addressed computer simulation of the EBF3 process. The models developed in this effort can assist in reducing the number of trials in the laboratory or on the shop floor while making high-quality parts. With some modifications, their use can be further extended to the simulation of laser, TIG (tungsten inert gas), and other deposition processes. A solid mechanics-based finite element code, ABAQUS, was chosen as the primary engine in developing these models whereas a computational fluid dynamics (CFD) code, Fluent, was used in a support role. Several innovative concepts were developed, some of which are highlighted below. These concepts were implemented in a number of new computer models either in the form of stand-alone programs or as user subroutines for ABAQUS and Fluent codes. A database of thermo-physical, mechanical, fluid, and metallurgical properties of stainless steel 304 was developed. Computing models for Gaussian and raster modes of the electron beam heat input were developed. Also, new schemes were devised to account for the heat sink effect during the deposition process. These innovations, and others, led to improved models for thermal management and prediction of transient/residual stresses and distortions. Two approaches for the prediction of microstructure were pursued. The first was an empirical approach involving the computation of thermal gradient, solidification rate, and velocity (G, R, V) coupled with the use of a solidification map that should be known a priori. The second approach relies completely on computer simulation. For this purpose a criterion for the prediction of morphology was proposed, which was combined with three alternative models for the prediction of microstructure: one based on solidification kinetics, the second on the phase diagram, and the third on differential scanning calorimetry data. The last was found to be the simplest and the most versatile; it can be used with multicomponent alloys and rapid solidification without any additional difficulty. For the purpose of (limited) experimental validation, finite element models developed in this effort were applied to three different shapes made of stainless steel 304 material, designed expressly for this effort with an increasing level of complexity. These finite element models require large computation time, especially when applied to deposits with multiple adjacent beads and layers. This problem can be overcome, to some extent, by the use of fast, multi-core computers. Also, due to their numerical nature, coupled with the fact that solid mechanics-based models are being used to represent the material behavior in the liquid and vapor phases as well, the models have some inherent approximations that become more pronounced when dealing with multi-bead and multi-layer deposits.
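
    The abstract mentions computing models for Gaussian and raster modes of the electron beam heat input. As a hedged illustration only (the effort's actual heat-input subroutines are not given here), the sketch below evaluates a commonly used Gaussian surface heat-flux model, q(r) = 2*eta*P/(pi*rb^2) * exp(-2*r^2/rb^2); the beam power, absorption efficiency, and effective radius are assumed example values.

```python
import numpy as np

def gaussian_beam_flux(x, y, power_w, efficiency, beam_radius_m, cx=0.0, cy=0.0):
    """Surface heat flux [W/m^2] of a Gaussian beam spot.

    Uses the common form q(r) = (2*eta*P / (pi*rb**2)) * exp(-2*r^2 / rb**2),
    where rb is the effective beam radius. Values used here are illustrative only.
    """
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    peak = 2.0 * efficiency * power_w / (np.pi * beam_radius_m ** 2)
    return peak * np.exp(-2.0 * r2 / beam_radius_m ** 2)

# Sanity check: integrating the flux over the surface recovers the absorbed power.
P, eta, rb = 2000.0, 0.9, 1.0e-3          # 2 kW beam, 90% absorption, 1 mm radius (assumed)
xs = np.linspace(-5 * rb, 5 * rb, 801)
X, Y = np.meshgrid(xs, xs)
q = gaussian_beam_flux(X, Y, P, eta, rb)
dA = (xs[1] - xs[0]) ** 2
print(f"absorbed power ~ {np.sum(q) * dA:.1f} W (target {eta * P:.1f} W)")
```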

  15. Seamless Digital Environment – Data Analytics Use Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna

    Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for, and the design of, an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the LWRS research efforts on Digital Architecture for an Automated Plant, Automated Work Packages, Computer-Based Procedures for Field Workers, and Online Monitoring have all identified the need for a digital architecture and, more importantly, the need for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort on the user's part. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics, employing information from plant sensors and databases to develop improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that will display the output of the analysis in a straightforward, easy to consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service – Palo Verde Nuclear Generating Station, and NextAxiom Inc.

  16. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 x 10^-12 is reasonable.
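
    As a minimal sketch of the two safeguards described, assuming NumPy and standard techniques rather than the report's exact procedures, the code below zeroes values smaller than a tolerance factor and applies one Newton-Schulz refinement step to improve an approximate inverse.

```python
import numpy as np

def round_to_zero(M, tol=1e-13):
    """Zero out entries whose magnitude falls below the tolerance factor."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def improve_inverse(A, X):
    """One Newton-Schulz refinement step: X <- X (2I - A X).

    A standard way to sharpen an approximate inverse; the report's exact
    reinversion procedure may differ.
    """
    n = A.shape[0]
    return X @ (2.0 * np.eye(n) - A @ X)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)
X = np.linalg.inv(A) + 1e-6 * rng.standard_normal((6, 6))   # deliberately perturbed inverse
for label, Xi in [("approximate", X), ("refined", round_to_zero(improve_inverse(A, X)))]:
    print(label, "residual:", np.linalg.norm(A @ Xi - np.eye(6)))
```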

  17. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF FOOD SERVICE EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  18. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION FOR A MANUFACTURER OF METAL FASTENERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  19. 40 CFR 63.6605 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... maintain any affected source, including associated air pollution control equipment and monitoring equipment, in a manner consistent with safety and good air pollution control practices for minimizing emissions. The general duty to minimize emissions does not require you to make any further efforts to reduce...

  20. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF REBUILT INDUSTRIAL CRANKSHAFTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  1. Efforts to reduce mortality to hydroelectric turbine-passed fish: locating and quantifying damaging shear stresses.

    PubMed

    Cada, Glenn; Loar, James; Garrison, Laura; Fisher, Richard; Neitzel, Duane

    2006-06-01

    Severe fluid forces are believed to be a source of injury and mortality to fish that pass through hydroelectric turbines. A process is described by which laboratory bioassays, computational fluid dynamics models, and field studies can be integrated to evaluate the significance of fluid shear stresses that occur in a turbine. Areas containing potentially lethal shear stresses were identified near the stay vanes and wicket gates, runner, and in the draft tube of a large Kaplan turbine. However, under typical operating conditions, computational models estimated that these dangerous areas comprise less than 2% of the flow path through the modeled turbine. The predicted volumes of the damaging shear stress zones did not correlate well with observed fish mortality at a field installation of this turbine, which ranged from less than 1% to nearly 12%. Possible reasons for the poor correlation are discussed. Computational modeling is necessary to develop an understanding of the role of particular fish injury mechanisms, to compare their effects with those of other sources of injury, and to minimize the trial and error previously needed to mitigate those effects. The process we describe is being used to modify the design of hydroelectric turbines to improve fish passage survival.

  2. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  3. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is a sequence of unconstrained minimizations, using Newton's method for each unconstrained minimization. The use of NEWSUMT and the definition of all parameters are described.
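
    NEWSUMT itself is a FORTRAN implementation of the sequence-of-unconstrained-minimizations idea; the sketch below is not NEWSUMT but a generic SUMT loop in Python, using a quadratic exterior penalty (NEWSUMT's own penalty formulation differs) and SciPy's Newton-CG solver for each unconstrained subproblem. The test problem and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Problem: minimize f(x) = (x0-2)^2 + (x1-1)^2 subject to g(x) = x0 + x1 - 1 <= 0.
def f(x):  return (x[0] - 2) ** 2 + (x[1] - 1) ** 2
def g(x):  return x[0] + x[1] - 1.0

def penalty(x, r):
    # Quadratic exterior penalty: only violated constraints contribute.
    return f(x) + r * max(0.0, g(x)) ** 2

def penalty_grad(x, r):
    grad = np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
    if g(x) > 0:
        grad += 2 * r * g(x) * np.array([1.0, 1.0])
    return grad

x, r = np.zeros(2), 1.0
for _ in range(8):                       # sequence of unconstrained minimizations
    res = minimize(penalty, x, args=(r,), jac=penalty_grad, method="Newton-CG")
    x, r = res.x, r * 10.0               # tighten the penalty each pass
print("constrained minimizer ~", x)      # expected near (1, 0) for this problem
```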

  4. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
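
    The abstract does not spell out the framework's decision logic, so the sketch below is only a toy illustration of the kind of cost comparison an offloading decision can reduce to: estimate local versus remote time and energy, and offload when the weighted remote cost is lower. All names and numbers are hypothetical, not the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles the component needs
    data_bytes: float    # data to ship to/from the cloud if offloaded

def should_offload(task, local_hz=1.5e9, cloud_hz=20e9, uplink_bps=5e6,
                   tx_joule_per_bit=1e-7, cpu_joule_per_cycle=1e-9,
                   w_time=1.0, w_energy=1.0):
    """Toy decision rule (hypothetical parameters): offload when the weighted
    remote cost beats local execution. w_time and w_energy trade responsiveness
    against battery drain; a real framework would measure these at runtime."""
    local_cost = w_time * task.cycles / local_hz + w_energy * task.cycles * cpu_joule_per_cycle
    remote_cost = (w_time * (task.data_bytes * 8 / uplink_bps + task.cycles / cloud_hz)
                   + w_energy * task.data_bytes * 8 * tx_joule_per_bit)
    return remote_cost < local_cost

print(should_offload(Task(cycles=5e9, data_bytes=2e5)))   # compute-heavy, small payload -> True
print(should_offload(Task(cycles=1e8, data_bytes=5e7)))   # light compute, big payload -> False
```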

  5. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  6. Utilizing information technology to mitigate the handoff risks caused by resident work hour restrictions.

    PubMed

    Bernstein, Joseph; MacCourt, Duncan C; Jacob, Dan M; Mehta, Samir

    2010-10-01

    Resident duty hours have been restricted to 80 per week, a limitation thought to increase patient safety by allowing adequate sleep. Yet decreasing work hours increases the number of patient exchanges (so-called "handoff") at the end of shifts. WHERE ARE WE NOW?: A greater frequency of handoff leads to an increased risk of physician error. Information technology can be used to minimize that risk. WHERE DO WE NEED TO GO?: A computer-based expert system can alleviate the problems of data omissions and data overload and minimize asynchrony and asymmetry. A smart system can further prompt departing physicians for information that improves their understanding of the patient's condition. Likewise, such a system can take full advantage of multimedia; generate a study record for self-improvement; and strengthen the interaction between specialists jointly managing patients. HOW DO WE GET THERE?: There are impediments to implementation, notably requirements of the Health Insurance Portability and Accountability Act; medical-legal ramifications, and computer programming costs. Nonetheless, the use of smart systems, not to supplant physicians' rational facilities but to supplement them, promises to mitigate the risks of frequent patient handoff and advance patient care. Thus, a concerted effort to promote such smart systems on the part of the Accreditation Council for Graduate Medical Education (the source of the duty hour restrictions) and the Association of American Medical Colleges (representing medical schools and teaching hospitals) may be effective. We propose that these organizations host a contest for the best smart handoff systems and vigorously promote the winners.

  7. Interdepartmental conflict management and negotiation in cardiovascular imaging.

    PubMed

    Otero, Hansel J; Nallamshetty, Leelakrishna; Rybicki, Frank J

    2008-07-01

    Although the relationship between cardiologists and radiologists has a thorny history, advanced cardiac imaging technology and the promise of cardiac computed tomography are forcing both specialties back to the negotiation table. These discussions represent an opportunity for better communication, collaboration, and resource allocation. The authors address the aspects of interdepartmental conflict management and negotiation through their radiology department's ongoing efforts to provide high-quality advanced noninvasive cardiovascular imaging services at a large academic institution. The definition and causes of conflict are described, with a specific focus on noninvasive cardiovascular imaging, followed by a description of steps used in the negotiation process. The authors encourage radiologists to entertain an open dialogue with cardiology, because in many cases, both sides can benefit. The benefits of a negotiated outcome include minimizing internal competitors, incorporating cardiologists' expertise into cardiac imaging algorithms, and more effective training opportunities.

  8. A pluggable framework for parallel pairwise sequence search.

    PubMed

    Archuleta, Jeremy; Feng, Wu-chun; Tilevich, Eli

    2007-01-01

    The current and near future of the computing industry is one of multi-core and multi-processor technology. Most existing sequence-search tools have been designed with a focus on single-core, single-processor systems. This discrepancy between software design and hardware architecture substantially hinders sequence-search performance by not allowing full utilization of the hardware. This paper presents a novel framework that will aid the conversion of serial sequence-search tools into a parallel version that can take full advantage of the available hardware. The framework, which is based on a software architecture called mixin layers with refined roles, enables modules to be plugged into the framework with minimal effort. The inherent modular design improves maintenance and extensibility, thus opening up a plethora of opportunities for advanced algorithmic features to be developed and incorporated while routine maintenance of the codebase persists.
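
    The paper's framework is built on mixin layers with refined roles; as a generic, hedged illustration of that composition style (not the authors' API), the Python sketch below plugs scoring and logging refinements into a toy sequence-search pipeline by stacking mixin classes.

```python
# A generic illustration of the mixin-layer idea: each layer refines one role
# (scoring, running) and new behavior is "plugged in" by composing classes.
# Names and roles here are hypothetical, not the paper's actual framework.

class BaseSearch:
    def score(self, query, target):
        # naive role: count exact position matches
        return sum(q == t for q, t in zip(query, target))

    def run(self, query, database):
        return max(database, key=lambda t: self.score(query, t))

class GapPenaltyMixin:
    """Refines the scoring role without touching the rest of the pipeline."""
    def score(self, query, target):
        base = super().score(query, target)
        return base - abs(len(query) - len(target))   # crude length/gap penalty

class LoggingMixin:
    """Refines the run role to report what is being searched."""
    def run(self, query, database):
        print(f"searching {len(database)} sequences for {query!r}")
        return super().run(query, database)

# Compose layers: the method resolution order stacks the refinements over the base roles.
class MySearch(LoggingMixin, GapPenaltyMixin, BaseSearch):
    pass

db = ["ACGTAC", "ACGTTT", "ACG"]
print(MySearch().run("ACGTAA", db))
```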

  9. Long-term Preservation of Data Analysis Capabilities

    NASA Astrophysics Data System (ADS)

    Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.

    2015-09-01

    While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of data analysis software preservation has hardly been addressed. Efforts by large data centres have contributed so far to maintain some instrument or mission-specific data reduction packages on top of high-level general purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over. This is especially difficult in the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA using new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service for making possible full availability of data analysis and calibration software for decades at minimal cost.

  10. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  11. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
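
    The quantum computer's role in Shor's algorithm is to find the period r of a^x mod N; the rest is classical post-processing, and that part can be illustrated directly. In the hedged sketch below, a brute-force period finder stands in for the quantum subroutine, and the gcd step turns a usable period into a factor; runs with an odd period or a trivial root are discarded, which is why several runs may be needed.

```python
import math, random

def classical_period(a, N):
    """Stand-in for the quantum subroutine: brute-force the order r of a mod N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_postprocessing(N, attempts=20):
    for _ in range(attempts):
        a = random.randrange(2, N)
        d = math.gcd(a, N)
        if d > 1:                      # lucky guess already shares a factor with N
            return d
        r = classical_period(a, N)
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            continue                   # unusable run; the algorithm must be repeated
        f = math.gcd(pow(a, r // 2, N) - 1, N)
        if 1 < f < N:
            return f
    return None

print(shor_classical_postprocessing(15))   # prints a nontrivial factor, e.g. 3 or 5
```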

  12. In vitro screening techniques for reactive metabolites for minimizing bioactivation potential in drug discovery.

    PubMed

    Prakash, Chandra; Sharma, Raman; Gleave, Michelle; Nedderman, Angus

    2008-11-01

    Drug-induced toxicity remains one of the major reasons for failures of new pharmaceuticals, and for the withdrawal of approved drugs from the market. Efforts are being made to reduce attrition of drug candidates, and to minimize their bioactivation potential in the early stages of drug discovery in order to bring safer compounds to the market. Therefore, in addition to potency and selectivity, drug candidates are now selected on the basis of acceptable metabolism/toxicology profiles in preclinical species. To support this, new approaches have been developed, which include extensive in vitro methods using human and animal hepatic cellular and subcellular systems, recombinant human drug metabolizing enzymes, increased automation for higher-throughput screens, sensitive analytical technologies and in silico computational models to assess the metabolism aspects of the new chemical entities. By using these approaches, many compounds that might have serious adverse reactions associated with them are effectively eliminated before reaching clinical trials; however, some toxicities, such as those caused by idiosyncratic responses, are not detected until a drug is in late stages of clinical trials or has become available to the market. One of the proposed mechanisms for the development of idiosyncratic drug toxicity is the bioactivation of drugs to form reactive metabolites by drug metabolizing enzymes. This review discusses the different approaches to, and benefits of, using existing in vitro techniques for the detection of reactive intermediates in order to minimize bioactivation potential in drug discovery.

  13. TED: A Tolerant Edit Distance for segmentation evaluation.

    PubMed

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application-dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
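
    The TED itself is defined through an optimization over split and merge operations; the toy sketch below is not the authors' algorithm, but it conveys the tolerance idea on 1D label arrays by counting only those split and merge errors whose overlap exceeds a pixel tolerance, so small boundary shifts are ignored.

```python
import numpy as np

def tolerant_split_merge_counts(gt, pred, tol=3):
    """Toy illustration (not the actual TED): count split/merge errors while
    ignoring overlaps of at most `tol` pixels, so small boundary shifts are tolerated."""
    gt, pred = np.asarray(gt), np.asarray(pred)
    splits = merges = 0
    for g in np.unique(gt):
        # predicted labels covering this ground-truth segment by more than tol pixels
        _, counts = np.unique(pred[gt == g], return_counts=True)
        splits += max(0, int(np.sum(counts > tol)) - 1)
    for p in np.unique(pred):
        _, counts = np.unique(gt[pred == p], return_counts=True)
        merges += max(0, int(np.sum(counts > tol)) - 1)
    return splits, merges

gt    = [1] * 10 + [2] * 10
pred  = [1] * 12 + [2] * 8            # boundary shifted by only 2 pixels
print(tolerant_split_merge_counts(gt, pred))    # (0, 0): the shift is tolerated
pred2 = [1] * 5 + [3] * 5 + [2] * 10  # ground-truth segment 1 genuinely split
print(tolerant_split_merge_counts(gt, pred2))   # (1, 0)
```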

  14. CFD analysis of turbopump volutes

    NASA Technical Reports Server (NTRS)

    Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken

    1993-01-01

    An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.

  15. Applications of Computer-Assisted Navigation for the Minimally Invasive Reduction of Isolated Zygomatic Arch Fractures.

    PubMed

    Li, Zhi; Yang, Rong-Tao; Li, Zu-Bing

    2015-09-01

    Computer-assisted navigation has been widely used in oral and maxillofacial surgery. The purpose of this study was to describe the applications of computer-assisted navigation for the minimally invasive reduction of isolated zygomatic arch fractures. All patients identified as having isolated zygomatic arch fractures presenting to the authors' department from April 2013 through November 2014 were included in this prospective study. Minimally invasive reductions of isolated zygomatic arch fractures were performed on these patients under the guidance of computer-assisted navigation. The reduction status was evaluated by postoperative computed tomography (CT) 1 week after the operation. Postoperative complications and facial contours were evaluated during follow-up. Functional recovery was evaluated by the difference between the preoperative maximum interincisal mouth opening and that at the final follow-up. Twenty-three patients were included in this case series. The operation proceeded well in all patients. Postoperatively, all patients displayed uneventful healing without postoperative complication. Postoperative CT showed exact reduction in all cases. Satisfactory facial contour and functional recovery were observed in all patients. The preoperative maximal mouth opening ranged from 8 to 25 mm, and the maximal mouth opening at the final follow-up ranged from 36 to 42 mm. Computer-assisted navigation can be used not only for guiding zygomatic arch fracture reduction, but also for assessing reduction. Computer-assisted navigation is an effective and minimally invasive technique that can be applied in the reduction of isolated zygomatic arch fractures. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Understanding crop genetic diversity under modern plant breeding.

    PubMed

    Fu, Yong-Bi

    2015-11-01

    Maximizing crop yield while at the same time minimizing crop failure for sustainable agriculture requires a better understanding of the impacts of plant breeding on crop genetic diversity. This review identifies knowledge gaps and shows the need for more research into genetic diversity changes under plant breeding. Modern plant breeding has made a profound impact on food production and will continue to play a vital role in world food security. For sustainable agriculture, a compromise should be sought between maximizing crop yield under changing climate and minimizing crop failure under unfavorable conditions. Such a compromise requires better understanding of the impacts of plant breeding on crop genetic diversity. Efforts have been made over the last three decades to assess crop genetic diversity using molecular marker technologies. However, these assessments have revealed some temporal diversity patterns that are largely inconsistent with our perception that modern plant breeding reduces crop genetic diversity. An attempt was made in this review to explain such discrepancies by examining empirical assessments of crop genetic diversity and theoretical investigations of genetic diversity changes over time under artificial selection. It was found that many crop genetic diversity assessments were not designed to assess diversity impacts from specific plant breeding programs, while others were experimentally inadequate and contained technical biases from the sampling of cultivars and genomes. Little attention has been paid to theoretical investigations on crop genetic diversity changes from plant breeding. A computer simulation of five simplified breeding schemes showed the substantial effects of plant breeding on the retention of heterozygosity over generations. It is clear that more efforts are needed to investigate crop genetic diversity in space and time under plant breeding to achieve sustainable crop production.
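
    The review mentions a computer simulation of simplified breeding schemes and the retention of heterozygosity. As a hedged, generic illustration (not the review's simulation), the sketch below drifts allele frequencies through a parental bottleneck each generation and compares the resulting expected heterozygosity with the standard drift expectation H_t = H_0 (1 - 1/(2*Ne))^t.

```python
import numpy as np

def simulate_heterozygosity(n_parents=10, n_generations=30, n_loci=500, seed=1):
    """Toy Wright-Fisher sketch: each generation the next crop of lines is bred
    from only `n_parents` selected individuals, so allele frequencies drift with
    an effective size Ne = n_parents. Returns mean expected heterozygosity (2pq)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_loci, 0.5)                                  # all loci start at intermediate frequency
    het = [np.mean(2 * p * (1 - p))]
    for _ in range(n_generations):
        p = rng.binomial(2 * n_parents, p) / (2 * n_parents)  # drift through the parental bottleneck
        het.append(np.mean(2 * p * (1 - p)))
    return het

het = simulate_heterozygosity()
expected = 0.5 * (1 - 1 / (2 * 10)) ** 30
print(f"simulated H after 30 generations: {het[-1]:.3f}  (drift theory: {expected:.3f})")
```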

  17. Proposed Directions for Research in Computer-Based Education.

    ERIC Educational Resources Information Center

    Waugh, Michael L.

    Several directions for potential research efforts in the field of computer-based education (CBE) are discussed. (For the purposes of this paper, CBE is defined as any use of computers to promote learning with no intended inference as to the specific nature or organization of the educational application under discussion.) Efforts should be directed…

  18. 24 CFR 983.254 - Vacancies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... vacancy (and notwithstanding the reasonable good faith efforts of the PHA to fill such vacancies), the PHA... on the PHA waiting list referred by the PHA. (3) The PHA and the owner must make reasonable good faith efforts to minimize the likelihood and length of any vacancy. (b) Reducing number of contract...

  19. From the Floor: Raising Child Care Salaries.

    ERIC Educational Resources Information Center

    Whitebook, Marcy; And Others

    The comprehensive National Child Care Staffing Study confirmed that American children are in jeopardy because their teachers are poorly compensated and minimally trained. An increasing number of local and state efforts have begun to face this crisis head-on. This booklet reviews these efforts, focusing primarily on strategies for raising salaries.…

  20. 77 FR 38298 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-27

    ... of information technology to minimize the information collection burden. 1. Type of Information... issues: (1) supporting CMS' efforts to improve payment accuracy and (2) understanding issues of access.... As a new collection, the information collected is expected to support CMS' efforts to improve the...

  1. Continuous Improvement in State Funded Preschool Programs

    ERIC Educational Resources Information Center

    Jackson, Sarah L.

    2012-01-01

    State funded preschool programs were constantly faced with the need to change in order to address internal and external demands. As programs engaged in efforts towards change, minimal research was available on how to support continuous improvement efforts within the context unique to state funded preschool programs. Guidance available had…

  2. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
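
    The quasi-optimal and heuristic trees of the paper are not reproduced here; as a minimal illustration of what an elimination ordering looks like, the sketch below applies a nested-dissection-style recursive bisection to a structured 2D grid, ordering the two halves before the separator so that fill-in stays low.

```python
def nested_dissection_order(rows, cols):
    """Minimal nested-dissection-style elimination ordering for a rows x cols grid:
    eliminate the two halves first, then the separator row/column last. This is a
    generic illustration, not the paper's quasi-optimal or heuristic trees."""
    def order(r0, r1, c0, c1):
        nr, nc = r1 - r0, c1 - c0
        if nr <= 2 and nc <= 2:                         # small block: eliminate directly
            return [(r, c) for r in range(r0, r1) for c in range(c0, c1)]
        if nc >= nr:                                    # split along the longer dimension
            mid = (c0 + c1) // 2
            return (order(r0, r1, c0, mid) + order(r0, r1, mid + 1, c1)
                    + [(r, mid) for r in range(r0, r1)])
        mid = (r0 + r1) // 2
        return (order(r0, mid, c0, c1) + order(mid + 1, r1, c0, c1)
                + [(mid, c) for c in range(c0, c1)])
    return order(0, rows, 0, cols)

elim_order = nested_dissection_order(4, 4)
assert sorted(elim_order) == [(r, c) for r in range(4) for c in range(4)]
print(elim_order)        # separators appear last, which keeps fill-in low
```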

  3. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paszyńska, A.; Paszyński, M.; Jopek, K.

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  4. Minimizing the influence of unconscious bias in evaluations: a practical guide.

    PubMed

    Goldyne, Adam J

    2007-01-01

    The forensic psychiatrist's efforts to strive for objectivity may be impaired by unrecognized unconscious biases. The author presents a framework for understanding such biases. He then offers a practical approach for individual forensic psychiatrists who want to identify and minimize the influence of previously unrecognized biases on their evaluations.

  5. Decision Making and the Avoidance of Cognitive Demand

    ERIC Educational Resources Information Center

    Kool, Wouter; McGuire, Joseph T.; Rosen, Zev B.; Botvinick, Matthew M.

    2010-01-01

    Behavioral and economic theories have long maintained that actions are chosen so as to minimize demands for exertion or work, a principle sometimes referred to as the "law of less work". The data supporting this idea pertain almost entirely to demands for physical effort. However, the same minimization principle has often been assumed also to…

  6. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF GEAR CASES FOR OUTBOARD MOTORS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  7. Reprocessing Multiyear GPS Data from Continuously Operating Reference Stations on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Yoon, S.

    2016-12-01

    To define a geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data need to be reprocessed regularly. Reprocessing GPS data collected by up to 2000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing campaign was completed in 2011, and a second reprocessing is currently underway. For the first reprocessing effort, in-house computing resources were utilized. In the current second reprocessing effort, an outsourced cloud computing platform is being utilized. In this presentation, the data processing strategy at NGS is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by utilizing the cloud computing approach will also be discussed.
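
    The presentation's actual processing chain is not described in the abstract; the sketch below only illustrates the generic pattern that makes such reprocessing cloud-friendly: independent station-day jobs fanned out to a pool of workers. The function and job labels are hypothetical placeholders for the real GNSS processing step.

```python
from multiprocessing import Pool
from itertools import product

def process_station_day(job):
    """Hypothetical per-unit task: reprocess one station for one day.
    In a real pipeline this would invoke the GNSS processing engine."""
    station, day = job
    return station, day, f"solution({station},{day})"    # placeholder result

if __name__ == "__main__":
    stations = [f"SITE{i:03d}" for i in range(8)]
    days = [f"2015-{d:03d}" for d in range(1, 6)]         # day-of-year style labels
    jobs = list(product(stations, days))                  # independent units of work
    with Pool(processes=4) as pool:                       # scale workers to the cloud VM size
        results = pool.map(process_station_day, jobs)
    print(f"completed {len(results)} station-day solutions")
```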

  8. Development and implementation of a method for solving the laminar and turbulent boundary layer equations

    NASA Astrophysics Data System (ADS)

    Leuca, Maxim

    CFD (Computational Fluid Dynamics) is a computational tool for studying flow in science and technology. The aerospace industry increasingly uses CFD in the modeling and design phases of aircraft, so the precision with which boundary layer phenomena are simulated is very important. Research efforts are focused on optimizing the aerodynamic performance of airfoils to predict drag and delay the laminar-turbulent transition. CFD codes must be fast and efficient to model complex geometries for aerodynamic flows. Resolving the boundary layer equations for viscous flows requires a large amount of computing resources: CFD codes commonly used to simulate aerodynamic flows require extremely fine wall-normal meshes, and the calculations are consequently very expensive. This thesis proposes a new approach for solving the laminar and turbulent boundary layer equations based on the finite difference method. Integrated into a panel code, this approach allows airfoils to be analyzed while avoiding the iterative algorithms that are usually costly in computing time and often raise convergence problems. The main advantages of panel methods are their simplicity and their ability to obtain, with minimal computational effort, solutions in complex flow conditions for relatively complicated configurations. To verify and validate the developed program, experimental data are used as references when available. The Xfoil code is used to provide pseudo-reference data; in the absence of experimental data, two codes cannot truly be compared against each other. Xfoil has proven to be accurate and inexpensive in computing resources. Developed by Drela (1985), it uses a two-equation integral boundary-layer method to design and analyze wing profiles at low speed (Drela and Youngren, 2014), (Drela, 2003). The NACA 0012, NACA 4412, and ATR-42 airfoils were used for this study. For the NACA 0012 and NACA 4412 airfoils, the calculations were made at Mach number M = 0.17 and Reynolds number Re = 6x10^6, conditions for which experimental results are available. For the ATR-42 airfoil, the calculations were made at Mach number M = 0.1 and Reynolds number Re = 536450, as analysed in LARCASE's Price-Paidoussis wind tunnel. Keywords: boundary layer, direct method, displacement thickness, finite differences, Xfoil code.
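
    The thesis's coupled finite-difference/panel-method solver is not reproduced here; as a minimal, self-contained illustration of computing a laminar boundary-layer profile numerically, the sketch below solves the Blasius similarity equation f''' + (1/2) f f'' = 0 by shooting on the wall shear f''(0), recovering the classical value of about 0.332 and the displacement-thickness constant of about 1.721.

```python
import numpy as np

def blasius_rhs(y):
    # y = [f, f', f''] for the Blasius equation f''' + 0.5 * f * f'' = 0
    return np.array([y[1], y[2], -0.5 * y[0] * y[2]])

def integrate(fpp0, eta_max=10.0, n=2000):
    """RK4 march of the Blasius ODE from eta = 0 with guessed wall shear f''(0) = fpp0."""
    h = eta_max / n
    y = np.array([0.0, 0.0, fpp0])
    for _ in range(n):
        k1 = blasius_rhs(y)
        k2 = blasius_rhs(y + 0.5 * h * k1)
        k3 = blasius_rhs(y + 0.5 * h * k2)
        k4 = blasius_rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Shooting: adjust f''(0) until f'(eta_max) -> 1, the outer (inviscid) velocity.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid)[1] > 1.0:
        hi = mid
    else:
        lo = mid
fpp0 = 0.5 * (lo + hi)
f, fp, fpp = integrate(fpp0)
print(f"f''(0) = {fpp0:.4f} (classical value ~0.3321)")
print(f"displacement-thickness constant eta_max - f(eta_max) = {10.0 - f:.4f} (~1.7208)")
```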

  9. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano Kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
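
    The dissertation's convexity- and concavity-preserving interpolant is not reconstructed here; as a hedged illustration of why shape preservation matters, the sketch below compares a plain cubic spline, which typically overshoots near an abrupt change, with SciPy's monotone PCHIP interpolant, which stays within the data range.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Data with an abrupt change: a plain cubic spline tends to over/undershoot,
# while the shape-preserving PCHIP interpolant remains monotone between points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

xx = np.linspace(0, 5, 501)
spline = CubicSpline(x, y)(xx)
pchip = PchipInterpolator(x, y)(xx)

print("cubic spline range:", spline.min().round(3), "to", spline.max().round(3))  # typically outside [0, 1]
print("PCHIP range       :", pchip.min().round(3), "to", pchip.max().round(3))    # stays within [0, 1]
```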

  10. Prediction of forces and moments for flight vehicle control effectors: Workplan

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1989-01-01

    Two research activities directed at hypersonic vehicle configurations are currently underway. The first involves the validation of a number of classical local surface inclination methods commonly employed in preliminary design studies of hypersonic flight vehicles. Unlike studies aimed at validating such methods for predicting overall vehicle aerodynamics, this effort emphasizes validating the prediction of forces and moments for flight control studies. Specifically, several vehicle configurations for which experimental or flight-test data are available are being examined. By comparing the theoretical predictions with these data, the strengths and weaknesses of the local surface inclination methods can be ascertained and possible improvements suggested. The second research effort, of significance to control during take-off and landing of most proposed hypersonic vehicle configurations, is aimed at determining the change due to ground effect in control effectiveness of highly swept delta planforms. Central to this research is the development of a vortex-lattice computer program which incorporates an unforced trailing vortex sheet and an image ground plane. With this program, the change in pitching moment of the basic vehicle due to ground proximity, and whether or not there is sufficient control power available to trim, can be determined. In addition to the current work, two different research directions are suggested for future study. The first is aimed at developing an interactive computer program to assist the flight controls engineer in determining the forces and moments generated by different types of control effectors that might be used on hypersonic vehicles. The first phase of this work would deal with the subsonic portion of the flight envelope, while later efforts would explore the supersonic/hypersonic flight regimes. The second proposed research direction would explore methods for determining the aerodynamic trim drag of a generic hypersonic flight vehicle and ways in which it can be minimized through vehicle design and trajectory optimization.
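
    The vortex-lattice program with an image ground plane is not available here; the hedged 2D sketch below shows the image-plane idea in its simplest form: a point vortex at height h above the ground is mirrored by an opposite-sign image at -h, and the velocity the image induces at the real vortex grows as the ground is approached.

```python
import numpy as np

def induced_velocity_2d(gamma, xv, zv, x, z):
    """Velocity (u, w) induced at (x, z) by a 2D point vortex of strength gamma at (xv, zv)."""
    dx, dz = x - xv, z - zv
    r2 = dx * dx + dz * dz
    return gamma / (2 * np.pi * r2) * np.array([dz, -dx])

def ground_effect_velocity(gamma, h):
    """Image-plane model: a vortex at height h above z = 0 plus its mirror of
    opposite strength at -h. Returns the velocity the image induces at the real
    vortex location, which grows as the ground is approached."""
    return induced_velocity_2d(-gamma, 0.0, -h, 0.0, h)

for h in (5.0, 1.0, 0.25):
    u, w = ground_effect_velocity(gamma=1.0, h=h)
    print(f"h = {h:5.2f}:  image-induced u = {u:+.4f}, w = {w:+.4f}")
```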

  11. A true minimally invasive approach for cochlear implantation: high accuracy in cranial base navigation through flat-panel-based volume computed tomography.

    PubMed

    Majdani, Omid; Bartling, Soenke H; Leinung, Martin; Stöver, Timo; Lenarz, Minoo; Dullin, Christian; Lenarz, Thomas

    2008-02-01

    High-precision intraoperative navigation using high-resolution flat-panel volume computed tomography makes minimally invasive cochlear implant surgery, including cochleostomy, feasible. Conventional cochlear implant surgery is typically performed via mastoidectomy with facial recess to identify and avoid damage to vital anatomic landmarks. To accomplish this procedure via a minimally invasive approach--without performing mastoidectomy--in a precise fashion, image-guided technology is necessary. With such an approach, surgical time and expertise may be reduced, and hearing preservation may be improved. Flat-panel volume computed tomography was used to scan 4 human temporal bones. A drilling channel was planned preoperatively from the mastoid surface to the round window niche, providing a margin of safety to all functionally important structures (e.g., facial nerve, chorda tympani, incus). Postoperatively, computed tomographic imaging and conventional surgical exploration of the drilled route to the cochlea were performed. All 4 specimens showed a cochleostomy located at the scala tympani anterior inferior to the round window. The chorda tympani was damaged in 1 specimen--this was preoperatively planned, as a narrow facial recess was encountered. Using flat-panel volume computed tomography for image-guided surgical navigation, we were able to perform minimally invasive cochlear implant surgery, defined as a narrow, single-channel mastoidotomy with cochleostomy. Although this finding is preliminary, it is technologically achievable.

  12. Neural Signatures of Value Comparison in Human Cingulate Cortex during Decisions Requiring an Effort-Reward Trade-off

    PubMed Central

    Kennerley, Steven W.; Friston, Karl; Bestmann, Sven

    2016-01-01

    Integrating costs and benefits is crucial for optimal decision-making. Although much is known about decisions that involve outcome-related costs (e.g., delay, risk), many of our choices are attached to actions and require an evaluation of the associated motor costs. Yet how the brain incorporates motor costs into choices remains largely unclear. We used human fMRI during choices involving monetary reward and physical effort to identify brain regions that serve as a choice comparator for effort-reward trade-offs. By independently varying both options' effort and reward levels, we were able to identify the neural signature of a comparator mechanism. A network involving supplementary motor area and the caudal portion of dorsal anterior cingulate cortex encoded the difference in reward (positively) and effort levels (negatively) between chosen and unchosen choice options. We next modeled effort-discounted subjective values using a novel behavioral model. This revealed that the same network of regions involving dorsal anterior cingulate cortex and supplementary motor area encoded the difference between the chosen and unchosen options' subjective values, and that activity was best described using a concave model of effort-discounting. In addition, this signal reflected how precisely value determined participants' choices. By contrast, separate signals in supplementary motor area and ventromedial prefrontal cortex correlated with participants' tendency to avoid effort and seek reward, respectively. This suggests that the critical neural signature of decision-making for choices involving motor costs is found in human cingulate cortex and not ventromedial prefrontal cortex as typically reported for outcome-based choice. Furthermore, distinct frontal circuits seem to drive behavior toward reward maximization and effort minimization. SIGNIFICANCE STATEMENT The neural processes that govern the trade-off between expected benefits and motor costs remain largely unknown. This is striking because energetic requirements play an integral role in our day-to-day choices and instrumental behavior, and a diminished willingness to exert effort is a characteristic feature of a range of neurological disorders. We use a new behavioral characterization of how humans trade off reward maximization with effort minimization to examine the neural signatures that underpin such choices, using BOLD MRI neuroimaging data. We find the critical neural signature of decision-making, a signal that reflects the comparison of value between choice options, in human cingulate cortex, whereas two distinct brain circuits drive behavior toward reward maximization or effort minimization. PMID:27683898
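
    The paper reports that a concave model of effort-discounting best described the data; the sketch below is a generic, illustrative member of that model class (a power-law effort cost with exponent below one) combined with a softmax choice rule. It is not the authors' fitted model, and all parameter values are assumptions.

```python
import numpy as np

def subjective_value(reward, effort, k=0.6, p=0.7):
    """One illustrative concave effort-discounting form: SV = reward - k * effort**p.
    With p < 1 the marginal cost of extra effort shrinks, i.e. discounting is concave
    in effort. The functional form and parameters are illustrative, not the paper's fit."""
    return reward - k * effort ** p

def p_choose_first(option_a, option_b, beta=3.0):
    """Softmax choice rule on the difference in subjective value between the two options."""
    dv = subjective_value(*option_a) - subjective_value(*option_b)
    return 1.0 / (1.0 + np.exp(-beta * dv))

# (reward, effort) pairs: higher reward but much more effort vs. modest reward, low effort
print(p_choose_first((2.0, 4.0), (1.0, 0.5)))
```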

  13. Neural Signatures of Value Comparison in Human Cingulate Cortex during Decisions Requiring an Effort-Reward Trade-off.

    PubMed

    Klein-Flügge, Miriam C; Kennerley, Steven W; Friston, Karl; Bestmann, Sven

    2016-09-28

    Integrating costs and benefits is crucial for optimal decision-making. Although much is known about decisions that involve outcome-related costs (e.g., delay, risk), many of our choices are attached to actions and require an evaluation of the associated motor costs. Yet how the brain incorporates motor costs into choices remains largely unclear. We used human fMRI during choices involving monetary reward and physical effort to identify brain regions that serve as a choice comparator for effort-reward trade-offs. By independently varying both options' effort and reward levels, we were able to identify the neural signature of a comparator mechanism. A network involving supplementary motor area and the caudal portion of dorsal anterior cingulate cortex encoded the difference in reward (positively) and effort levels (negatively) between chosen and unchosen choice options. We next modeled effort-discounted subjective values using a novel behavioral model. This revealed that the same network of regions involving dorsal anterior cingulate cortex and supplementary motor area encoded the difference between the chosen and unchosen options' subjective values, and that activity was best described using a concave model of effort-discounting. In addition, this signal reflected how precisely value determined participants' choices. By contrast, separate signals in supplementary motor area and ventromedial prefrontal cortex correlated with participants' tendency to avoid effort and seek reward, respectively. This suggests that the critical neural signature of decision-making for choices involving motor costs is found in human cingulate cortex and not ventromedial prefrontal cortex as typically reported for outcome-based choice. Furthermore, distinct frontal circuits seem to drive behavior toward reward maximization and effort minimization. The neural processes that govern the trade-off between expected benefits and motor costs remain largely unknown. This is striking because energetic requirements play an integral role in our day-to-day choices and instrumental behavior, and a diminished willingness to exert effort is a characteristic feature of a range of neurological disorders. We use a new behavioral characterization of how humans trade off reward maximization with effort minimization to examine the neural signatures that underpin such choices, using BOLD MRI neuroimaging data. We find the critical neural signature of decision-making, a signal that reflects the comparison of value between choice options, in human cingulate cortex, whereas two distinct brain circuits drive behavior toward reward maximization or effort minimization. Copyright © 2016 Klein-Flügge et al.
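
    As a concrete illustration of the kind of effort-discounted value model described above (a sketch only, not the authors' exact formulation), the snippet below assumes a power-law effort cost with hypothetical parameters k and p, and a softmax choice rule whose inverse temperature beta plays the role of how precisely value determines choice.

```python
import numpy as np

def subjective_value(reward, effort, k=0.1, p=2.0):
    # Power-law effort cost: with p > 1, SV is a concave function of effort,
    # i.e. low efforts are discounted little and high efforts heavily.
    # k and p are illustrative parameters, not fitted values from the paper.
    return reward - k * effort ** p

def choice_probability(sv_left, sv_right, beta=3.0):
    # Softmax choice rule; beta captures how precisely value drives choice.
    return 1.0 / (1.0 + np.exp(-beta * (sv_left - sv_right)))

# Example: high-reward/high-effort option vs. low-reward/low-effort option.
sv_a = subjective_value(reward=10.0, effort=8.0)   # 10 - 0.1*64 = 3.6
sv_b = subjective_value(reward=6.0, effort=2.0)    # 6 - 0.1*4 = 5.6
print(sv_a, sv_b, choice_probability(sv_a, sv_b))
```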

  14. High Positive End-Expiratory Pressure Renders Spontaneous Effort Noninjurious.

    PubMed

    Morais, Caio C A; Koyama, Yukiko; Yoshida, Takeshi; Plens, Glauco M; Gomes, Susimeire; Lima, Cristhiano A S; Ramos, Ozires P S; Pereira, Sérgio M; Kawaguchi, Naomasa; Yamamoto, Hirofumi; Uchiyama, Akinori; Borges, João B; Vidal Melo, Marcos F; Tucci, Mauro R; Amato, Marcelo B P; Kavanagh, Brian P; Costa, Eduardo L V; Fujino, Yuji

    2018-05-15

    In acute respiratory distress syndrome (ARDS), atelectatic solid-like lung tissue impairs transmission of negative swings in pleural pressure (Ppl) that result from diaphragmatic contraction. The localization of more negative Ppl proportionally increases dependent lung stretch by drawing gas either from other lung regions (e.g., nondependent lung [pendelluft]) or from the ventilator. Lowering the level of spontaneous effort and/or converting solid-like to fluid-like lung might render spontaneous effort noninjurious. To determine whether spontaneous effort increases dependent lung injury, and whether such injury would be reduced by recruiting atelectatic solid-like lung with positive end-expiratory pressure (PEEP). Established models of severe ARDS (rabbit, pig) were used. Regional histology (rabbit), inflammation (positron emission tomography; pig), regional inspiratory Ppl (intrabronchial balloon manometry), and stretch (electrical impedance tomography; pig) were measured. Respiratory drive was evaluated in 11 patients with ARDS. Although injury during muscle paralysis was predominantly in nondependent and middle lung regions at low (vs. high) PEEP, strong inspiratory effort increased injury (indicated by positron emission tomography and histology) in dependent lung. Stronger effort (vs. muscle paralysis) caused local overstretch and greater tidal recruitment in dependent lung, where more negative Ppl was localized and greater stretch was generated. In contrast, high PEEP minimized lung injury by more uniformly distributing negative Ppl, and lowering the magnitude of spontaneous effort (i.e., deflection in esophageal pressure observed in rabbits, pigs, and patients). Strong effort increased dependent lung injury, where higher local lung stress and stretch were generated; effort-dependent lung injury was minimized by high PEEP in severe ARDS, which may offset the need for paralysis.

  15. Technology and Sexuality--What's the Connection? Addressing Youth Sexualities in Efforts to Increase Girls' Participation in Computing

    ERIC Educational Resources Information Center

    Ashcraft, Catherine

    2015-01-01

    To date, girls and women are significantly underrepresented in computer science and technology. Concerns about this underrepresentation have sparked a wealth of educational efforts to promote girls' participation in computing, but these programs have demonstrated limited impact on reversing current trends. This paper argues that this is, in part,…

  16. Computer simulation and performance assessment of the packet-data service of the Aeronautical Mobile Satellite Service (AMSS)

    NASA Technical Reports Server (NTRS)

    Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory

    1995-01-01

    The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.

  17. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated the minimally required feedback elements of a computer-tailored dietary fat reduction intervention to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  18. Evaluation of outdoor-to-indoor response to minimized sonic booms

    NASA Technical Reports Server (NTRS)

    Brown, David; Sutherland, Louis C.

    1992-01-01

    Various studies were conducted by NASA and others on the practical limitations of sonic boom signature shaping/minimization for the High-Speed Civil Transport (HSCT) and on the effects of these shaped boom signatures on perceived loudness. This current effort is a further part of this research with emphasis on examining shaped boom signatures which are representative of the most recent investigations of practical limitations on sonic boom minimization, and on examining and comparing the expected response to these signatures when experienced indoors and outdoors.

  19. AlaScan: A Graphical User Interface for Alanine Scanning Free-Energy Calculations.

    PubMed

    Ramadoss, Vijayaraj; Dehez, François; Chipot, Christophe

    2016-06-27

    Computation of the free-energy changes that underlie molecular recognition and association has gained significant importance due to its considerable potential in drug discovery. The massive increase of computational power in recent years substantiates the application of more accurate theoretical methods for the calculation of binding free energies. The impact of such advances is the application of parent approaches, like computational alanine scanning, to investigate in silico the effect of amino-acid replacement in protein-ligand and protein-protein complexes, or probe the thermostability of individual proteins. Because human effort represents a significant cost that precludes the routine use of this form of free-energy calculations, minimizing manual intervention constitutes a stringent prerequisite for any such systematic computation. With this objective in mind, we propose a new plug-in, referred to as AlaScan, developed within the popular visualization program VMD to automate the major steps in alanine-scanning calculations, employing free-energy perturbation as implemented in the widely used molecular dynamics code NAMD. The AlaScan plug-in can be utilized upstream, to prepare input files for selected alanine mutations. It can also be utilized downstream to perform the analysis of different alanine-scanning calculations and to report the free-energy estimates in a user-friendly graphical user interface, allowing favorable mutations to be identified at a glance. The plug-in also assists the end-user in assessing the reliability of the calculation through rapid visual inspection.
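
    The plug-in itself is not reproduced here, but the free-energy perturbation machinery it drives rests on exponential averaging of energy differences between the wild-type and alanine-mutated states. The sketch below shows the Zwanzig estimator for a single perturbation window, with a hypothetical array of energy differences standing in for simulation output.

```python
import numpy as np

kT = 0.593  # kcal/mol at roughly 298 K

def fep_delta_G(delta_U, kT=kT):
    """Zwanzig free-energy perturbation estimator.

    delta_U: potential-energy differences U_mutant - U_wildtype evaluated
    on configurations sampled from the wild-type ensemble (illustrative).
    """
    delta_U = np.asarray(delta_U, dtype=float)
    return -kT * np.log(np.mean(np.exp(-delta_U / kT)))

# Hypothetical per-window energy differences (kcal/mol) for one Ala mutation.
dU_forward = np.random.normal(loc=1.2, scale=0.5, size=5000)
print("Estimated dG for this window:", fep_delta_G(dU_forward))
```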

  20. Causal Attribution: A New Scale Developed to Minimize Existing Methodological Problems.

    ERIC Educational Resources Information Center

    Bull, Kay Sather; Feuquay, Jeffrey P.

    In order to facilitate research on the construct of causal attribution, this paper details developmental procedures used to minimize previous deficiencies and proposes a new scale. The first version of the scale was in ipsative form and provided two basic sets of indices: (1) ability, effort, luck, and task difficulty indices in success and…

  1. Early Success Is Vital in Minimal Worksite Wellness Interventions at Small Worksites

    ERIC Educational Resources Information Center

    Ablah, Elizabeth; Dong, Frank; Konda, Kurt; Konda, Kelly; Armbruster, Sonja; Tuttle, Becky

    2015-01-01

    Intervention: In an effort to increase physical activity, 15 workplaces participated in a minimal-contact 10,000-steps-a-day program sponsored by the Sedgwick County Health Department in 2007 and 2008. Pedometers were provided to measure participants' weekly steps for the 10-week intervention. Method: Participants were defined as those who…

  2. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
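
    As a sketch of the quantities involved (not the authors' code), the snippet below builds a Hankel matrix from an output sequence and evaluates its nuclear norm, the sum of singular values that serves as the convex surrogate for rank in NN-SI; a Tabu Search would then perturb candidate model parameters while keeping recently visited solutions on a tabu list and retaining the best-scoring candidate.

```python
import numpy as np

def block_hankel(y, num_block_rows):
    """Build a Hankel matrix from a 1-D output sequence y."""
    y = np.asarray(y, dtype=float)
    cols = len(y) - num_block_rows + 1
    return np.array([y[i:i + cols] for i in range(num_block_rows)])

def nuclear_norm(matrix):
    # Sum of singular values: the convex surrogate for matrix rank that
    # nuclear-norm subspace identification minimizes.
    return np.linalg.svd(matrix, compute_uv=False).sum()

# Hypothetical measured output of a low-order system plus noise.
t = np.arange(200)
y = np.sin(0.2 * t) * 0.9 ** t + 0.01 * np.random.randn(t.size)
H = block_hankel(y, num_block_rows=20)
print("Nuclear norm of the output Hankel matrix:", nuclear_norm(H))
```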

  3. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As it is well-known, the passive THz camera allows seeing concealed object without contact with a person and this camera is non-dangerous for a person. Obviously, efficiency of using the passive THz camera depends on its temperature resolution. This characteristic specifies possibilities of the detection for concealed object: minimal size of the object; maximal distance of the detection; image quality. Computer processing of the THz image may lead to many times improving of the image quality without any additional engineering efforts. Therefore, developing of modern computer code for its application to THz images is urgent problem. Using appropriate new methods one may expect such temperature resolution which will allow to see banknote in pocket of a person without any real contact. Modern algorithms for computer processing of THz images allow also to see object inside the human body using a temperature trace on the human skin. This circumstance enhances essentially opportunity of passive THz camera applications for counterterrorism problems. We demonstrate opportunities, achieved at present time, for the detection both of concealed objects and of clothes components due to using of computer processing of images captured by passive THz cameras, manufactured by various companies. Another important result discussed in the paper consists in observation of both THz radiation emitted by incandescent lamp and image reflected from ceramic floorplate. We consider images produced by THz passive cameras manufactured by Microsemi Corp., and ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images under consideration in this paper were developed by Russian part of author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  4. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. At first, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and a field scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, compute nodes at the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), are used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis where thousands of compute nodes can be efficiently utilized.
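
    The HGC5 implementation uses MPI inside the simulation code and is not shown here; the sketch below only illustrates the underlying idea of the Levenberg-Marquardt step, namely that each forward-difference Jacobian column needs one extra model run and the columns are independent, so they can be farmed out one per worker (here with Python's process pool and a toy forward model standing in for the real simulator).

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def forward_model(params):
    # Stand-in for an expensive groundwater simulation.
    x = np.linspace(0.0, 1.0, 50)
    return params[0] * np.exp(-params[1] * x) + params[2]

def jacobian_column(args):
    params, j, h, base = args
    perturbed = params.copy()
    perturbed[j] += h
    # Forward difference: one extra model run per adjustable parameter.
    return (forward_model(perturbed) - base) / h

def parallel_jacobian(params, h=1e-6):
    base = forward_model(params)
    tasks = [(params, j, h, base) for j in range(len(params))]
    with ProcessPoolExecutor() as pool:
        cols = list(pool.map(jacobian_column, tasks))
    return np.column_stack(cols)

if __name__ == "__main__":
    J = parallel_jacobian(np.array([2.0, 5.0, 0.3]))
    print(J.shape)  # (50, 3): one column per adjustable parameter
```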

  5. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
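
    The Roundup/Elastic MapReduce deployment logic is not given in the abstract; the fragment below is only a sketch of the scheduling idea, assuming a hypothetical runtime model proportional to the product of genome sizes and submitting the longest-running comparisons first so that a fixed-size cluster stays busy to the end of the batch.

```python
# Hypothetical runtime model: runtime grows with the product of genome sizes.
def estimated_runtime(genome_a_size, genome_b_size, coeff=1.0e-9):
    return coeff * genome_a_size * genome_b_size  # illustrative units

def order_jobs(genome_pairs, sizes):
    # Longest-estimated jobs first, so stragglers start early and the
    # fixed-size cluster stays busy until the end of the batch.
    return sorted(
        genome_pairs,
        key=lambda pair: estimated_runtime(sizes[pair[0]], sizes[pair[1]]),
        reverse=True,
    )

sizes = {"genomeA": 3.2e9, "genomeB": 1.2e8, "genomeC": 4.6e6}
pairs = [("genomeA", "genomeB"), ("genomeB", "genomeC"), ("genomeA", "genomeC")]
print(order_jobs(pairs, sizes))
```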

  6. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 2, Assessment and verification results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L L; Trent, D S; Budden, M J

    During the course of the TEMPEST computer code development a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor application. 47 refs., 94 figs., 6 tabs.

  7. 42 CFR 441.182 - Maintenance of effort: Computation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...

  8. An accurate binding interaction model in de novo computational protein design of interactions: if you build it, they will bind.

    PubMed

    London, Nir; Ambroggio, Xavier

    2014-02-01

    Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  10. Distributed parameter statics of magnetic catheters.

    PubMed

    Tunay, Ilker

    2011-01-01

    We discuss how to use special Cosserat rod theory for deriving distributed-parameter static equilibrium equations of magnetic catheters. These medical devices are used for minimally-invasive diagnostic and therapeutic procedures and can be operated remotely or controlled by automated algorithms. The magnetic material can be lumped in rigid segments or distributed in flexible segments. The position vector of the cross-section centroid and quaternion representation of an orthonormal triad are selected as DOF. The strain energy for transversely isotropic, hyperelastic rods is augmented with the mechanical potential energy of the magnetic field and a penalty term to enforce the quaternion unity constraint. Numerical solution is found by 1D finite elements. Material properties of polymer tubes in extension, bending and twist are determined by mechanical and magnetic experiments. Software experiments with commercial FEM software indicate that the computational effort with the proposed method is at least one order of magnitude less than standard 3D FEM.
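
    The augmented functional described above can be sketched as follows (an assumed form with a simple quadratic penalty; the paper's exact expression may differ), where r(s) is the centroid position, q(s) the orientation quaternion, m the magnetization per unit length, B the applied field, and kappa the penalty weight enforcing unit quaternions:

```latex
\Pi(\mathbf{r},\mathbf{q}) = E_{\mathrm{elastic}}(\mathbf{r},\mathbf{q})
  - \int_0^L \mathbf{m}(s)\cdot\mathbf{B}\bigl(\mathbf{r}(s)\bigr)\,\mathrm{d}s
  + \kappa \int_0^L \bigl(\mathbf{q}(s)\cdot\mathbf{q}(s) - 1\bigr)^2\,\mathrm{d}s
```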

  11. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  12. Indicators for the automated analysis of drug prescribing quality.

    PubMed

    Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A

    1998-01-01

    Irrational and inconsistent drug prescription has considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed the methodology of processing the information contained in these drug orders used to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators for the quality of drug prescriptions which can be derived from a drug order. These indicators were constructed using explicit a priori criteria which were previously validated on the basis of scientific data. Automatic computation is straightforward, using a relational database system, such that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach by using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. Our study may result in direct and wide applications in the epidemiology of medical practice and in quality control procedures.

  13. Generating Stimuli for Neuroscience Using PsychoPy.

    PubMed

    Peirce, Jonathan W

    2008-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
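
    A minimal sketch of the kind of script the library supports is shown below (hypothetical stimulus parameters; a drifting Gabor presented for two seconds using the standard visual, core and event modules):

```python
from psychopy import core, event, visual

# Open a window and create a drifting Gabor patch (illustrative parameters).
win = visual.Window(size=(800, 600), units="pix", fullscr=False)
gabor = visual.GratingStim(win, tex="sin", mask="gauss", size=256, sf=0.02)

clock = core.Clock()
while clock.getTime() < 2.0 and not event.getKeys(keyList=["escape"]):
    gabor.setPhase(0.05, "+")   # advance the grating phase each frame
    gabor.draw()
    win.flip()                  # buffer swap, locked to the monitor refresh

win.close()
core.quit()
```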

  14. Simulation of hypersonic rarefied flows with the immersed-boundary method

    NASA Astrophysics Data System (ADS)

    Bruno, D.; De Palma, P.; de Tullio, M. D.

    2011-05-01

    This paper provides a validation of an immersed boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equation and is validated versus numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, it is possible to obtain the partitioned grids to be assigned to each processor with a minimal effort by the user. This allows one to by-pass the expensive (in terms of time and human resources) classical generation process of a body fitted grid. First-order slip-velocity boundary conditions are employed and tested for taking into account rarefied gas effects.
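
    The first-order slip conditions mentioned above are commonly written in the Maxwell/Smoluchowski form shown below (a generic statement, not necessarily the paper's exact implementation), where lambda is the mean free path, sigma_v and sigma_T the momentum and thermal accommodation coefficients, and n the wall-normal direction:

```latex
u_s - u_w = \frac{2-\sigma_v}{\sigma_v}\,\lambda
            \left.\frac{\partial u}{\partial n}\right|_{w},
\qquad
T_s - T_w = \frac{2-\sigma_T}{\sigma_T}\,\frac{2\gamma}{\gamma+1}\,
            \frac{\lambda}{\Pr}
            \left.\frac{\partial T}{\partial n}\right|_{w}
```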

  15. Central American information system for energy planning (in English; Spanish)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, M.G.; Lyon, P.C.; Heskett, J.C.

    1991-04-01

    SICAPE (Sistema de Information Centroamericano para Planificacion Energetica) is an expandable information system designed for energy planning. Its objective is to satisfy ongoing information requirements by means of a menu-driven operational environment. SICAPE is as easily used by the novice computer user as by those with more experience. Moreover, the system is capable of evolving concurrently with future requirements of the individual country. The expansion is accomplished by menu restructuring as data and user requirements change. The new menu configurations require no programming effort. The use and modification of SICAPE are separate menu-driven processes that allow for rapid data query, minimal training, and effortless continued growth. SICAPE's data is organized by country or region. Information is available in the following areas: energy balance, macro economics, electricity generation capacity, and electricity and petroleum product pricing. (JF)

  16. Miniature Bioreactor System for Long-Term Cell Culture

    NASA Technical Reports Server (NTRS)

    Gonda, Steve R.; Kleis, Stanley J.; Geffert, Sandara K.

    2010-01-01

    A prototype miniature bioreactor system is designed to serve as a laboratory benchtop cell-culturing system that minimizes the need for relatively expensive equipment and reagents and can be operated under computer control, thereby reducing the time and effort required of human investigators and reducing uncertainty in results. The system includes a bioreactor, a fluid-handling subsystem, a chamber wherein the bioreactor is maintained in a controlled atmosphere at a controlled temperature, and associated control subsystems. The system can be used to culture both anchorage-dependent and suspension cells, which can be either prokaryotic or eukaryotic. Cells can be cultured for extended periods of time in this system, and samples of cells can be extracted and analyzed at specified intervals. By integrating this system with one or more microanalytical instrument(s), one can construct a complete automated analytical system that can be tailored to perform one or more of a large variety of assays.

  17. Coherent Anti-Stokes Raman Spectroscopic Thermometry in a Supersonic Combustor

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Danehy, P. M.; Springer, R. R.; OByrne, S.; Capriotti, D. P.; DeLoach, R.

    2003-01-01

    An experiment has been conducted to acquire data for the validation of computational fluid dynamics codes used in the design of supersonic combustors. The flow in a supersonic combustor, consisting of a diverging duct with a single downstream-angled wall injector, is studied. Combustor entrance Mach number is 2 and enthalpy nominally corresponds to Mach 7 flight. The primary measurement technique is coherent anti-Stokes Raman spectroscopy, but surface pressures and temperatures have also been acquired. Modern design of experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and to minimize systematic errors. Temperature maps are obtained at several planes in the flow for a case in which the combustor is piloted by injecting fuel upstream of the main injector and one case in which it is not piloted. Boundary conditions and uncertainties are characterized.

  18. From metadynamics to dynamics.

    PubMed

    Tiwary, Pratyush; Parrinello, Michele

    2013-12-06

    Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.

  19. From Metadynamics to Dynamics

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush; Parrinello, Michele

    2013-12-01

    Metadynamics is a commonly used and successful enhanced sampling method. By the introduction of a history dependent bias which depends on a restricted number of collective variables it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on a previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
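
    The rate recovery rests on rescaling the biased simulation time by a hyperdynamics-style acceleration factor; in the notation commonly used for this method (sketched here from the standard formulation), with V(s,t) the history-dependent bias on the collective variables s and beta = 1/k_B T:

```latex
\alpha(t) = \left\langle e^{\,\beta V(s,t)} \right\rangle_{\mathrm{MetaD}},
\qquad
t_{\mathrm{phys}} = \sum_{i} \Delta t_{\mathrm{MD}}\,
                    e^{\,\beta V\left(s(t_i),\,t_i\right)}
```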

  20. Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework

    NASA Astrophysics Data System (ADS)

    Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.

    2015-12-01

    Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to make use of a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), which is a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour, and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link to enable full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV. The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.
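
    The frequency-domain kernel referred to above solves a Helmholtz-type equation; in its simplest acoustic form (a sketch that omits the 2.5D cross-line wavenumber integration and the viscous terms, which are usually folded into a complex-valued velocity) it reads:

```latex
\left(\nabla^2 + \frac{\omega^2}{c^2(\mathbf{x})}\right) u(\mathbf{x},\omega)
  = -\,s(\mathbf{x},\omega)
```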

  1. Causal Learning with Local Computations

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Sloman, Steven A.

    2009-01-01

    The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require…

  2. The design and testing of a novel mechanomyogram-driven switch controlled by small eyebrow movements

    PubMed Central

    2010-01-01

    Background Individuals with severe physical disabilities and minimal motor behaviour may be unable to use conventional mechanical switches for access. These persons may benefit from access technologies that harness the volitional activity of muscles. In this study, we describe the design and demonstrate the performance of a binary switch controlled by mechanomyogram (MMG) signals recorded from the frontalis muscle during eyebrow movements. Methods Muscle contractions, detected in real-time with a continuous wavelet transform algorithm, were used to control a binary switch for computer access. The automatic selection of scale-specific thresholds reduced the effect of artefact, such as eye blinks and head movement, on the performance of the switch. Switch performance was estimated by cued response-tests performed by eleven participants (one with severe physical disabilities). Results The average sensitivity and specificity of the switch was 99.7 ± 0.4% and 99.9 ± 0.1%, respectively. The algorithm performance was robust against typical participant movement. Conclusions The results suggest that the frontalis muscle is a suitable site for controlling the MMG-driven switch. The high accuracies combined with the minimal requisite effort and training show that MMG is a promising binary control signal. Further investigation of the potential benefits of MMG-control for the target population is warranted. PMID:20492680

  3. The design and testing of a novel mechanomyogram-driven switch controlled by small eyebrow movements.

    PubMed

    Alves, Natasha; Chau, Tom

    2010-05-21

    Individuals with severe physical disabilities and minimal motor behaviour may be unable to use conventional mechanical switches for access. These persons may benefit from access technologies that harness the volitional activity of muscles. In this study, we describe the design and demonstrate the performance of a binary switch controlled by mechanomyogram (MMG) signals recorded from the frontalis muscle during eyebrow movements. Muscle contractions, detected in real-time with a continuous wavelet transform algorithm, were used to control a binary switch for computer access. The automatic selection of scale-specific thresholds reduced the effect of artefact, such as eye blinks and head movement, on the performance of the switch. Switch performance was estimated by cued response-tests performed by eleven participants (one with severe physical disabilities). The average sensitivity and specificity of the switch was 99.7 +/- 0.4% and 99.9 +/- 0.1%, respectively. The algorithm performance was robust against typical participant movement. The results suggest that the frontalis muscle is a suitable site for controlling the MMG-driven switch. The high accuracies combined with the minimal requisite effort and training show that MMG is a promising binary control signal. Further investigation of the potential benefits of MMG-control for the target population is warranted.
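
    A schematic of the detection idea, not the authors' implementation, is sketched below: a continuous wavelet transform of the MMG channel (here via the pywt package, assumed available) with a separate threshold per scale, so that broadband artefact such as head movement is less likely to trip the switch. The constants fs, scales and k are illustrative.

```python
import numpy as np
import pywt

def detect_contraction(mmg, fs, scales=np.arange(4, 64), wavelet="morl", k=4.0):
    """Flag samples where any CWT scale exceeds its own threshold.

    Per-scale thresholds (k standard deviations above each scale's mean
    magnitude) keep a large excursion at one scale from being masked by
    quieter scales, and make artefact with a different scale profile less
    likely to trigger the switch.
    """
    coeffs, _ = pywt.cwt(mmg, scales, wavelet, sampling_period=1.0 / fs)
    magnitude = np.abs(coeffs)                       # (n_scales, n_samples)
    thresholds = magnitude.mean(axis=1) + k * magnitude.std(axis=1)
    return (magnitude > thresholds[:, None]).any(axis=0)

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = 0.02 * np.random.randn(t.size)
signal[800:900] += 0.5 * np.sin(2 * np.pi * 30 * t[800:900])  # brief burst
print(detect_contraction(signal, fs).sum(), "samples flagged")
```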

  4. Low-dose CT in clinical diagnostics.

    PubMed

    Fuentes-Orrego, Jorge M; Sahani, Dushyant V

    2013-09-01

    Computed tomography (CT) has become key for patient management due to its outstanding capabilities for detecting disease processes and assessing treatment response, which has led to expansion in CT imaging for diagnostic and image-guided therapeutic interventions. Despite these benefits, the growing use of CT has raised concerns about the risks associated with radiation exposure. The purpose of this article is to familiarize the reader with fundamental concepts of dose metrics for assessing radiation exposure and weighting radiation-associated risks. The article also discusses general approaches for reducing radiation dose while preserving diagnostic quality. The authors provide additional insight for undertaking protocol optimization, customizing scanning techniques based on the patients' clinical scenario and demographics. Supplemental strategies are postulated using more advanced post-processing techniques for achieving further dose improvements. The technologic offerings of CT are integral to modern medicine and its role will continue to evolve. Although the estimated risks from the low levels of radiation of a single CT exam are uncertain, it is prudent to minimize the dose from CT by applying common-sense solutions and using other simple strategies as well as exploiting technologic innovations. These efforts will enable us to take advantage of all the clinical benefits of CT while minimizing the likelihood of harm to patients.

  5. Fishery stock assessment of Kiddi shrimp ( Parapenaeopsis stylifera) in the Northern Arabian Sea Coast of Pakistan by using surplus production models

    NASA Astrophysics Data System (ADS)

    Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain

    2017-07-01

    Pakistani marine waters are under an open access regime. Due to poor management and policy implications, blind fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data, 1996-2009, of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed by using specialized fishery software in order to assess the stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera was observed as 15 912 metric tons (mt) (1997), 9 438 mt (2009) and 11 667 mt/a. Two stock assessment tools, CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer and Pella-Tomlinson, along with three error assumptions, log, log normal and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV=0.204, R^2=0.709) and 7 384 mt (CV=0.149, R^2=0.72) for the log and log normal error assumptions, respectively. Here, the gamma error assumption produced a minimization failure. Estimated MSY using the Schaefer and Pella-Tomlinson models remained the same for the log, log normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt correspondingly. The Schaefer results showed the highest goodness-of-fit R^2 (0.712) values. ASPIC computed the MSY, CV, R^2, F_MSY and B_MSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the computed values remained 7 720 mt, 0.148, 0.868, 0.107 and 72 110 correspondingly. The results show that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
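
    For reference, in one common parameterization the surplus production models named above relate MSY to the intrinsic growth rate r and carrying capacity K as follows (the software's internal parameterization may differ):

```latex
\text{Schaefer:}\quad \frac{dB}{dt} = rB\!\left(1-\frac{B}{K}\right) - C,
\qquad \mathrm{MSY} = \frac{rK}{4},\quad B_{\mathrm{MSY}} = \frac{K}{2}

\text{Fox:}\quad \frac{dB}{dt} = rB\!\left(1-\frac{\ln B}{\ln K}\right) - C,
\qquad \mathrm{MSY} = \frac{rK}{e\,\ln K},\quad B_{\mathrm{MSY}} = \frac{K}{e}
```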

  6. Turbulence modeling of free shear layers for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas

    1993-01-01

    In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.

  7. A study on acceptance of mobileschool at secondary schools in Malaysia: Urban vs rural

    NASA Astrophysics Data System (ADS)

    Hashim, Ahmad Sobri; Ahmad, Wan Fatimah Wan; Sarlan, Aliza

    2017-10-01

    Developing countries face a dilemma in which sophisticated technologies advance faster than the way their people think. In education, many novel approaches and technologies have been introduced. However, very little effort has been put into applying them in our education system. MobileSchool is a mobile learning (m-learning) management system developed for administrative, teaching and learning processes at secondary schools in Malaysia. The paper presents the acceptance of MobileSchool at urban and rural secondary schools in Malaysia. The research framework was designed based on the Technology Acceptance Model (TAM). The constructs of the framework include computer anxiety, self-efficacy, facilitating conditions, technological complexity, perceived behavioral control, perceived ease of use, perceived usefulness, attitude and behavioral intention. A questionnaire was used as the research instrument, involving 373 students from four secondary schools (two in the urban category and two in the rural category) in Perak. Inferential analyses using hypothesis testing and t-tests, and descriptive analyses using means and percentages, were used to analyze the data. Results showed no large differences (<20%) in any of the acceptance constructs between urban and rural secondary schools, except for computer anxiety.

  8. Determination of acoustical transfer functions using an impulse method

    NASA Astrophysics Data System (ADS)

    MacPherson, J.

    1985-02-01

    The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
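
    A minimal sketch of the deconvolution step is shown below (the signals and the small regularization constant are illustrative assumptions): dividing the receiver spectrum by the source spectrum removes the non-ideal impulse's own spectral shape, which is why the choice of impulsive source is non-critical.

```python
import numpy as np

def impulse_transfer_function(source, received, fs, eps=1e-12):
    """Estimate a transfer function as FFT(received) / FFT(source)."""
    n = max(len(source), len(received))
    S = np.fft.rfft(source, n)
    R = np.fft.rfft(received, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    H = R / (S + eps)          # small eps guards against spectral nulls
    return freqs, H

fs = 48000.0
source = np.zeros(4800); source[0] = 1.0         # idealized recorded click
received = np.zeros(4800); received[240] = 0.5   # delayed, attenuated copy
freqs, H = impulse_transfer_function(source, received, fs)
print(np.abs(H[:5]))
```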

  9. Compression of the right coronary artery by an aortic pseudoaneurysm after infective endocarditis: an unusual case of myocardial ischemia.

    PubMed

    Lacalzada-Almeida, Juan; De la Rosa-Hernández, Alejandro; Izquierdo-Gómez, María Manuela; García-Niebla, Javier; Hernández-Betancor, Iván; Bonilla-Arjona, Juan Alfonso; Barragán-Acea, Antonio; Laynez-Cerdeña, Ignacio

    2018-01-01

    A 61-year-old male with a prosthetic St Jude aortic valve size 24 presented with heart failure symptoms and minimal-effort angina. Eleven months earlier, the patient had undergone cardiac surgery because of an aortic root dilatation and bicuspid aortic valve with severe regurgitation secondary to infectious endocarditis by Coxiella burnetii and coronary artery disease in the left circumflex coronary artery. Then, a prosthesis valve and a saphenous bypass graft to the left circumflex coronary artery were placed. The patient was admitted to the Cardiology Department of Hospital Universitario de Canarias, Tenerife, Spain and transthoracic echocardiography was performed that showed severe paraprosthetic aortic regurgitation and an aortic pseudoaneurysm. The 64-slice multidetector computed tomography confirmed the pseudoaneurysm, originating from the right sinus of Valsalva, with a compression of the native right coronary artery and a normal saphenous bypass graft. On the basis of these findings, we performed surgical treatment with a favorable postoperative evolution. In our case, results from complementary cardiac imaging techniques were crucial for patient management. The multidetector computed tomography allowed for a confident diagnosis of an unusual mechanism of coronary ischemia.

  10. A new method for computation of eigenvector derivatives with distinct and repeated eigenvalues in structural dynamic analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhengguang; Lai, Siu-Kai; Wu, Baisheng

    2018-07-01

    Determining eigenvector derivatives is a challenging task due to the singularity of the coefficient matrices of the governing equations, especially for those structural dynamic systems with repeated eigenvalues. An effective strategy is proposed to construct a non-singular coefficient matrix, which can be directly used to obtain the eigenvector derivatives with distinct and repeated eigenvalues. This approach also has the advantage that it requires only the eigenvalues and eigenvectors of interest, without solving for the particular solutions of the eigenvector derivatives. The Symmetric Quasi-Minimal Residual (SQMR) method is then adopted to solve the governing equations; only the existing factored (shifted) stiffness matrix from an iterative eigensolution such as the subspace iteration method or the Lanczos algorithm is utilized. The present method can deal with both cases of simple and repeated eigenvalues in a unified manner. Three numerical examples are given to illustrate the accuracy and validity of the proposed algorithm. Highly accurate approximations to the eigenvector derivatives are obtained within a few iteration steps, making a significant reduction of the computational effort. This method can be incorporated into a coupled eigensolver/derivative software module. In particular, it is applicable to finite element models with large sparse matrices.
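
    For orientation, the governing equations for a simple eigenvalue take the familiar form below (with mass-normalized eigenvectors, phi_i^T M phi_i = 1); the singularity of the coefficient matrix K - lambda_i M is what the paper's non-singular construction is designed to remove, and the repeated-eigenvalue case needs the further treatment described in the abstract:

```latex
\left(K - \lambda_i M\right)\frac{\partial \phi_i}{\partial p}
  = -\left(\frac{\partial K}{\partial p}
           - \lambda_i \frac{\partial M}{\partial p}
           - \frac{\partial \lambda_i}{\partial p}\, M\right)\phi_i,
\qquad
\frac{\partial \lambda_i}{\partial p}
  = \phi_i^{\mathsf T}\!\left(\frac{\partial K}{\partial p}
      - \lambda_i \frac{\partial M}{\partial p}\right)\phi_i
```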

  11. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
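
    As an illustration of a four-point vertical interpolation of a single dropped scan line (a sketch of the idea using Catmull-Rom midpoint weights, not the paper's exact weighted-spline formulation):

```python
import numpy as np

def repair_missing_line(image, row):
    """Replace one dropped scan line with a 4-point cubic interpolation.

    Each pixel of the missing row is rebuilt from its two neighbours above
    and two below, using Catmull-Rom weights evaluated at the midpoint.
    """
    img = image.astype(float).copy()
    p0, p1 = img[row - 2], img[row - 1]
    p2, p3 = img[row + 1], img[row + 2]
    img[row] = (-p0 + 9.0 * p1 + 9.0 * p2 - p3) / 16.0
    return img

demo = np.tile(np.arange(8, dtype=float), (8, 1)).T   # smooth vertical ramp
demo[4] = 0.0                                         # simulate a lost line
print(repair_missing_line(demo, 4)[4])                # recovers the ramp value 4
```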

  12. An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*

    PubMed Central

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.

    2014-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144

  13. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    USGS Publications Warehouse

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
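
    The hierarchical model itself is not reproduced here; the fragment below only sketches where an eigenvalue predictor can come from, namely the second eigenvalue of a two-state transition probability matrix, which for a 2 x 2 chain equals p00 + p11 - 1 and summarizes how persistent the two behavioural states are.

```python
import numpy as np

def switching_eigenvalue(p_stay_0, p_stay_1):
    """Second eigenvalue of a two-state Markov transition matrix.

    P = [[p00, 1 - p00], [1 - p11, p11]] has eigenvalues 1 and p00 + p11 - 1;
    the second one can serve as a scalar predictor of state persistence.
    """
    P = np.array([[p_stay_0, 1.0 - p_stay_0],
                  [1.0 - p_stay_1, p_stay_1]])
    eigvals = np.sort(np.linalg.eigvals(P).real)
    return eigvals[0]   # the eigenvalue other than 1

print(switching_eigenvalue(0.95, 0.80))   # 0.75 for very persistent states
```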

  14. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems, the strict requirements on minimizing their weight and dimensions, and the wish to decrease development expenditures demand experimental work whose results can be used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and ``iron'' substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor were created in the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience should be analyzed in terms of the methodological approach to developing future space nuclear systems (such analysis is beyond the scope of this paper). Because these facilities remain available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  15. Funtools: Fits Users Need Tools for Quick, Quantitative Analysis

    NASA Technical Reports Server (NTRS)

    Mandel, Eric; Brederkamp, Joe (Technical Monitor)

    2001-01-01

    The Funtools project arose out of conversations with astronomers about the decline in their software development efforts over the past decade. A stated reason for this decline is that it takes too much effort to master one of the existing FITS libraries simply in order to write a few analysis programs. This problem is exacerbated by the fact that astronomers typically develop new programs only occasionally, and the long interval between coding efforts often necessitates re-learning the FITS interfaces. We therefore set ourselves the goal of developing a minimal buy-in FITS library for researchers who are occasional (but serious) coders. In this case, "minimal buy-in" meant "easy to learn, easy to use, and easy to re-learn next month". Based on conversations with astronomers interested in writing code, we concluded that this goal could be achieved by emphasizing two essential capabilities. The first was the ability to write FITS programs without knowing much about FITS, i.e., without having to deal with the arcane rules for generating a properly formatted FITS file. The second was to support the use of already-familiar C/Unix facilities, especially C structs and Unix stdio. Taken together, these two capabilities would allow researchers to leverage their existing programming expertise while minimizing the need to learn new and complex coding rules.

  16. Lyme disease and conservation

    USGS Publications Warehouse

    Ginsberg, H.

    1994-01-01

    Lyme disease is a tick-borne illness that is widespread in North America, especially in the northeastern and northcentral United States. This disease could negatively influence efforts to conserve natural populations in two ways: (1) the disease could directly affect wild animal health; and (2) tick control efforts could adversely affect natural populations and communities. Lyme disease affects several domestic animals, but symptoms have been reported in only a few wild species. Direct effects of Lyme disease on wild animal populations have not been reported, but the disease should be considered as a possible cause in cases of unexplained population declines in endemic areas. Methods available to manage ticks and Lyme disease include human self-protection techniques, manipulation of habitats and host species populations, biological control, and pesticide applications. The diversity of available techniques allows selection of approaches to minimize environmental effects by (1) emphasizing personal protection techniques, (2) carefully targeting management efforts to maximize efficiency, and (3) integrating environmentally benign techniques to improve management while avoiding broad-scale environmentally destructive approaches. The environmental effects of Lyme disease depend, to a large extent, on the methods chosen to minimize human exposure to infected ticks. Conservation biologists can help design tick management programs that effectively lower the incidence of human Lyme disease while simultaneously minimizing negative effects on natural populations.

  17. Computational Analysis of Artificial Gravity as a Possible Countermeasure to Spaceflight Induced Bone Loss

    NASA Technical Reports Server (NTRS)

    Mulugeta, L.; Werner, C. R.; Pennline, J. A.

    2015-01-01

    During exploration class missions, such as to asteroids and Mars, astronauts will be exposed to reduced gravity for extended periods. Data has shown that astronauts lose bone mass at a rate of 1% to 2% a month in microgravity, particularly in lower extremities such as the proximal femur. Exercise countermeasures have not completely eliminated bone loss from long duration spaceflight missions, which leaves astronauts susceptible to early onset osteoporosis and greater risk of fracture. Introduction of the Advanced Resistive Exercise Device and other large exercise devices on the International Space Station (ISS), coupled with improved nutrition, has further minimized bone loss. However, unlike the ISS, exploration vehicles will have very limited volume and power available to accommodate such capabilities. Therefore, novel concepts like artificial gravity systems are being explored as a means to provide sufficient load stimulus to the musculoskeletal system to mitigate bone changes that may lead to early onset osteoporosis and increased risk of fracture. Currently, there is minimal data available to drive further research and development efforts to appropriately explore such options. Computational modeling can be leveraged to gain insight on the level of osteoprotection that may be achieved using artificial gravity produced by a spinning spacecraft or centrifuge. With this in mind, NASA's Digital Astronaut Project (DAP) has developed a bone remodeling model that has been validated for predicting volumetric bone mineral density (vBMD) changes of trabecular and cortical bone both for gravitational unloading condition and the equivalent of 1g daily load stimulus. Using this model, it is possible to simulate vBMD changes in trabecular and cortical bone under different gravity conditions. In this presentation, we will discuss our preliminary findings regarding if and how artificial gravity may be used to mitigate spaceflight induced bone loss.

  18. n-body simulations using message passing parallel computers.

    NASA Astrophysics Data System (ADS)

    Grama, A. Y.; Kumar, V.; Sameh, A.

    The authors present new parallel formulations of the Barnes-Hut method for n-body simulations on message passing computers. These parallel formulations partition the domain efficiently incurring minimal communication overhead. This is in contrast to existing schemes that are based on sorting a large number of keys or on the use of global data structures. The new formulations are augmented by alternate communication strategies which serve to minimize communication overhead. The impact of these communication strategies is experimentally studied. The authors report on experimental results obtained from an astrophysical simulation on an nCUBE2 parallel computer.
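    For readers unfamiliar with the underlying serial algorithm, the sketch below (a hypothetical, single-level grouping in 2D rather than the full recursive tree, and unrelated to the parallel partitioning studied in the paper) shows the Barnes-Hut idea of replacing a distant group of particles by its monopole whenever the ratio of cell size to distance falls below an opening angle:

```python
import numpy as np

G, THETA = 1.0, 0.5   # gravitational constant and opening angle (hypothetical values)

def build_cells(pos, mass, ncell=4):
    """Bin 2D particles into an ncell x ncell grid; store per-cell monopole data."""
    lo, hi = pos.min(0), pos.max(0)
    size = (hi - lo) / ncell
    ix = np.minimum(((pos - lo) / size).astype(int), ncell - 1)
    cells = []
    for i in range(ncell):
        for j in range(ncell):
            members = np.where((ix[:, 0] == i) & (ix[:, 1] == j))[0]
            if len(members):
                mtot = mass[members].sum()
                com = (pos[members] * mass[members, None]).sum(0) / mtot
                cells.append({"members": members, "com": com,
                              "mass": mtot, "size": size.max()})
    return cells

def accel(target, pos, mass, cells):
    """Acceleration on one target point using the opening-angle criterion."""
    a = np.zeros(2)
    for c in cells:
        d = np.linalg.norm(c["com"] - target)
        if d > 0 and c["size"] / d < THETA:        # far cell: monopole approximation
            a += G * c["mass"] * (c["com"] - target) / d**3
        else:                                      # near cell: direct sum over members
            for k in c["members"]:
                r = np.linalg.norm(pos[k] - target)
                if r > 1e-12:
                    a += G * mass[k] * (pos[k] - target) / r**3
    return a

rng = np.random.default_rng(0)
pos, mass = rng.random((200, 2)), np.ones(200)
print(accel(pos[0], pos, mass, build_cells(pos, mass)))
```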

  19. Cost Optimization Model for Business Applications in Virtualized Grid Environments

    NASA Astrophysics Data System (ADS)

    Strebel, Jörg

    The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed-integer optimization model that can be used to minimize the IT expenditures of an enterprise and to support decisions on when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
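    As a toy illustration of the kind of sourcing decision such a model formalizes (entirely hypothetical numbers and constraints, and brute-force enumeration rather than the paper's mixed-integer program): each application is either kept in-house or moved to the grid, and the assignment with the lowest total cost that respects in-house capacity is selected.

```python
from itertools import product

apps = {            # app: (in_house_cost, grid_cost, servers_needed_in_house)
    "crm":       (120.0,  90.0, 2),
    "reporting": ( 60.0,  75.0, 1),
    "billing":   (100.0, 110.0, 2),
}
CAPACITY = 3        # in-house servers available (hypothetical)

best = None
for choice in product(("in_house", "grid"), repeat=len(apps)):
    assign = dict(zip(apps, choice))
    used = sum(apps[a][2] for a, c in assign.items() if c == "in_house")
    if used > CAPACITY:
        continue                                   # infeasible assignment
    cost = sum(apps[a][0] if c == "in_house" else apps[a][1]
               for a, c in assign.items())
    if best is None or cost < best[0]:
        best = (cost, assign)

print(best)
```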

  20. Rapid Ice-Sheet Changes and Mechanical Coupling to Solid-Earth/Sea-Level and Space Geodetic Observation

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Ivins, E. R.; Larour, E. Y.

    2015-12-01

    Perturbations in gravitational and rotational potentials caused by climate driven mass redistribution on the earth's surface, such as ice sheet melting and terrestrial water storage, affect the spatiotemporal variability in global and regional sea level. Here we present a numerically accurate, computationally efficient, high-resolution model for sea level. Unlike contemporary models that are based on spherical-harmonic formulation, the model can operate efficiently in a flexible embedded finite-element mesh system, thus capturing the physics operating at km-scale yet capable of simulating geophysical quantities that are inherently of global scale with minimal computational cost. One obvious application is to compute evolution of sea level fingerprints and associated geodetic and astronomical observables (e.g., geoid height, gravity anomaly, solid-earth deformation, polar motion, and geocentric motion) as a companion to a numerical 3-D thermo-mechanical ice sheet simulation, thus capturing global signatures of climate driven mass redistribution. We evaluate some important time-varying signatures of GRACE inferred ice sheet mass balance and continental hydrological budget; for example, we identify dominant sources of ongoing sea-level change at the selected tide gauge stations, and explain the relative contribution of different sources to the observed polar drift. We also report our progress on ice-sheet/solid-earth/sea-level model coupling efforts toward realistic simulation of Pine Island Glacier over the past several hundred years.

  1. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  2. Optimal Control Based Stiffness Identification of an Ankle-Foot Orthosis Using a Predictive Walking Model

    PubMed Central

    Sreenivasa, Manish; Millard, Matthew; Felis, Martin; Mombaur, Katja; Wolf, Sebastian I.

    2017-01-01

    Predicting the movements, ground reaction forces and neuromuscular activity during gait can be a valuable asset to the clinical rehabilitation community, both to understand pathology and to plan effective intervention. In this work we use an optimal control method to generate predictive simulations of pathological gait in the sagittal plane. We construct a patient-specific model corresponding to a 7-year-old child with gait abnormalities and identify the optimal spring characteristics of an ankle-foot orthosis that minimizes muscle effort. Our simulations include the computation of foot-ground reaction forces, as well as the neuromuscular dynamics using computationally efficient muscle torque generators and excitation-activation equations. The optimal control problem (OCP) is solved with a direct multiple shooting method. The solution of this problem consists of physically consistent synthetic neural excitation commands, muscle activations and whole-body motion. Our simulations produced changes to the gait characteristics similar to those recorded on the patient. The orthosis-equipped model was able to walk faster with more extended knees. Notably, our approach can be easily tuned to simulate weakened muscles, produces physiologically realistic ground reaction forces and smooth muscle activations and torques, and can be implemented on a standard workstation to produce results within a few hours. These results are an important contribution toward bridging the gap between research methods in computational neuromechanics and day-to-day clinical rehabilitation. PMID:28450833

  3. Minimizing liability during internal investigations.

    PubMed

    Morris, Cole

    2010-01-01

    Today's security professional must appreciate the potential landmines in any investigative effort and work collaboratively with others to minimize liability risks, the author points out. In this article he examines six civil torts that commonly arise from unprofessionally planned or poorly executed internal investigations: defamation, false imprisonment, intentional infliction of emotional distress, assault and battery, invasion of privacy, and malicious prosecution and abuse of process.

  4. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem for dengue fever transmission. The model divides the population into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the vector population can be minimized. Vaccination is considered as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We use Pontryagin's Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.

  5. Use of minimal invasive extracorporeal circulation in cardiac surgery: principles, definitions and potential benefits. A position paper from the Minimal invasive Extra-Corporeal Technologies international Society (MiECTiS)

    PubMed Central

    Anastasiadis, Kyriakos; Murkin, John; Antonitsis, Polychronis; Bauer, Adrian; Ranucci, Marco; Gygax, Erich; Schaarschmidt, Jan; Fromes, Yves; Philipp, Alois; Eberle, Balthasar; Punjabi, Prakash; Argiriadou, Helena; Kadner, Alexander; Jenni, Hansjoerg; Albrecht, Guenter; van Boven, Wim; Liebold, Andreas; de Somer, Fillip; Hausmann, Harald; Deliopoulos, Apostolos; El-Essawi, Aschraf; Mazzei, Valerio; Biancari, Fausto; Fernandez, Adam; Weerwind, Patrick; Puehler, Thomas; Serrick, Cyril; Waanders, Frans; Gunaydin, Serdar; Ohri, Sunil; Gummert, Jan; Angelini, Gianni; Falk, Volkmar; Carrel, Thierry

    2016-01-01

    Minimal invasive extracorporeal circulation (MiECC) systems have initiated important efforts within science and technology to further improve the biocompatibility of cardiopulmonary bypass components to minimize the adverse effects and improve end-organ protection. The Minimal invasive Extra-Corporeal Technologies international Society was founded to create an international forum for the exchange of ideas on clinical application and research of minimal invasive extracorporeal circulation technology. The present work is a consensus document developed to standardize the terminology and the definition of minimal invasive extracorporeal circulation technology as well as to provide recommendations for the clinical practice. The goal of this manuscript is to promote the use of MiECC systems into clinical practice as a multidisciplinary strategy involving cardiac surgeons, anaesthesiologists and perfusionists. PMID:26819269

  6. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, so solutions may be computed to a conservatively high accuracy. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
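    The relation behind such adjoint-based estimates can be written schematically as follows (a generic form; sign conventions and the coarse/fine embedding details vary between formulations, including the one used in this work):

```latex
% Schematic adjoint error-correction relation (generic form; details vary):
\left(\frac{\partial R}{\partial u}\right)^{T}\psi \;=\; \frac{\partial J}{\partial u},
\qquad
J(u_h)\;\approx\;J(u_H)\;-\;\psi^{T} R_h(u_H)
```

    Here u_H is the coarse-grid solution, R_h(u_H) its residual evaluated in an enriched space, and psi the adjoint solution for the functional J; the magnitude of the correction term indicates where the mesh should be adapted.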

  7. Interface design and human factors considerations for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.

  8. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
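    A hedged numerical sketch of the expected-slippage idea, under the simplifying assumptions that units are infested independently with rate p, each sampled infested unit is detected with probability d, and a lot is accepted only when no infestation is detected (the paper's exact formula and decision rule may differ):

```python
def expected_slippage(lot_size, sample_size, p, d):
    """Expected number of accepted infested units for one lot (toy assumptions)."""
    q = 1.0 - p * d                      # a given sampled unit triggers no detection
    p_accept = q ** sample_size          # no detection anywhere in the sample
    # Infested units that slip through an accepted lot: all unsampled infested
    # units, plus sampled infested units that were missed given no detection.
    unsampled = (lot_size - sample_size) * p
    sampled_missed = sample_size * p * (1 - d) / q
    return p_accept * (unsampled + sampled_missed)

print(expected_slippage(lot_size=1000, sample_size=60, p=0.02, d=0.9))
```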

  9. Interface Design and Human Factors Considerations for Model-Based Tight Glycemic Control in Critical Care

    PubMed Central

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330

  10. Long-term seafloor monitoring at an open ocean aquaculture site in the western Gulf of Maine, USA: development of an adaptive protocol.

    PubMed

    Grizzle, R E; Ward, L G; Fredriksson, D W; Irish, J D; Langan, R; Heinig, C S; Greene, J K; Abeels, H A; Peter, C R; Eberhardt, A L

    2014-11-15

    The seafloor at an open ocean finfish aquaculture facility in the western Gulf of Maine, USA was monitored from 1999 to 2008 by sampling sites inside a predicted impact area modeled by oceanographic conditions and fecal and food settling characteristics, and nearby reference sites. Univariate and multivariate analyses of benthic community measures from box core samples indicated minimal or no significant differences between impact and reference areas. These findings resulted in development of an adaptive monitoring protocol involving initial low-cost methods that required more intensive and costly efforts only when negative impacts were initially indicated. The continued growth of marine aquaculture is dependent on further development of farming methods that minimize negative environmental impacts, as well as effective monitoring protocols. Adaptive monitoring protocols, such as the one described herein, coupled with mathematical modeling approaches, have the potential to provide effective protection of the environment while minimize monitoring effort and costs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Minimally invasive computer-assisted stereotactic fenestration of an aqueductal cyst: case report.

    PubMed

    Fonoff, E T; Gentil, A F; Padilha, P M; Teixeira, M J

    2010-02-01

    Current advances in frame modeling and computer software allow stereotactic procedures to be performed with great accuracy and minimal risk of neural tissue or vascular injury. In this report we associate a previously described minimally invasive stereotactic technique with state-of-the-art 3D computer guidance technology to successfully treat a 55-year-old patient with an arachnoidal cyst obstructing the aqueduct of Sylvius. We provide detailed technical information and discuss how this technique deals with previous limitations for stereotactic manipulation of the aqueductal region. We further discuss current advances in neuroendoscopy for treating obstructive hydrocephalus and make comparisons with our proposed technique. We advocate that this technique is not only capable of treating this pathology but it also has the advantages to enable reestablishment of physiological CSF flow thus preventing future brainstem compression by cyst enlargement. (c) Georg Thieme Verlag KG Stuttgart . New York.

  12. Terminology and global standardization of endoscopic information: Minimal Standard Terminology (MST).

    PubMed

    Fujino, Masayuki A; Bito, Shigeru; Takei, Kazuko; Mizuno, Shigeto; Yokoi, Hideto

    2006-01-01

    Since 1994, following the leading efforts by the European Society for Gastrointestinal Endoscopy, Organisation Mondiale d'Endoscopie Digestive (OMED) has succeeded in compiling minimal number of terms required for computer generation of digestive endoscopy reports nicknamed MST (Minimal Standard Terminology). Though with some insufficiencies, and though developed only for digestive endoscopy, MST has been the only available terminology that is globally standardized in medicine. By utilizing the merits of a unified, structured terminology that can be used in multiple languages we can utilize the data stored in different languages as a common database. For this purpose, a standing, terminology-managing organization that manages and maintains and, when required, expands the terminology on a global level, is absolutely necessary. Unfortunately, however, the organization that performs version control of MST (OMED terminology, standardization and data processing committee) is currently suspending its activity. Medical practice of the world demands more and more specialization, with resultant needs for information exchange among specialized territories. As the cooperation between endoscopy and pathology has become currently the most important problem in the Endoscopy Working Group of Integrating Healthcare Enterprise-Japan (IHE-J,) the cooperation among different specialties is essential. There are DICOM or HL7 standards as the protocols for storage, and exchange (communication) of the data, but there is yet no organization that manages the terminology itself astride different specialties. We hereby propose to establish, within IEEE, for example, a system that promotes standardization of the terminology that can transversely describe a patient, and that can control different societies and groups, as far as the terminology is concerned.

  13. Computational psychiatry

    PubMed Central

    Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter

    2013-01-01

    Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032

  14. ``But it doesn't come naturally'': how effort expenditure shapes the benefit of growth mindset on women's sense of intellectual belonging in computing

    NASA Astrophysics Data System (ADS)

    Stout, Jane G.; Blaney, Jennifer M.

    2017-10-01

    Research suggests growth mindset, or the belief that knowledge is acquired through effort, may enhance women's sense of belonging in male-dominated disciplines, like computing. However, other research indicates women who spend a great deal of time and energy in technical fields experience a low sense of belonging. The current study assessed the benefits of a growth mindset on women's (and men's) sense of intellectual belonging in computing, accounting for the amount of time and effort dedicated to academics. We define "intellectual belonging" as the sense that one is believed to be a competent member of the community. Whereas a stronger growth mindset was associated with stronger intellectual belonging for men, a growth mindset only boosted women's intellectual belonging when they did not work hard on academics. Our findings suggest, paradoxically, women may not benefit from a growth mindset in computing when they exert a lot of effort.

  15. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    PubMed

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.

  16. Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.

    2016-01-01

    A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.

  17. Developing a Computer Workshop To Facilitate Computer Skills and Minimize Anxiety for Early Childhood Educators.

    ERIC Educational Resources Information Center

    Wood, Eileen; Willoughby, Teena; Specht, Jacqueline; Stern-Cavalcante, Wilma; Child, Carol

    2002-01-01

    Early childhood educators were assigned to one of three instructional conditions to assess the impact of computer workshops on their level of computer anxiety, knowledge, and comfort with technology. Overall, workshops provided gains that could translate into more effective and efficient computer use in the classroom. (Author)

  18. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research---experimental, theoretical, and computational---has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?".

  19. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption for the TV model, the reconstructed images often suffer from over-smoothing of image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for the sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio measure and edge preservation.
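    In generic form, such a reconstruction solves a penalized weighted least-squares problem of the type below, where the MSTV penalty plays the role of the regularizer (a schematic statement only; the precise MSTV functional, coupling a TV term with a Mumford-Shah edge term, is defined in the paper):

```latex
% Generic PWLS-plus-penalty objective (schematic; exact MSTV penalty omitted):
\hat{x} \;=\; \arg\min_{x}\;(y - A x)^{T}\,\Sigma^{-1}\,(y - A x)\;+\;\beta\, R_{\mathrm{MSTV}}(x)
```

    Here y is the measured sinogram, A the system matrix, Sigma a diagonal weighting matrix built from the measurement variances, and beta the regularization strength.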

  20. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high-performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  1. Periodic-disturbance accommodating control of the space station for asymptotic momentum management

    NASA Technical Reports Server (NTRS)

    Warren, Wayne; Wie, Bong

    1989-01-01

    Periodic maneuvering control is developed for asymptotic momentum management of control gyros used as primary actuating devices for the Space Station. The proposed controller utilizes the concepts of quaternion feedback control and periodic-disturbance accommodation to achieve oscillations about the constant torque equilibrium attitude, while minimizing the control effort required. Three-axis coupled equations of motion, written in terms of quaternions, are derived for roll/yaw controller design and stability analysis. It is shown that the quaternion feedback controller is very robust for a wide range of pitch angles. It is also shown that the proposed controller tunes the open-loop unstable vehicle to a stable oscillatory motion which minimizes the control effort needed for steady-state operations.
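    A minimal sketch of a plain quaternion-feedback PD law of the kind such a controller builds on (hypothetical gains; it omits the momentum-management and periodic-disturbance-accommodation terms that are the paper's contribution):

```python
import numpy as np

# Generic quaternion-feedback PD attitude law: the commanded torque is
# proportional to the vector part of the attitude-error quaternion and to the
# body-rate error.  Gains below are made-up diagonal matrices.
def quaternion_feedback_torque(q_err_vec, omega_err, Kp, Kd):
    return -Kp @ q_err_vec - Kd @ omega_err

Kp = np.diag([50.0, 50.0, 50.0])
Kd = np.diag([200.0, 200.0, 200.0])
q_err_vec = np.array([0.01, -0.02, 0.005])   # vector part of the error quaternion
omega_err = np.array([1e-4, -2e-4, 5e-5])    # body-rate error, rad/s
print(quaternion_feedback_torque(q_err_vec, omega_err, Kp, Kd))
```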

  2. Circular motion geometry using minimal data.

    PubMed

    Jiang, Guang; Quan, Long; Tsui, Hung-Tat

    2004-06-01

    Circular motion or single axis motion is widely used in computer vision and graphics for 3D model acquisition. This paper describes a new and simple method for recovering the geometry of uncalibrated circular motion from a minimal set of only two points in four images. This problem has been previously solved using nonminimal data either by computing the fundamental matrix and trifocal tensor in three images or by fitting conics to tracked points in five or more images. It is first established that two sets of tracked points in different images under circular motion for two distinct space points are related by a homography. Then, we compute a plane homography from a minimal two points in four images. After that, we show that the unique pair of complex conjugate eigenvectors of this homography are the image of the circular points of the parallel planes of the circular motion. Subsequently, all other motion and structure parameters are computed from this homography in a straightforward manner. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new method.
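    A small numerical sketch of that step (using a made-up homography, not data from the paper): the complex-conjugate eigenvector pair of the estimated plane homography is taken as the imaged circular points, and the line through them recovers the (real) vanishing line of the parallel planes of the motion.

```python
import numpy as np

# Hypothetical 3x3 plane homography (made-up numbers, slightly projective).
H = np.array([[0.90, -0.40, 100.0],
              [0.40,  0.90, -40.0],
              [1e-4,  2e-4,   1.0]])

w, V = np.linalg.eig(H)
cplx = [i for i in range(3) if abs(w[i].imag) > 1e-9]   # complex-conjugate pair
I_img, J_img = V[:, cplx[0]], V[:, cplx[1]]             # imaged circular points

line = np.cross(I_img, J_img)                # line through the two imaged circular points
line = line / line[np.argmax(np.abs(line))]  # strip the common complex scale
vanishing_line = np.real(line)
print(vanishing_line)
```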

  3. [Development and future of minimally invasive surgery in western China].

    PubMed

    Yu, Peiwu; Hao, Yingxue

    2017-03-25

    Western China has a vast territory and a large population, but its economic development is relatively slow. Nevertheless, minimally invasive surgery was carried out there early in China, and the type, number and difficulty of minimally invasive operations have increased year by year. In western China in particular, Dr Zhou Zongguang, Yu Peiwu and Zheng Shuguo et al. have performed much pioneering work in laparoscopic surgery for rectal cancer and gastric cancer and in laparoscopic liver resection, leading the standardized development of minimally invasive surgery in China. In the future, western China should continue to strengthen standardized training in minimally invasive surgery, make great efforts to carry out evidence-based research on minimally invasive surgery, and provide high-level evidence for its clinical application. At the same time, robotic and 3D laparoscopic surgery should be carried out actively, so that minimally invasive surgery develops in a more standardized and more widespread way in western China.

  4. Artificial intelligence and immediacy: designing health communication to personally engage consumers and providers.

    PubMed

    Kreps, Gary L; Neuhauser, Linda

    2013-08-01

    We describe how ehealth communication programs can be improved by using artificial intelligence (AI) to increase immediacy. We analyzed major deficiencies in ehealth communication programs, illustrating how programs often fail to fully engage audiences and can even have negative consequences by undermining the effective delivery of information intended to guide health decision-making and influence adoption of health-promoting behaviors. We examined the use of AI in ehealth practices to promote immediacy and provided examples from the ChronologyMD project. Strategic use of AI is shown to help enhance immediacy in ehealth programs by making health communication more engaging, relevant, exciting, and actionable. AI can enhance the "immediacy" of ehealth by humanizing health promotion efforts, promoting physical and emotional closeness, increasing authenticity and enthusiasm in health promotion efforts, supporting personal involvement in communication interactions, increasing exposure to relevant messages, reducing demands on healthcare staff, improving program efficiency, and minimizing costs. User-centered AI approaches, such as the use of personally involving verbal and nonverbal cues, natural language translation, virtual coaches, and comfortable human-computer interfaces can promote active information processing and adoption of new ideas. Immediacy can improve information access, trust, sharing, motivation, and behavior changes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Asteroid Crew Segment Mission Lean Development

    NASA Technical Reports Server (NTRS)

    Gard, Joseph; McDonald, Mark

    2014-01-01

    Asteroid Retrieval Crewed Mission (ARCM) requires a minimum set of Key Capabilities compared in the context of the baseline EM-1/2 Orion and SLS capabilities. These include: Life Support & Human Systems Capabilities; Mission Kit Capabilities; Minimizing the impact to the Orion and SLS development schedules and funding. Leveraging existing technology development efforts to develop the kits adds functionality to Orion while minimizing cost and mass impact.

  6. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
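    The sketch below illustrates the general surrogate-guided pattern (a toy 1-D objective and hypothetical settings using scikit-learn's Gaussian process regressor, not the authors' kriging/pattern-search implementation): fit a surrogate to the points evaluated so far, search the cheap surrogate on a grid, evaluate the expensive function at the surrogate minimizer, and repeat.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x):                      # stand-in for an expensive simulation
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)      # initial design
y = expensive_objective(X).ravel()

for _ in range(5):                               # a few refinement iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
    gp.fit(X, y)                                 # kriging surrogate of the objective
    grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    x_new = grid[np.argmin(gp.predict(grid))]    # grid search on the surrogate
    X = np.vstack([X, [x_new]])
    y = np.append(y, expensive_objective(x_new)) # one new expensive evaluation

print("best x, f(x):", X[np.argmin(y)], y.min())
```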

  7. Energy, Weatherization and Indoor Air Quality

    EPA Pesticide Factsheets

    Climate change presents many challenges, including the production of severe weather events. These events and efforts to minimize their effects through weatherization can adversely affect indoor environments.

  8. Documentary effort.

    PubMed

    2006-01-01

    This spring, Virtua Health, the largest health system in Southern New Jersey, launched an innovative campaign aimed at raising overall awareness of its facilities by documenting real-life patients undergoing a variety of experiences (e.g., breast cancer, high-risk pregnancy, spine surgery, and minimally-invasive knee replacement surgery). The effort, called "The Virtua Experience" became a 30-minute hospital documentary that aired on Philadelphia's NBC affiliate this summer.

  9. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is one of the most actively researched data processing techniques. In recent years, with an increase in marine development projects, seismic surveys are commonly conducted for engineering purposes; however, research on applying waveform inversion in this setting is still limited. Waveform inversion updates the subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image; however, the technique consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and Hessian values, and this burden is much larger in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values that reduces the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient over the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for a single shot only within the range of the relevant source and receivers, and then sum these single-shot contributions over all shots (Figure 2). We demonstrate that restricting the calculation area of the Hessian and gradient for each shot reduces the overall amount of computation and therefore the computation time, and we show that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to determine an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
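    A schematic sketch of the bookkeeping this implies (hypothetical grid sizes and windows; the actual per-shot gradient would come from an adjoint-state computation on the local window): each shot contributes a gradient only on the sub-grid spanned by its source and receivers, and these local pieces are summed into the full-model gradient.

```python
import numpy as np

NX, NZ = 400, 200
full_gradient = np.zeros((NX, NZ))

def local_gradient(shape):
    """Stand-in for the adjoint-state gradient of one shot on its local window."""
    return np.random.default_rng(0).standard_normal(shape)

shots = [            # x-range of the local window per shot (hypothetical geometry)
    (0, 120), (80, 200), (160, 280), (240, 360),
]
for x0, x1 in shots:
    g_local = local_gradient((x1 - x0, NZ))
    full_gradient[x0:x1, :] += g_local           # accumulate only inside the window

print(full_gradient.shape, np.abs(full_gradient).sum())
```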

  10. They See a Rat, We Seek a Cure for Diseases: The Current Status of Animal Experimentation in Medical Practice

    PubMed Central

    Kehinde, Elijah O.

    2013-01-01

    The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. PMID:24217224

  11. They see a rat, we seek a cure for diseases: the current status of animal experimentation in medical practice.

    PubMed

    Kehinde, Elijah O

    2013-01-01

    The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. © 2013 S. Karger AG, Basel.

  12. New Mexico district work-effort analysis computer program

    USGS Publications Warehouse

    Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.

    1972-01-01

    The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.

  13. Japanese and Taiwanese pelagic longline fleet dynamics and the impacts of climate change in the southern Indian Ocean

    NASA Astrophysics Data System (ADS)

    Michael, P. E.; Wilcox, C.; Tuck, G. N.; Hobday, A. J.; Strutton, P. G.

    2017-06-01

    Climate change is projected to continue shifting the distribution of marine species, leading to changes in local assemblages and different interactions with human activities. With regard to fisheries, understanding the relationship between fishing fleets, target species catch per unit effort (CPUE), and the environment enhances our ability to anticipate fisher response and is an essential step towards proactive management. Here, we explore the potential impact of climate change in the southern Indian Ocean by modelling Japanese and Taiwanese pelagic longline fleet dynamics. We quantify the mean and variability of target species CPUE and the relative value and cost of fishing in different areas. Using linear mixed models, we identify fleet-specific effort allocation strategies most related to observed effort and predict the future distribution of effort and tuna catch under climate change for 2063-2068. The Japanese fleet's strategy targets high-value species and minimizes the variability in CPUE of the primary target species. Conversely, the Taiwanese strategy indicated flexible targeting of a broad range of species, fishing in areas of high and low variability in catch, and minimizing costs. The projected future mean and variability in CPUE across species suggest a slight increase in CPUE in currently high CPUE areas for most species. The corresponding effort projections suggest a slight increase in Japanese effort in the western and eastern study area, and Taiwanese effort increasing east of Madagascar. This approach provides a useful method for managers to explore the impacts of different fishing and fleet management strategies for the future.

  14. Redistricting Is Less Torturous When a Computer Does the Nitty-Gritty for You.

    ERIC Educational Resources Information Center

    Rust, Albert O.; Judd, Frank F.

    1984-01-01

    Describes "optimization" computer programing to aid in school redistricting. Using diverse demographic data, the computer plots district boundaries to minimize children's walking distance and maximize safety, improve racial balance, and keep enrollment within school capacity. (TE)

  15. 2013 Los Alamos National Laboratory Hazardous Waste Minimization Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzman, Sonja L.; English, Charles J.

    2015-08-24

    Waste minimization and pollution prevention are inherent goals within the operating procedures of Los Alamos National Security, LLC (LANS). The US Department of Energy (DOE) and LANS are required to submit an annual hazardous waste minimization report to the New Mexico Environment Department (NMED) in accordance with the Los Alamos National Laboratory (LANL or the Laboratory) Hazardous Waste Facility Permit. The report was prepared pursuant to the requirements of Section 2.9 of the LANL Hazardous Waste Facility Permit. This report describes the hazardous waste minimization program (a component of the overall Waste Minimization/Pollution Prevention [WMin/PP] Program) administered by the Environmental Stewardship Group (ENV-ES). This report also supports the waste minimization and pollution prevention goals of the Environmental Programs Directorate (EP) organizations that are responsible for implementing remediation activities and describes their programs to incorporate waste reduction practices into remediation activities and procedures. LANS was very successful in WMin/PP efforts in fiscal year (FY) 2013 (October 1-September 30). Staff funded four projects specifically related to reduction of waste with hazardous constituents, and LANS won four national awards for pollution prevention efforts from the National Nuclear Security Administration (NNSA). In FY13, no hazardous, mixed transuranic (MTRU), or mixed low-level (MLLW) remediation waste was generated at the Laboratory. More hazardous waste, MTRU waste, and MLLW were generated in FY13 than in FY12, and the majority of the increase was related to MTRU processing or lab cleanouts. These accomplishments and the analysis of the waste streams are discussed in much more detail within this report.

  16. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2

    NASA Technical Reports Server (NTRS)

    Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)

    1998-01-01

    The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.

  17. [Earth Science Technology Office's Computational Technologies Project

    NASA Technical Reports Server (NTRS)

    Fischer, James (Technical Monitor); Merkey, Phillip

    2005-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.

  18. Automated Stitching of Microtubule Centerlines across Serial Electron Tomograms

    PubMed Central

    Weber, Britta; Tranfield, Erin M.; Höög, Johanna L.; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen

    2014-01-01

    Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections, that is lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort. PMID:25438148
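
    The endpoint-matching step in the pipeline above is solved with a Markov random field and belief propagation. As a hedged, much-simplified stand-in for that step only, the Python sketch below greedily pairs endpoints across an already-aligned section boundary by nearest distance under a cutoff; the coordinates and cutoff are invented.

      import math

      # Endpoints of centerlines at the boundary between two adjacent sections,
      # (x, y) positions after alignment; values are illustrative only.
      ends_section_a = {"a1": (10.0, 12.0), "a2": (40.5, 7.2), "a3": (25.0, 30.0)}
      ends_section_b = {"b1": (10.4, 11.6), "b2": (26.1, 29.5)}
      MAX_GAP = 2.0  # maximum allowed endpoint distance (illustrative)

      def dist(p, q):
          return math.hypot(p[0] - q[0], p[1] - q[1])

      # Greedy matching: repeatedly take the closest unmatched pair under the cutoff.
      pairs = sorted((dist(pa, pb), a, b)
                     for a, pa in ends_section_a.items()
                     for b, pb in ends_section_b.items())
      matched_a, matched_b, matches = set(), set(), []
      for d, a, b in pairs:
          if d > MAX_GAP:
              break
          if a not in matched_a and b not in matched_b:
              matches.append((a, b, d))
              matched_a.add(a)
              matched_b.add(b)

      print(matches)  # pairs of endpoints considered to belong to the same microtubule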

  19. Development of instructional, interactive, multimedia anatomy dissection software: a student-led initiative.

    PubMed

    Inwood, Matthew J; Ahmad, Jamil

    2005-11-01

    Although dissection provides an unparalleled means of teaching gross anatomy, it constitutes a significant logistical and financial investment for educational institutions. The increasing availability and waning cost of computer equipment have enabled many institutions to supplement their anatomy curriculum with Computer Aided Learning (CAL) software. At the Royal College of Surgeons in Ireland, two undergraduate medical students designed and produced instructional anatomy dissection software for use by first and second year medical students. The software consists of full-motion, narrated, QuickTime MPG movies presented in a Macromedia environment. Forty-four movies, between 1 and 11 min in duration, were produced. Each movie corresponds to a dissection class and precisely demonstrates the dissection and educational objectives for that class. The software is distributed to students free of charge and they are encouraged to install it on their Apple iBook computers. Results of a student evaluation indicated that the software was useful, easy to use, and improved the students' experience in the dissection classes. The evaluation also indicated that only a minority of students regularly used the software or had it installed on their laptop computers. Accordingly, effort should also be directed toward making the software more accessible and increasing students' comfort and familiarity with novel instructional media. The successful design and implementation of this software demonstrates that CAL software can be employed to augment, enhance and improve anatomy instruction. In addition, effective, high quality, instructional multimedia software can be tailored to an educational institution's requirements and produced by novice programmers at minimal cost. Copyright 2005 Wiley-Liss, Inc.

  20. Automated stitching of microtubule centerlines across serial electron tomograms.

    PubMed

    Weber, Britta; Tranfield, Erin M; Höög, Johanna L; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen

    2014-01-01

    Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections, that is lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort.

  1. Subsite mapping of enzymes. Depolymerase computer modelling.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
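
    A hedged restatement of the goodness-of-fit criterion described above, in generic notation rather than the authors' own: the optimized subsite map $\mathbf{E}$ minimizes the weighted sum of squared residuals
    \[
    \chi^2(\mathbf{E}) \;=\; \sum_i w_i \left[ y_i^{\mathrm{exp}} - y_i^{\mathrm{calc}}(\mathbf{E}) \right]^2 ,
    \]
    where the index $i$ runs over the measured bond-cleavage frequencies and Michaelis parameters, the weights $w_i$ reflect experimental variance, and the minimized value serves as the criterion for the goodness of fit of the subsite map.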

  2. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging” based on bottom-up parsing for positive samples and the search for rule sets. The sizes of rule sets and the computation time depend on the search strategies. In addition to the global search, which synthesizes minimal rule sets, and the serial search, another method that synthesizes semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper shows several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.

  3. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
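
    The published cost model is not reproduced here; as a hedged illustration of the general idea of ordering jobs by predicted runtime so that compute instances stay busy, the Python sketch below applies a standard longest-processing-time heuristic. The job names, runtimes, and worker count are invented.

      import heapq

      # Predicted runtimes (hours) for genome-to-genome comparison jobs; illustrative.
      predicted_runtime = {"jobA": 9.5, "jobB": 7.0, "jobC": 4.0, "jobD": 3.5, "jobE": 1.0}
      n_workers = 2

      # Longest-processing-time first: sort jobs by descending predicted runtime,
      # then always hand the next job to the currently least-loaded worker.
      heap = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
      heapq.heapify(heap)
      assignment = {}
      for job, hours in sorted(predicted_runtime.items(), key=lambda kv: -kv[1]):
          load, worker = heapq.heappop(heap)
          assignment[job] = worker
          heapq.heappush(heap, (load + hours, worker))

      makespan = max(load for load, _ in heap)
      print(assignment, f"estimated makespan: {makespan:.1f} h")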

  4. Computational Methods for Stability and Control (COMSAC): The Time Has Come

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.

    2005-01-01

    Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. General motivation and the backdrop for these efforts are summarized, along with examples of current applications.

  5. COMCAN: a computer program for common cause analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Marshall, N.H.; Wilson, J.R.

    1976-05-01

    The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
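
    A hedged sketch of the kind of check described (illustrative Python, not the original COMCAN code): for each minimal cut set, test whether all of its components share a secondary-event susceptibility or a manufacturer. The component names and attributes are invented.

      # Minimal cut sets and component attributes; values are illustrative only.
      minimal_cut_sets = [{"pump1", "pump2"}, {"valve1", "sensor3"}]
      susceptibility = {"pump1": {"flood", "fire"}, "pump2": {"flood"},
                        "valve1": {"fire"}, "sensor3": {"seismic"}}
      manufacturer = {"pump1": "AcmeCo", "pump2": "AcmeCo",
                      "valve1": "ValveWorks", "sensor3": "SensorInc"}

      for cut_set in minimal_cut_sets:
          shared_causes = set.intersection(*(susceptibility[c] for c in cut_set))
          makers = {manufacturer[c] for c in cut_set}
          if shared_causes:
              print(sorted(cut_set), "shares common cause(s):", sorted(shared_causes))
          if len(makers) == 1:
              print(sorted(cut_set), "shares a common manufacturer:", makers.pop())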

  6. A compendium of computational fluid dynamics at the Langley Research Center

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley are identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative states of development as well as current research approaching definitive results.

  7. Work intensity in sacroiliac joint fusion and lumbar microdiscectomy

    PubMed Central

    Frank, Clay; Kondrashov, Dimitriy; Meyer, S Craig; Dix, Gary; Lorio, Morgan; Kovalsky, Don; Cher, Daniel

    2016-01-01

    Background The evidence base supporting minimally invasive sacroiliac (SI) joint fusion (SIJF) surgery is increasing. The work relative value units (RVUs) associated with minimally invasive SIJF are seemingly low. To date, only one published study describes the relative work intensity associated with minimally invasive SIJF. No study has compared work intensity vs other commonly performed spine surgery procedures. Methods Charts of 192 patients at five sites who underwent either minimally invasive SIJF (American Medical Association [AMA] CPT® code 27279) or lumbar microdiscectomy (AMA CPT® code 63030) were reviewed. Abstracted were preoperative times associated with diagnosis and patient care, intraoperative parameters including operating room (OR) in/out times and procedure start/stop times, and postoperative care requirements. Additionally, using a visual analog scale, surgeons estimated the intensity of intraoperative care, including mental, temporal, and physical demands and effort and frustration. Work was defined as operative time multiplied by task intensity. Results Patients who underwent minimally invasive SIJF were more likely female. Mean procedure times were lower in SIJF by about 27.8 minutes (P<0.0001) and mean total OR times were lower by 27.9 minutes (P<0.0001), but there was substantial overlap across procedures. Mean preservice and post-service total labor times were longer in minimally invasive SIJF (preservice times longer by 63.5 minutes [P<0.0001] and post-service labor times longer by 20.2 minutes [P<0.0001]). The number of postoperative visits was higher in minimally invasive SIJF. Mean total service time (preoperative + OR time + postoperative) was higher in the minimally invasive SIJF group (261.5 vs 211.9 minutes, P<0.0001). Intraoperative intensity levels were higher for mental, physical, effort, and frustration domains (P<0.0001 each). After taking into account intensity, intraoperative workloads showed substantial overlap. Conclusion Compared to a commonly performed lumbar spine surgical procedure, lumbar microdiscectomy, that currently has a higher work RVU, preoperative, intraoperative, and postoperative workload for minimally invasive SIJF is higher. The work RVU for minimally invasive SIJF should be adjusted upward as the relative amount of work is comparable. PMID:27555790

  8. Computer Documentation: Effects on Students' Computing Behaviors, Attitudes, and Use of Computers.

    ERIC Educational Resources Information Center

    Duin, Ann Hill

    This study investigated the effects of a minimal manual version versus a cards version of documentation on students' computing behaviors while learning to use a telecommunication system (Appleshare), students' attitudes toward the documentation, and students' later use of the system. Guidelines from the work of Carroll and colleagues at the IBM…

  9. The minimal work cost of information processing

    NASA Astrophysics Data System (ADS)

    Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato

    2015-07-01

    Irreversible information processing cannot be carried out without some inevitable thermodynamical work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices limits improvements in their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula precisely accounts for the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
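
    Stated loosely in LaTeX (a hedged paraphrase; the paper's exact result is formulated with a one-shot, smoothed entropy measure rather than the plain Shannon entropy), the minimal work to implement a logical process that discards information $D$ while producing output $O$ is
    \[
    W_{\min} \;\approx\; k_B T \ln 2 \; H(D \mid O),
    \]
    which reduces to Landauer's bound of $k_B T \ln 2$ per bit when one uniformly random bit is discarded independently of the output.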

  10. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical -- quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on two matrices obtained from the precedence relationships between tasks and on the data derived from those matrices. The developed scheduling algorithm has been tested in one application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
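
    The authors' method builds two matrices from the task precedence relationships; the hedged Python sketch below is not that method, but a plain list-scheduling stand-in that illustrates the objective of finishing precedence-constrained vision tasks on m processors with little idle time. The task names, durations, and dependencies are invented.

      import heapq

      # Task durations (arbitrary units) and precedence constraints; illustrative only.
      duration = {"grab": 2, "filter": 3, "segment": 4, "measure": 2, "report": 1}
      depends_on = {"filter": {"grab"}, "segment": {"grab"},
                    "measure": {"filter", "segment"}, "report": {"measure"}}
      m = 2  # number of processors

      finished, finish_time = set(), {}
      running = []  # heap of (finish time, task)
      t, free = 0, m
      pending = set(duration)

      while pending or running:
          # Start every ready task while a processor is free (longest task first).
          ready = [task for task in pending if depends_on.get(task, set()) <= finished]
          for task in sorted(ready, key=lambda x: -duration[x]):
              if free == 0:
                  break
              pending.remove(task)
              free -= 1
              heapq.heappush(running, (t + duration[task], task))
          # Advance time to the next completion.
          t, done = heapq.heappop(running)
          finished.add(done)
          finish_time[done] = t
          free += 1

      print(finish_time, "total elapsed time:", max(finish_time.values()))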

  11. Crosstalk Cancellation for a Simultaneous Phase Shifting Interferometer

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2014-01-01

    A method of minimizing fringe print-through in a phase-shifting interferometer, includes the steps of: (a) determining multiple transfer functions of pixels in the phase-shifting interferometer; (b) computing a crosstalk term for each transfer function; and (c) displaying, to a user, a phase-difference map using the crosstalk terms computed in step (b). Determining a transfer function in step (a) includes measuring intensities of a reference beam and a test beam at the pixels, and measuring an optical path difference between the reference beam and the test beam at the pixels. Computing crosstalk terms in step (b) includes computing an N-dimensional vector, where N corresponds to the number of transfer functions, and the N-dimensional vector is obtained by minimizing a variance of a modulation function in phase shifted images.

  12. Computed intraoperative navigation guidance--a preliminary report on a new technique.

    PubMed

    Enislidis, G; Wagner, A; Ploder, O; Ewers, R

    1997-08-01

    To assess the value of a computer-assisted three-dimensional guidance system (Virtual Patient System) in maxillofacial operations. Laboratory and open clinical study. Teaching Hospital, Austria. 6 patients undergoing various procedures including removal of foreign body (n=3) and biopsy, maxillary advancement, and insertion of implants (n=1 each). Storage of computed tomographic (CT) pictures on an optical disc, and imposition of intraoperative video images on to these. The resulting display is shown to the surgeon on a micromonitor in his head-up display for guidance during the operations. To improve orientation during complex or minimally invasive maxillofacial procedures and to make such operations easier and less traumatic. Successful transferral of computed navigation technology into an operation room environment and positive evaluation of the method by the surgeons involved. Computer-assisted three-dimensional guidance systems have the potential for making complex or minimally invasive procedures easier to do, thereby reducing postoperative morbidity.

  13. Motivational Beliefs, Student Effort, and Feedback Behaviour in Computer-Based Formative Assessment

    ERIC Educational Resources Information Center

    Timmers, Caroline F.; Braber-van den Broek, Jannie; van den Berg, Stephanie M.

    2013-01-01

    Feedback can only be effective when students seek feedback and process it. This study examines the relations between students' motivational beliefs, effort invested in a computer-based formative assessment, and feedback behaviour. Feedback behaviour is represented by whether a student seeks feedback and the time a student spends studying the…

  14. Establishing a K-12 Circuit Design Program

    ERIC Educational Resources Information Center

    Inceoglu, Mustafa M.

    2010-01-01

    Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…

  15. Identifying Predictors of Achievement in the Newly Defined Information Literacy: A Neural Network Analysis

    ERIC Educational Resources Information Center

    Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.

    2009-01-01

    Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…

  16. Electromagnetic Navigational Bronchoscopy Reduces the Time Required for Localization and Resection of Lung Nodules.

    PubMed

    Bolton, William David; Cochran, Thomas; Ben-Or, Sharon; Stephenson, James E; Ellis, William; Hale, Allyson L; Binks, Andrew P

    The aims of the study were to evaluate electromagnetic navigational bronchoscopy (ENB) and computed tomography-guided placement as localization techniques for minimally invasive resection of small pulmonary nodules and determine whether electromagnetic navigational bronchoscopy is a safer and more effective method than computed tomography-guided localization. We performed a retrospective review of our thoracic surgery database to identify patients who underwent minimally invasive resection for a pulmonary mass and used either electromagnetic navigational bronchoscopy or computed tomography-guided localization techniques between July 2011 and May 2015. Three hundred eighty-three patients had a minimally invasive resection during our study period, 117 of whom underwent electromagnetic navigational bronchoscopy or computed tomography localization (electromagnetic navigational bronchoscopy = 81; computed tomography = 36). There was no significant difference between computed tomography and electromagnetic navigational bronchoscopy patient groups with regard to age, sex, race, pathology, nodule size, or location. Both computed tomography and electromagnetic navigational bronchoscopy were 100% successful at localizing the mass, and there was no difference in the type of definitive surgical resection (wedge, segmentectomy, or lobectomy) (P = 0.320). Postoperative complications occurred in 36% of all patients, but there were no complications related to the localization procedures. In terms of localization time and surgical time, there was no difference between groups. However, the down/wait time between localization and resection was significant (computed tomography = 189 minutes; electromagnetic navigational bronchoscopy = 27 minutes); this explains why the difference in total time (sum of localization, down, and surgery) was significant (P < 0.001). We found electromagnetic navigational bronchoscopy to be as safe and effective as computed tomography-guided wire placement and to provide a significantly decreased down time between localization and surgical resection.

  17. Mating programs including genomic relationships

    USDA-ARS?s Scientific Manuscript database

    Computer mating programs have helped breeders minimize pedigree inbreeding and avoid recessive defects by mating animals with parents that have fewer common ancestors. With genomic selection, breed associations, AI organizations, and on-farm software providers could use new programs to minimize geno...

  18. AF Cr(VI) Minimize Roadmap: Phase 1 Results

    DTIC Science & Technology

    2010-12-01

    Presented at the Environmental Technology Technical Symposium & Workshop, 30 Nov - 2 Dec 2010, Washington, DC; sponsored by SERDP and ESTCP. Technical Session No. 2B: current state of Air Force hexavalent chromium (Cr(VI)) reduction efforts, Mr. Carl Perazzola, Air Force Corrosion.

  19. Using benchmarking to minimize common DOE waste streams: Volume 5. Office paper waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, V.

    Finding innovative ways to reduce waste streams generated at US Department of Energy (DOE) sites by 50% by the year 2000 is a challenge for DOE's waste minimization efforts. A team composed of members from several DOE facilities used the quality tool known as benchmarking to improve waste minimization efforts. First the team examined office waste generation and handling processes at their sites. Then team members developed telephone and written questionnaires to help identify potential "best-in-class" industry partners willing to share information about their best waste minimization techniques and technologies. The team identified two benchmarking partners, NIKE, Inc., in Beaverton, Oregon, and Microsoft, Inc., in Redmond, Washington. Both companies have proactive, employee-driven environmental issues programs. Both companies report strong employee involvement, management commitment, and readily available markets for recyclable materials such as white paper and nonwhite assorted paper. The availability of markets, the initiative and cooperation of employees, and management support are the main enablers for their programs. At both companies, recycling and waste reduction programs often cut across traditional corporate divisions such as procurement, janitorial services, environmental compliance, grounds maintenance, cafeteria operations, surplus sales, and shipping and receiving. These companies exhibited good cooperation between these functions to design and implement recycling and waste reduction programs.

  20. Periodic-disturbance accommodating control of the space station for asymptotic momentum management

    NASA Technical Reports Server (NTRS)

    Warren, Wayne; Wie, Bong; Geller, David

    1989-01-01

    Periodic-disturbance accommodating control is investigated for asymptotic momentum management of control moment gyros used as primary actuating devices for the Space Station. The proposed controller utilizes the concepts of quaternion feedback control and periodic-disturbance accommodation to achieve oscillations about the constant torque equilibrium attitude, while minimizing the control effort required. Three-axis coupled equations of motion, written in terms of quaternions, are derived for roll/yaw controller design and stability analysis. The quaternion feedback controller designed using the linear-quadratic regulator synthesis technique is shown to be robust for a wide range of pitch angles. It is also shown that the proposed controller tunes the open-loop unstable vehicle to a stable oscillatory motion which minimizes the control effort needed for steady-state operations.
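
    As a generic, hedged illustration of linear-quadratic regulator synthesis (not the Space Station model or the quaternion formulation above), the sketch below computes an LQR gain for a toy single-axis double integrator using SciPy's continuous-time algebraic Riccati solver; all matrices are invented.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Toy double integrator for one attitude axis: state x = [angle, rate].
      A = np.array([[0.0, 1.0],
                    [0.0, 0.0]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])   # state weighting (illustrative)
      R = np.array([[0.1]])      # control-effort weighting (illustrative)

      # Solve the continuous-time algebraic Riccati equation and form the LQR gain.
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P
      print("LQR gain:", K)

      # Closed-loop dynamics x_dot = (A - B K) x should have eigenvalues in the LHP.
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))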

  1. Assessment of spare reliability for multi-state computer networks within tolerable packet unreliability

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Huang, Cheng-Fu

    2015-04-01

    From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

  2. Energy and time determine scaling in biological and computer designs

    PubMed Central

    Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-01-01

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524

  3. Energy and time determine scaling in biological and computer designs.

    PubMed

    Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-08-19

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).

  4. Gener: a minimal programming module for chemical controllers based on DNA strand displacement

    PubMed Central

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-01-01

    Summary: Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of the DNA programs based on a simple two-domain strand displacement algebra, the minimal available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools, and in particular, its programs can be exported to Microsoft Research’s DSD tool as well as to LaTeX. Availability and implementation: Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a windows executable that can be run on Mac OS X and Linux by using Mono. Contact: ozan@cosbi.eu PMID:25957353

  5. Gener: a minimal programming module for chemical controllers based on DNA strand displacement.

    PubMed

    Kahramanoğulları, Ozan; Cardelli, Luca

    2015-09-01

    : Gener is a development module for programming chemical controllers based on DNA strand displacement. Gener is developed with the aim of providing a simple interface that minimizes the opportunities for programming errors: Gener allows the user to test the computations of the DNA programs based on a simple two-domain strand displacement algebra, the minimal available so far. The tool allows the user to perform stepwise computations with respect to the rules of the algebra as well as exhaustive search of the computation space with different options for exploration and visualization. Gener can be used in combination with existing tools, and in particular, its programs can be exported to Microsoft Research's DSD tool as well as to LaTeX. Gener is available for download at the Cosbi website at http://www.cosbi.eu/research/prototypes/gener as a windows executable that can be run on Mac OS X and Linux by using Mono. ozan@cosbi.eu. © The Author 2015. Published by Oxford University Press.

  6. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  7. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  8. Adapting Buildings for Indoor Air Quality in a Changing Climate

    EPA Pesticide Factsheets

    Climate change presents many challenges, including severe weather events. These events, and efforts to minimize their effects through weatherization, can adversely affect indoor environments.

  9. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functional is often used to improve accuracy from local exchange correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbital (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  10. Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.

    PubMed

    Vassena, Eliana; Holroyd, Clay B; Alexander, William H

    2017-01-01

    In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.

  11. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
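
    A common LaTeX form of the PWLS objective consistent with the description above (a hedged sketch in generic notation, not the authors' exact formulation) is
    \[
    \Phi(p) \;=\; (y - p)^{\mathrm{T}}\, \Sigma^{-1} (y - p) \;+\; \beta \sum_{j} \sum_{k \in \mathcal{N}_j} w_{jk}\, (p_j - p_k)^2 ,
    \]
    where $y$ is the measured sinogram, $p$ the ideal sinogram being estimated, $\Sigma$ the covariance built from the first- and second-order noise moments, and the quadratic Gibbs penalty couples each bin $j$ to its neighbours $\mathcal{N}_j$ with weights $w_{jk}$; the parameter $\beta$ sets the noise-resolution trade-off.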

  12. An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Watson, Willie R. (Technical Monitor); Tam, Christopher

    2004-01-01

    This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.

  13. Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines

    DTIC Science & Technology

    1989-09-01

    Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines, by Srinivas Devadas and Kurt Keutzer. Supported by the ...Projects Agency under contract number N00014-87-K-0825. Author information for Devadas: Department of Electrical Engineering and Computer Science, Room 36..., MA 02139; (617) 253-0292.

  14. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  15. [Earth and Space Sciences Project Services for NASA HPCC

    NASA Technical Reports Server (NTRS)

    Merkey, Phillip

    2002-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.

  16. Efficient Reverse-Engineering of a Developmental Gene Regulatory Network

    PubMed Central

    Cicin-Sain, Damjan; Ashyraliyev, Maksat; Jaeger, Johannes

    2012-01-01

    Understanding the complex regulatory networks underlying development and evolution of multi-cellular organisms is a major problem in biology. Computational models can be used as tools to extract the regulatory structure and dynamics of such networks from gene expression data. This approach is called reverse engineering. It has been successfully applied to many gene networks in various biological systems. However, to reconstitute the structure and non-linear dynamics of a developmental gene network in its spatial context remains a considerable challenge. Here, we address this challenge using a case study: the gap gene network involved in segment determination during early development of Drosophila melanogaster. A major problem for reverse-engineering pattern-forming networks is the significant amount of time and effort required to acquire and quantify spatial gene expression data. We have developed a simplified data processing pipeline that considerably increases the throughput of the method, but results in data of reduced accuracy compared to those previously used for gap gene network inference. We demonstrate that we can infer the correct network structure using our reduced data set, and investigate minimal data requirements for successful reverse engineering. Our results show that timing and position of expression domain boundaries are the crucial features for determining regulatory network structure from data, while it is less important to precisely measure expression levels. Based on this, we define minimal data requirements for gap gene network inference. Our results demonstrate the feasibility of reverse-engineering with much reduced experimental effort. This enables more widespread use of the method in different developmental contexts and organisms. Such systematic application of data-driven models to real-world networks has enormous potential. Only the quantitative investigation of a large number of developmental gene regulatory networks will allow us to discover whether there are rules or regularities governing development and evolution of complex multi-cellular organisms. PMID:22807664

  17. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software development-estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
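
    COSTMODL's internal equations are not reproduced here; the Python sketch below is a hedged illustration of the kind of power-law effort model such estimation tools implement, with purely illustrative coefficients rather than COSTMODL's values.

      # Hedged COCOMO-style sketch: effort grows as a power of product size.
      A, B = 2.8, 1.12          # illustrative scale and exponent, not COSTMODL's
      ksloc = 40.0              # estimated size in thousands of source lines

      effort_person_months = A * ksloc ** B
      schedule_months = 2.5 * effort_person_months ** 0.35   # common rule-of-thumb form
      avg_staff = effort_person_months / schedule_months

      print(f"effort   ~ {effort_person_months:.0f} person-months")
      print(f"schedule ~ {schedule_months:.1f} months, average staff ~ {avg_staff:.1f}")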

  18. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment

    PubMed Central

    Riley, Paul W.; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Background: Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. Methods: The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules developed on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Results: Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. Conclusions: To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process. PMID:28706751
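
    A hedged sketch of the kind of rule such a system can encode (illustrative Python, not the authors' validated rule set): dilution-corrected activities should agree across dilutions, so a large spread suggests nonparallelism (a possible inhibitor) and triggers reflex review rather than autoverification. The thresholds and values are invented.

      # Measured factor activity (% of normal) at each dilution, already corrected
      # for the dilution factor; ideally all values agree. Values are illustrative.
      corrected_activity = {10: 62.0, 20: 60.5, 40: 58.8}

      AMR_LOW, AMR_HIGH = 5.0, 150.0      # analytic measurement range (illustrative)
      MAX_SPREAD_PCT = 15.0               # allowed disagreement between dilutions

      values = list(corrected_activity.values())
      mean_activity = sum(values) / len(values)
      spread_pct = 100.0 * (max(values) - min(values)) / mean_activity

      if not (AMR_LOW <= mean_activity <= AMR_HIGH):
          decision = "reflex: repeat at additional dilutions"
      elif spread_pct > MAX_SPREAD_PCT:
          decision = "reflex: possible inhibitor (nonparallelism), technologist review"
      else:
          decision = f"autoverify: report {mean_activity:.0f}% activity"
      print(decision)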

  19. Development and Implementation of a Coagulation Factor Testing Method Utilizing Autoverification in a High-volume Clinical Reference Laboratory Environment.

    PubMed

    Riley, Paul W; Gallea, Benoit; Valcour, Andre

    2017-01-01

    Testing coagulation factor activities requires that multiple dilutions be assayed and analyzed to produce a single result. The slope of the line created by plotting measured factor concentration against sample dilution is evaluated to discern the presence of inhibitors giving rise to nonparallelism. Moreover, samples producing results on initial dilution falling outside the analytic measurement range of the assay must be tested at additional dilutions to produce reportable results. The complexity of this process has motivated a large clinical reference laboratory to develop advanced computer algorithms with automated reflex testing rules to complete coagulation factor analysis. A method was developed for autoverification of coagulation factor activity using expert rules developed on an off-the-shelf, commercially available data manager system integrated into an automated coagulation platform. Here, we present an approach allowing for the autoverification and reporting of factor activity results with greatly diminished technologist effort. To the best of our knowledge, this is the first report of its kind providing a detailed procedure for implementation of autoverification expert rules as applied to coagulation factor activity testing. Advantages of this system include ease of training for new operators, minimization of technologist time spent, reduction of staff fatigue, minimization of unnecessary reflex tests, optimization of turnaround time, and assurance of the consistency of the testing and reporting process.

  20. New perspectives on the pedagogy of programming in a developing country context

    NASA Astrophysics Data System (ADS)

    Apiola, Mikko; Tedre, Matti

    2012-09-01

    Programming education is a widely researched and intensely discussed topic. The literature proposes a broad variety of pedagogical viewpoints, practical approaches, learning theories, motivational vehicles, and other elements of the learning situation. However, little effort has been put into understanding cultural and contextual differences in the pedagogy of programming. Pedagogical literature shows that educational design should account for differences in the ways of learning and teaching between industrialized and developing countries. However, the nature and implications of those differences are hitherto unclear. Using group interviews and quantitative surveys, we identified several crucial elements for contextualizing programming education. Our results reveal that students are facing many challenges similar to those of students in the West: they often lack deep-level learning skills and problem-solving skills, which are required for learning computer programming, and, secondly, that from the students' viewpoint the standard learning environment does not offer enough support for gaining the requisite development. With inadequate support students may resort to surface learning and may adopt extrinsic sources of motivation. Learning is also hindered by many contextually unique factors, such as unfamiliar pedagogical approaches, language problems, and cultural differences. Our analysis suggests that these challenges can be minimized by increasing the number of practical exercises, by carefully selecting between guided and minimally guided environments, by rigorously monitoring student progress, and by providing students with timely help, repetitive exercises, clear guidelines, and emotional support.

  1. Computers in Physical Education.

    ERIC Educational Resources Information Center

    Sydow, James Armin

    Although computers have potential applications in the elementary and secondary physical education curriculum, current usage is minimal when compared to other disciplines. However, present trends indicate a substantial growth in the use of the computer in a supportive role in assisting the teacher in the management of instructional activities.…

  2. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time needed to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  3. ENERGY AND SCIENCE: Five-Year Bibliography 1990-1994

    DTIC Science & Technology

    1995-12-01

    reviews the U.S. government's efforts to support Venezuela's energy sector. Sector de Energia en Venezuela: La Produccion Petrolera y las Condiciones... Funding to renovate existing laboratories or build new ones is often minimal; four of the eight agencies recently started up task forces to reexamine their research... laboratory repairs.

  4. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the relevant problems for large and flexible molecules is taking center stage with regard to efficient algorithms, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.

  5. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and that quality improves in some cases as well.
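
    The quantity such partitioners try to minimize can be written down directly. The sketch below computes the total communication volume of a parallel sparse matrix-vector product for a given row/column ownership map; it is only the metric, not the nested dissection partitioner itself, and the matrix and partitions are illustrative.

```python
# Sketch of the total-communication-volume metric that sparse-matrix partitioners
# try to minimize for parallel matrix-vector products. This is only the metric,
# not the nested-dissection partitioner described in the abstract.
import numpy as np
from scipy.sparse import random as sparse_random

def communication_volume(A_csr, row_part, col_part):
    """Count input-vector entries that must be sent between processes.

    row_part[i] = owner of row i (and of output entry y[i]);
    col_part[j] = owner of input entry x[j].
    For each nonzero A[i, j], the owner of row i needs x[j]; each distinct
    (row owner, x entry) pair that crosses processes costs one word.
    """
    needed = set()
    indptr, indices = A_csr.indptr, A_csr.indices
    for i in range(A_csr.shape[0]):
        for j in indices[indptr[i]:indptr[i + 1]]:
            if row_part[i] != col_part[j]:
                needed.add((row_part[i], j))
    return len(needed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = sparse_random(200, 200, density=0.02, random_state=rng, format="csr")
    nproc = 4
    block = np.repeat(np.arange(nproc), 200 // nproc)    # contiguous 1D blocks
    cyclic = np.arange(200) % nproc                      # cyclic distribution
    print("volume, block partition: ", communication_volume(A, block, block))
    print("volume, cyclic partition:", communication_volume(A, cyclic, cyclic))
```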

  6. Designing a practical system for spectral imaging of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L

    2005-09-20

    In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide semiconductors (CMOS) and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are expensive computationally, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
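
    A bare-bones version of the annealing loop is sketched below: a combined cost (spectral RMSE plus a crude weighted-error stand-in for the colorimetric term) is minimized over Gaussian sensor centers and widths with Metropolis acceptance and geometric cooling. The toy spectrum, recovery model, and all constants are assumptions for illustration, not the cost function or sensors of the cited work.

```python
# Minimal simulated-annealing sketch for choosing Gaussian sensor parameters.
# The cost (spectral RMSE plus a crude weighted-error stand-in for the
# colorimetric term) and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 61)                        # wavelengths, nm
target = np.exp(-0.5 * ((wl - 550) / 60.0) ** 2)      # toy "skylight" spectrum

def gaussian_sensor(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def recover(params):
    # Minimum-norm spectrum consistent with the simulated broadband responses.
    S = np.stack([gaussian_sensor(c, w) for c, w in params.reshape(-1, 2)])
    responses = S @ target
    return S.T @ np.linalg.solve(S @ S.T + 1e-9 * np.eye(len(S)), responses)

def cost(params):
    est = recover(params)
    spectral = np.sqrt(np.mean((est - target) ** 2))          # spectral RMSE
    weighted = np.mean(np.abs(est - target) * target)         # colorimetric stand-in
    return spectral + 5.0 * weighted

x = np.array([450.0, 40.0, 550.0, 40.0, 650.0, 40.0])         # 3 sensors: (center, width)
cur = cost(x)
best, best_c, T = x.copy(), cur, 1.0
for _ in range(2000):
    cand = x + rng.normal(scale=[5.0, 2.0] * 3)               # perturb centers and widths
    c = cost(cand)
    if c < cur or rng.random() < np.exp((cur - c) / T):       # Metropolis acceptance
        x, cur = cand, c
    if c < best_c:
        best, best_c = cand.copy(), c
    T *= 0.999                                                # geometric cooling
print("best (center, width) pairs:\n", best.reshape(-1, 2), "\ncost:", round(best_c, 4))
```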

  7. Analysis and design of planar and non-planar wings for induced drag minimization

    NASA Technical Reports Server (NTRS)

    Mortara, K.; Straussfogel, Dennis M.; Maughmer, Mark D.

    1991-01-01

    The goal of the work was to develop and validate computational tools to be used for the design of planar and non-planar wing geometries for minimum induced drag. Because of the iterative nature of the design problem, it is important that, in addition to being sufficiently accurate for the problem at hand, these tools are reasonably fast and computationally efficient. Toward this end, a method of predicting induced drag in the presence of a non-rigid wake is coupled with a panel method. The induced drag prediction technique is based on the Kutta-Joukowski law applied at the trailing edge. Until recently, the use of this method has not been fully explored, and pressure integration and Trefftz-plane calculations have been favored. As is shown in this report, however, the Kutta-Joukowski method is able to give better results for a given amount of effort than the more common techniques, particularly when relaxed wakes and non-planar wing geometries are considered. Using these tools, a workable design method is in place which takes into account relaxed wakes and non-planar wing geometries. It is recommended that this method be used to design a wind-tunnel experiment to verify the predicted aerodynamic benefits of non-planar wing geometries.
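
    The trailing-edge Kutta-Joukowski evaluation amounts to a discrete sum of circulation times induced downwash. The sketch below applies it to an assumed elliptic loading on a planar wing and recovers the classical elliptic-wing induced drag; the numbers are placeholders, not results from the report.

```python
# Illustrative discrete Kutta-Joukowski induced-drag sum for a planar wing,
# D_i = rho * sum_j Gamma_j * w_j * dy_j, with the downwash w_j induced at the
# trailing edge by the trailing vortex system. The elliptic loading example and
# all numbers are assumptions for demonstration, not results from the report.
import numpy as np

rho, V, b = 1.225, 30.0, 10.0                  # air density, speed, span (SI units)
n = 200
y = np.linspace(-b / 2, b / 2, n + 1)          # panel edge (trailing vortex) stations
yc = 0.5 * (y[:-1] + y[1:])                    # control points at panel centers
dy = np.diff(y)

gamma0 = 20.0
gamma = gamma0 * np.sqrt(np.clip(1 - (2 * yc / b) ** 2, 0.0, None))   # elliptic loading

# Strength of each trailing vortex = jump in bound circulation across a panel edge.
gamma_edge = np.concatenate(([0.0], gamma, [0.0]))
trail = np.diff(gamma_edge)                    # shed vorticity at each edge station

# Downwash at the control points from semi-infinite trailing vortices (planar wake):
# w(yc) = sum_k trail_k / (4*pi*(yc - y_k)).
w = np.array([np.sum(trail / (4 * np.pi * (ycj - y))) for ycj in yc])

D_induced = rho * np.sum(gamma * w * dy)
CL = rho * V * np.sum(gamma * dy) / (0.5 * rho * V ** 2 * b * 1.0)   # lift/(q*S), 1 m chord assumed
print(f"induced drag ~ {D_induced:.1f} N, CL ~ {CL:.3f}")
```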

  8. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  9. DOE planning workshop advanced biomedical technology initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-06-01

    The Department of Energy has made major contributions in the biomedical sciences with programs in medical applications and instrumentation development, molecular biology, the human genome, and computational sciences. In an effort to help determine DOE's role in applying these capabilities to the nation's health care needs, a planning workshop was held on January 11-12, 1994. The workshop was co-sponsored by the Department's Office of Energy Research and Defense Programs organizations. Participants represented industry, medical research institutions, national laboratories, and several government agencies. They attempted to define the needs of the health care industry, identify DOE laboratory capabilities that address these needs, and determine how DOE, in cooperation with other team members, could begin an initiative with the goals of reducing health care costs while improving the quality of health care delivery through the proper application of technology and computational systems. This document is a report of that workshop. Seven major technology development thrust areas were considered. Each involves development of various aspects of imaging, optical, sensor, and data processing and storage technologies. The thrust areas as prioritized for DOE are: (1) Minimally Invasive Procedures; (2) Technologies for Individual Self Care; (3) Outcomes Research; (4) Telemedicine; (5) Decision Support Systems; (6) Assistive Technology; (7) Prevention and Education.

  10. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) image can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full-width-at-half-maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621

  11. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth x-ray computed tomography (CT) image can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width-at-half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
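
    The core of the adaptive weighting can be written compactly: neighbor differences are weighted by an exponential of the local intensity gradient so that genuine edges are penalized less than noise. The sketch below compares plain TV and this AwTV-style penalty on a toy phantom; the weight form and parameters are illustrative, and the projection-data (POCS) constraints of the full algorithm are omitted.

```python
# Sketch of the adaptive-weighted TV idea: neighbor differences are weighted by
# exp(-(dI/delta)^2) so that strong genuine edges contribute less to the penalty
# than small noise fluctuations. The weight form and delta are illustrative; the
# full AwTV-POCS algorithm also enforces projection-data constraints, omitted here.
import numpy as np

def plain_tv(img, eps=1e-8):
    dx = np.diff(img, axis=0, append=img[-1:, :])     # forward differences
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return np.sum(np.sqrt(dx ** 2 + dy ** 2 + eps))

def awtv(img, delta=0.5, eps=1e-8):
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    wx = np.exp(-(dx / delta) ** 2)                   # adaptive edge weights
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(np.sqrt(wx * dx ** 2 + wy * dy ** 2 + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom = np.zeros((64, 64))
    phantom[16:48, 16:48] = 1.0                       # piecewise-constant test image
    noisy = phantom + 0.05 * rng.standard_normal(phantom.shape)
    # The phantom's true edges dominate plain TV but are strongly down-weighted
    # by the adaptive weights, which is what preserves them during minimization.
    print("phantom  TV:", round(plain_tv(phantom), 1), " AwTV:", round(awtv(phantom), 1))
    print("noisy    TV:", round(plain_tv(noisy), 1), " AwTV:", round(awtv(noisy), 1))
```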

  12. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.

  13. Free Energy Minimization Calculation of Complex Chemical Equilibria. Reduction of Silicon Dioxide with Carbon at High Temperature.

    ERIC Educational Resources Information Center

    Wai, C. M.; Hutchinson, S. G.

    1989-01-01

    Discusses the calculation of free energy in reactions between silicon dioxide and carbon. Describes several computer programs for calculating the free energy minimization and their uses in chemistry classrooms. Lists 16 references. (YP)

  14. Interactive Graphics Analysis for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.

    1983-01-01

    Program uses higher-order far-field drag minimization. Computer program WDES/WDEM is a preliminary aerodynamic design tool for one or two interacting, subsonic lifting surfaces. The subcritical wing design code employs a higher-order far-field drag minimization technique. Linearized aerodynamic theory is used. Program written in FORTRAN IV.

  15. Mating programs including genomic relationships and dominance effects

    USDA-ARS?s Scientific Manuscript database

    Computer mating programs have helped breeders minimize pedigree inbreeding and avoid recessive defects by mating animals with parents that have fewer common ancestors. With genomic selection, breed associations, AI organizations, and on-farm software providers could use new programs to minimize geno...

  16. Developing and Implementing Monitoring and Evaluation Methods in the New Era of Expanded Care and Treatment of HIV/AIDS

    ERIC Educational Resources Information Center

    Wolf, R. Cameron; Bicego, George; Marconi, Katherine; Bessinger, Ruth; van Praag, Eric; Noriega-Minichiello, Shanti; Pappas, Gregory; Fronczak, Nancy; Peersman, Greet; Fiorentino, Renee K.; Rugg, Deborah; Novak, John

    2004-01-01

    The sharp rise in the HIV/AIDS burden worldwide has elicited calls for increased efforts to combat the spread and impact of HIV/AIDS. Efforts must continue with the aim to decrease new infections. At the same time, care and treatment services for those already infected can lead to longer, productive lives, thereby minimizing negative effects on…

  17. Herding Elephants: Coping with the Technological Revolution in our Schools.

    ERIC Educational Resources Information Center

    Tunison, Scott

    2002-01-01

    Considers the extent to which schools need to address computer technology, and discusses which strategies can be employed to maximize the benefits and minimize the difficulties in integrating computer technology into the current educational framework. Advocates incorporation of computer technology into educational practices. (Contains 17…

  18. Amber Plug-In for Protein Shop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliva, Ricardo

    2004-05-10

    The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module to solve the energy minimization problem using an optimization algorithm from the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library for computing molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", to initialize internal variables needed to compute the energy, and (2) "eval", to evaluate the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded 1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently makes use of the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.

  19. Shape optimization of self-avoiding curves

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
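
    A toy discrete analogue of the energy being minimized is sketched below: a closed polygon in 3D with a discrete bending term, a pairwise proximity penalty that discourages non-neighboring vertices from approaching each other, and a soft length constraint, minimized here with a quasi-Newton routine rather than the paper's SQP/finite-element scheme. All energy forms and parameters are simplifying assumptions.

```python
# Toy discrete analogue of the bending-plus-proximity energy for a closed curve.
# The energies, parameters, and the use of L-BFGS instead of the paper's
# SQP/finite-element scheme are simplifying assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

n = 24
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
x0 = np.stack([np.cos(t), np.sin(t), 0.05 * np.sin(3 * t)], axis=1)   # perturbed circle

def energy(flat, k_bend=1.0, k_prox=50.0, d_min=0.3):
    p = flat.reshape(n, 3)
    e = np.roll(p, -1, axis=0) - p                              # edge vectors (closed curve)
    bend = k_bend * np.sum((np.roll(e, -1, axis=0) - e) ** 2)   # discrete bending energy
    prox = 0.0
    for i in range(n):
        for j in range(i + 3, n):
            if i == 0 and j > n - 3:                            # skip wrap-around neighbors of vertex 0
                continue
            d = np.linalg.norm(p[i] - p[j])
            if d < d_min:
                prox += k_prox * (d_min - d) ** 2               # penalty only when too close
    length = np.sum(np.linalg.norm(e, axis=1))
    return bend + prox + 10.0 * (length - 2 * np.pi) ** 2       # soft total-length constraint

res = minimize(energy, x0.ravel(), method="L-BFGS-B", options={"maxiter": 100})
print("final energy:", round(res.fun, 4), " converged:", res.success)
```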

  20. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality; techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, and memory technologies; and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  1. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited-area domain. This application of nonlinear constrained optimization is of the large-dimensional type, and the conjugate-gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
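
    The structure of such an augmented-Lagrangian scheme is easy to show on a toy problem: an outer loop updates the multiplier while an inner unconstrained solve uses a nonlinear conjugate-gradient method. The quadratic objective and single linear constraint below merely stand in for the shallow-water integral invariants.

```python
# Minimal augmented-Lagrangian sketch: enforce one equality constraint c(x)=0
# on a toy quadratic objective, using SciPy's nonlinear conjugate-gradient
# minimizer for each inner unconstrained problem. The toy objective and
# constraint stand in for the shallow-water integral invariants of the paper.
import numpy as np
from scipy.optimize import minimize

def f(x):                  # objective to minimize
    return 0.5 * np.sum((x - np.array([3.0, -1.0, 2.0])) ** 2)

def c(x):                  # single integral-style constraint: sum of x is fixed
    return np.sum(x) - 1.0

lam, mu = 0.0, 10.0        # multiplier estimate and penalty parameter
x = np.zeros(3)
for outer in range(10):
    aug = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
    x = minimize(aug, x, method="CG").x          # inner unconstrained CG solve
    lam += mu * c(x)                             # first-order multiplier update
    if abs(c(x)) < 1e-8:
        break
print("x =", np.round(x, 6), " constraint residual =", c(x))
# Analytic answer for this toy: x = (2, -2, 1), which satisfies sum(x) = 1.
```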

  2. Intravenous catheter training system: computer-based education versus traditional learning methods.

    PubMed

    Engum, Scott A; Jeffries, Pamela; Fisher, Lisa

    2003-07-01

    Virtual reality simulators allow trainees to practice techniques without consequences, reduce potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, the lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience in teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants, 70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training, were recruited. The students ranged in age from 20 to 55 years (mean 25). Fifty-eight percent were female and 68% perceived themselves as having average computer skills (25% declaring excellence). The first method of IV catheter education compared was a traditional method of instruction involving a scripted self-study module comprising a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms. The second method involved an interactive, multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory groups. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure with the traditional laboratory group compared with the computer catheter simulator group. Both groups were similar in their ability to demonstrate the skill correctly. Conclusions: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors. Technology alone is not a solution for stand-alone IV catheter placement education. A traditional learning method was preferred by students. The combination of these two methods of education may further enhance the trainee's satisfaction and skill acquisition level.

  3. Nuclear measurement of subgrade moisture.

    DOT National Transportation Integrated Search

    1973-01-01

    The basic consideration in evaluating subgrade moisture conditions under pavements is the selection of a method of determining moisture contents that is sufficiently accurate and can be used with minimal effort, interference with traffic, and recalib...

  4. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2012-07-01 2012-07-01 false How does the Secretary compute maintenance of effort in...

  5. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2013-07-01 2013-07-01 false How does the Secretary compute maintenance of effort in...

  6. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2011-07-01 2011-07-01 false How does the Secretary compute maintenance of effort in...

  7. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2014-07-01 2014-07-01 false How does the Secretary compute maintenance of effort in...

  8. 34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in...

  9. Comparison of joint space versus task force load distribution optimization for a multiarm manipulator system

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Alberts, Thomas E.

    1989-01-01

    It is often proposed that the redundancy in choosing a force distribution for multiple arms grasping a single object should be handled by minimizing a quadratic performance index. The performance index may be formulated in terms of joint torques or in terms of the Cartesian space force/torque applied to the body by the grippers. The former seeks to minimize power consumption while the latter minimizes body stresses. Because the cost functions are related to each other by a joint angle dependent transformation on the weight matrix, it might be argued that either method tends to reduce power consumption, but clearly the joint space minimization is optimal. A comparison of these two options is presented with consideration given to computational cost and power consumption. Simulation results using a two arm robot system are presented to show the savings realized by employing the joint space optimization. These savings are offset by additional complexity, computation time and in some cases processor power consumption.
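
    The two criteria can be compared directly on a toy two-arm example, where each arm applies a Cartesian force f_i, the resultant must equal a desired wrench, and joint torques follow from tau_i = J_i^T f_i. The Jacobians and desired force below are made-up numbers; the sketch only illustrates how the joint-space criterion trades a slightly larger force cost for a lower torque cost.

```python
# Toy comparison of task-space vs joint-space load distribution for two arms
# grasping one object in the plane. Each arm applies a Cartesian force f_i and
# its joint torques are tau_i = J_i^T f_i; the resultant must equal F_des.
# The Jacobians and desired wrench are made-up numbers for illustration.
import numpy as np

J1 = np.array([[0.9, 0.2], [-0.1, 0.7]])      # arm 1 Jacobian (assumed)
J2 = np.array([[0.3, 0.8], [0.6, -0.4]])      # arm 2 Jacobian (assumed)
F_des = np.array([5.0, 2.0])                  # required resultant force on the object

def torques(f1):
    f2 = F_des - f1                           # force balance fixes the second arm
    return J1.T @ f1, J2.T @ f2

# (a) Task-space criterion: minimize |f1|^2 + |f2|^2  ->  split the load evenly.
f1_task = F_des / 2.0

# (b) Joint-space criterion: minimize |J1^T f1|^2 + |J2^T (F_des - f1)|^2,
#     written as a linear least-squares problem in f1.
A = np.vstack([J1.T, -J2.T])
b = np.concatenate([np.zeros(2), -J2.T @ F_des])
f1_joint, *_ = np.linalg.lstsq(A, b, rcond=None)

for name, f1 in [("task-space", f1_task), ("joint-space", f1_joint)]:
    t1, t2 = torques(f1)
    torque_cost = np.sum(t1 ** 2) + np.sum(t2 ** 2)
    force_cost = np.sum(f1 ** 2) + np.sum((F_des - f1) ** 2)
    print(f"{name:11s} torque cost {torque_cost:7.3f}   force cost {force_cost:7.3f}")
```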

  10. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  11. Strategies to Minimize Antibiotic Resistance

    PubMed Central

    Lee, Chang-Ro; Cho, Ill Hwan; Jeong, Byeong Chul; Lee, Sang Hee

    2013-01-01

    Antibiotic resistance can be reduced by using antibiotics prudently based on guidelines of antimicrobial stewardship programs (ASPs) and various data such as pharmacokinetic (PK) and pharmacodynamic (PD) properties of antibiotics, diagnostic testing, antimicrobial susceptibility testing (AST), clinical response, and effects on the microbiota, as well as by new antibiotic developments. The controlled use of antibiotics in food animals is another cornerstone among efforts to reduce antibiotic resistance. All major resistance-control strategies recommend education for patients, children (e.g., through schools and day care), the public, and relevant healthcare professionals (e.g., primary-care physicians, pharmacists, and medical students) regarding unique features of bacterial infections and antibiotics, prudent antibiotic prescribing as a positive construct, and personal hygiene (e.g., handwashing). The problem of antibiotic resistance can be minimized only by concerted efforts of all members of society for ensuring the continued efficiency of antibiotics. PMID:24036486

  12. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion, and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  13. Integrating computation into the undergraduate curriculum: A vision and guidelines for future developments

    NASA Astrophysics Data System (ADS)

    Chonacky, Norman; Winch, David

    2008-04-01

    There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.

  14. Micro-video display with ocular tracking and interactive voice control

    NASA Technical Reports Server (NTRS)

    Miller, James E.

    1993-01-01

    In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.

  15. A random-key encoded harmony search approach for energy-efficient production scheduling with shared resources

    NASA Astrophysics Data System (ADS)

    Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.

    2015-11-01

    When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random keys encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and a first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
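
    The random-key idea is that a continuous vector is decoded into a job permutation by sorting, which lets a continuous metaheuristic such as harmony search explore discrete schedules. The bare-bones sketch below uses a crude stand-in objective (counting machine-mode switches); the real scheduler's shared-resource and energy models are not represented, and all parameter values are illustrative.

```python
# Bare-bones random-key harmony search for ordering jobs so that jobs needing
# the same machine "energy mode" are grouped (a crude stand-in for minimizing
# energy-state switches). The objective, job data, and HS parameters are
# illustrative assumptions, not the cited scheduler's model.
import numpy as np

rng = np.random.default_rng(42)
energy_mode = rng.integers(0, 3, size=12)           # each job needs one of 3 machine modes

def decode(keys):
    return np.argsort(keys)                         # random keys -> job permutation

def cost(keys):
    modes = energy_mode[decode(keys)]
    return int(np.count_nonzero(np.diff(modes)))    # number of costly mode switches

hms, hmcr, par, iters = 10, 0.9, 0.3, 2000          # harmony memory size and rates
memory = [rng.random(len(energy_mode)) for _ in range(hms)]
scores = [cost(h) for h in memory]

for _ in range(iters):
    new = np.empty(len(energy_mode))
    for k in range(len(new)):
        if rng.random() < hmcr:                     # take value from harmony memory
            new[k] = memory[rng.integers(hms)][k]
            if rng.random() < par:                  # small pitch adjustment
                new[k] = np.clip(new[k] + rng.normal(scale=0.05), 0, 1)
        else:                                       # random consideration
            new[k] = rng.random()
    c = cost(new)
    worst = int(np.argmax(scores))
    if c < scores[worst]:                           # replace the worst harmony
        memory[worst], scores[worst] = new, c

best = memory[int(np.argmin(scores))]
print("best order:", decode(best), " mode switches:", min(scores))
```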

  16. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.

    PubMed

    Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán

    2018-05-23

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code, which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
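
    The forward-mode idea behind such tools can be shown in a few lines with dual numbers: propagating a derivative component through ordinary arithmetic yields exact derivatives of whatever expression is evaluated. The sketch below differentiates a made-up one-dimensional "energy" with respect to a basis exponent; it is not DiffiQult code and does not use its API.

```python
# Tiny forward-mode automatic differentiation with dual numbers, used to get an
# exact derivative of a toy expression with respect to a "basis exponent".
# This only illustrates the AD idea; the energy below is a made-up surrogate.
import math

class Dual:
    """Number a + b*eps with eps**2 = 0; the b component carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o); return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o); return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return Dual(o) - self
    def __mul__(self, o):
        o = self._wrap(o); return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = self._wrap(o)
        return Dual(self.val / o.val, (self.der * o.val - self.val * o.der) / o.val ** 2)
    def __pow__(self, p):                      # real exponent only
        return Dual(self.val ** p, p * self.val ** (p - 1) * self.der)

def sqrt(x):
    return x ** 0.5

def toy_energy(alpha):
    # Made-up 1D surrogate: a "kinetic" term growing with the exponent minus an
    # "attraction" term; not an actual Hartree-Fock energy expression.
    return 1.5 * alpha - 2.0 * sqrt(alpha) / (1.0 + alpha)

a = Dual(0.8, 1.0)                             # seed d(alpha)/d(alpha) = 1
E = toy_energy(a)
print(f"E = {E.val:.6f}   dE/dalpha = {E.der:.6f}")   # exact derivative, no finite differences
```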

  17. Brown Adipose Reporting Criteria in Imaging STudies (BARCIST 1.0): Recommendations for Standardized FDG-PET/CT Experiments in Humans

    PubMed Central

    Chen, Kong Y.; Laughlin, Maren R.; Haft, Carol R.; Hu, Houchun Harry; Bredella, Miriam A.; Enerbäck, Sven; Kinahan, Paul E.; van Marken Lichtenbelt, Wouter; Lin, Frank I.; Sunderland, John J.; Virtanen, Kirsi A.; Wahl, Richard L.

    2016-01-01

    Human brown adipose tissue (BAT) presence, metabolic activity and estimated mass are typically measured by imaging [18F]fluorodeoxyglucose (FDG) uptake in response to cold exposure in regions of the body expected to contain BAT, using positron emission tomography combined with x-ray computed tomography (FDG-PET/CT). Efforts to describe the epidemiology and biology of human BAT are hampered by diverse experimental practices, making it difficult to directly compare results among laboratories. An expert panel was assembled by the National Institute of Diabetes and Digestive and Kidney Diseases on November 4, 2014 to discuss minimal requirements for conducting FDG-PET/CT experiments of human BAT, data analysis, and publication of results. This resulted in Brown Adipose Reporting Criteria in Imaging STudies (BARCIST 1.0). Since there are no fully-validated best practices at this time, panel recommendations are meant to enhance comparability across experiments, but not to constrain experimental design or the questions that can be asked. PMID:27508870

  18. Modeling and control for closed environment plant production systems

    NASA Technical Reports Server (NTRS)

    Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2002-01-01

    A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
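
    The cost structure described above (weighted control effort plus squared tracking error, minimized over bounded set-point adjustments) can be illustrated with a one-input toy problem. The linear growth surrogate, weights, and bounds below are assumptions standing in for the MPR crop models and the actual controller.

```python
# Toy one-step model-predictive adjustment for a single environmental set point.
# A linear surrogate replaces the crop-growth polynomials, and the cost is the
# weighted control effort plus the squared tracking error described above.
# All models, gains, weights, and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

horizon = 5
growth_ref = np.linspace(10.0, 15.0, horizon)     # desired growth trajectory (arbitrary units)
disturbance = -0.8                                # e.g., effect of an environmental perturbation

def predicted_growth(u):
    # Linear surrogate: each unit of extra set point adds 0.5 units of growth.
    return 10.0 + 0.5 * np.cumsum(u) + disturbance

def cost(u, w_track=1.0, w_effort=0.05):
    err = predicted_growth(u) - growth_ref
    return w_track * np.sum(err ** 2) + w_effort * np.sum(u ** 2)

res = minimize(cost, np.zeros(horizon), method="L-BFGS-B",
               bounds=[(0.0, 5.0)] * horizon)     # actuator limits on set-point changes
print("set-point adjustments:", np.round(res.x, 2))
print("predicted growth:     ", np.round(predicted_growth(res.x), 2))
```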

  19. Contribution of bioinformatics prediction in microRNA-based cancer therapeutics.

    PubMed

    Banwait, Jasjit K; Bastola, Dhundy R

    2015-01-01

    Despite enormous efforts, cancer remains one of the most lethal diseases in the world. With the advancement of high throughput technologies massive amounts of cancer data can be accessed and analyzed. Bioinformatics provides a platform to assist biologists in developing minimally invasive biomarkers to detect cancer, and in designing effective personalized therapies to treat cancer patients. Still, the early diagnosis, prognosis, and treatment of cancer are an open challenge for the research community. MicroRNAs (miRNAs) are small non-coding RNAs that serve to regulate gene expression. The discovery of deregulated miRNAs in cancer cells and tissues has led many to investigate the use of miRNAs as potential biomarkers for early detection, and as a therapeutic agent to treat cancer. Here we describe advancements in computational approaches to predict miRNAs and their targets, and discuss the role of bioinformatics in studying miRNAs in the context of human cancer. Published by Elsevier B.V.

  20. Towards practical control design using neural computation

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Mattern, Duane; Merrill, Walter

    1991-01-01

    The objective is to develop neural network based control design techniques which address the issue of performance/control effort tradeoff. Additionally, the control design needs to address the important issue of achieving adequate performance in the presence of actuator nonlinearities such as position and rate limits. These issues are discussed using the example of aircraft flight control. Given a set of pilot input commands, a feedforward net is trained to control the vehicle within the constraints imposed by the actuators. This is achieved by minimizing an objective function which is the sum of the tracking errors, control input rates and control input deflections. A tradeoff between tracking performance and control smoothness is obtained by varying, adaptively, the weights of the objective function. The neurocontroller performance is evaluated in the presence of actuator dynamics using a simulation of the vehicle. Appropriate selection of the different weights in the objective function resulted in the good tracking of the pilot commands and smooth neurocontrol. An extension of the neurocontroller design approach is proposed to enhance its practicality.

  1. CARS Thermometry in a Supersonic Combustor for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.

    2002-01-01

    An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. The nominal entrance Mach number is 2, and the enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector; the second is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions as well as accurate modeling of the turbulent mixing.

  2. Manual control models of industrial management

    NASA Technical Reports Server (NTRS)

    Crossman, E. R. F. W.

    1972-01-01

    The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and to minimize cost. Despite progress in computer science, most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area, since real-world production systems invariably include multilevel and multiloop control and are implemented by time-shared human effort. A modular structure incorporating certain new types of functional element has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast-time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.

  3. Design of a finger ring extremity dosemeter based on OSL readout of alpha-Al2O3:C.

    PubMed

    Durham, J S; Zhang, X; Payne, F; Akselrod, M S

    2002-01-01

    A finger-ring dosemeter and reader have been designed that use OSL readout of alpha-Al2O3:C (aluminium oxide). The use of aluminium oxide is important because it allows the sensitive element of the dosemeter to be a very thin layer that reduces the beta and gamma energy dependence to acceptable levels without compromising the required sensitivity for dose measurement. OSL readout allows the ring dosemeter to be interrogated with minimal disassembly. The ring dosemeter consists of three components: aluminium oxide powder for measurement of dose, an aluminium substrate that gives structure to the ring, and an aluminised Mylar cover to prevent the aluminium oxide from exposure to light. The thicknesses of the three components have been optimised for beta response using the Monte Carlo computer code FLUKA. A reader was also designed and developed that allows the dosemeter to be read after removing the Mylar. Future efforts are discussed.

  4. PaR-PaR Laboratory Automation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, G; Stawski, N; Poust, S

    2013-05-01

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  5. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake: Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design is included of the data analysis software and the performance of the rotating rake apparatus. The effect of experiment errors is also discussed.

  6. Acoustic mode measurements in the inlet of a model turbofan using a continuously rotating rake - Data collection/analysis techniques

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Heidelberg, Laurence; Konno, Kevin

    1993-01-01

    The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design is included of the data analysis software and the performance of the rotating rake apparatus. The effect of experiment errors is also discussed.

  7. Shifter: Containers for HPC

    NASA Astrophysics Data System (ADS)

    Gerhardt, Lisa; Bhimji, Wahid; Canon, Shane; Fasel, Markus; Jacobsen, Doug; Mustafa, Mustafa; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    Bringing HEP computing to HPC can be difficult. Software stacks are often very complicated with numerous dependencies that are difficult to get installed on an HPC system. To address this issue, NERSC has created Shifter, a framework that delivers Docker-like functionality to HPC. It works by extracting images from native formats and converting them to a common format that is optimally tuned for the HPC environment. We have used Shifter to deliver the CVMFS software stack for ALICE, ATLAS, and STAR on the supercomputers at NERSC. As well as enabling the distribution of multi-TB-sized CVMFS stacks to HPC, this approach also offers performance advantages. Software startup times are significantly reduced and load times scale with minimal variation to 1000s of nodes. We profile several successful examples of scientists using Shifter to make scientific analysis easily customizable and scalable. We will describe the Shifter framework and several efforts in HEP and NP to use Shifter to deliver their software on the Cori HPC system.

  8. Detection of magnetized quark-nuggets, a candidate for dark matter.

    PubMed

    VanDevender, J Pace; VanDevender, Aaron P; Sloan, T; Swaim, Criss; Wilson, Peter; Schmitt, Robert G; Zakirov, Rinat; Blum, Josh; Cross, James L; McGinley, Niall

    2017-08-18

    Quark nuggets are theoretical objects composed of approximately equal numbers of up, down, and strange quarks and are also called strangelets and nuclearites. They have been proposed as a candidate for dark matter, which constitutes ~85% of the universe's mass and which has been a mystery for decades. Previous efforts to detect quark nuggets assumed that the nuclear-density core interacts directly with the surrounding matter so the stopping power is minimal. Tatsumi found that quark nuggets could well exist as a ferromagnetic liquid with a ~10^12 T magnetic field. We find that the magnetic field produces a magnetopause with surrounding plasma, as the earth's magnetic field produces a magnetopause with the solar wind, and substantially increases their energy deposition rate in matter. We use the magnetopause model to compute the energy deposition as a function of quark-nugget mass and to analyze testing the quark-nugget hypothesis for dark matter by observations in air, water, and land. We conclude the water option is most promising.

  9. PaR-PaR laboratory automation platform.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Poust, Sean; Bi, Changhao; Keasling, Jay D; Hillson, Nathan J

    2013-05-17

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  10. Multimodal Neurodiagnostic Tool for Exploration Missions

    NASA Technical Reports Server (NTRS)

    Lee, Yong Jin

    2015-01-01

    Linea Research Corporation has developed a neurodiagnostic tool that detects behavioral stress markers for astronauts on long-duration space missions. Lightweight and compact, the device is unobtrusive and requires minimal time and effort for the crew to use. The system provides a real-time functional imaging of cortical activity during normal activities. In Phase I of the project, Linea Research successfully monitored cortical activity using multiparameter sensor modules. Using electroencephalography (EEG) and functional near-infrared spectroscopy signals, the company obtained photoplethysmography and electrooculography signals to compute the heart rate and frequency of eye movement. The company also demonstrated the functionality of an algorithm that automatically classifies the varying degrees of cognitive loading based on physiological parameters. In Phase II, Linea Research developed the flight-capable neurodiagnostic device. Worn unobtrusively on the head, the device detects and classifies neurophysiological markers associated with decrements in behavior state and cognition. An automated algorithm identifies key decrements and provides meaningful and actionable feedback to the crew and ground-based medical staff.

  11. Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara

    2012-01-01

    The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.

  12. Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree–Fock

    PubMed Central

    2018-01-01

    Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree–Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
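
    To illustrate the idea of automatic differentiation that underlies this approach, the sketch below implements a minimal forward-mode AD scheme with dual numbers in Python. It is an illustration of the general technique only; the toy function gaussian_overlap and its parameter alpha are hypothetical stand-ins and are not part of the DiffiQult package.

      # Minimal forward-mode automatic differentiation with dual numbers.
      # Illustrative only; DiffiQult's actual implementation differs.
      import math

      class Dual:
          """Number of the form a + b*eps with eps**2 == 0; b carries the derivative."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der

          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.der + other.der)
          __radd__ = __add__

          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.der * other.val + self.val * other.der)
          __rmul__ = __mul__

      def exp(x):
          return Dual(math.exp(x.val), math.exp(x.val) * x.der)

      def gaussian_overlap(alpha):
          # Toy stand-in for a quantity depending on a basis-set exponent alpha
          # (hypothetical); AD differentiates through it with no analytic derivation.
          return exp(-0.5 * alpha) * alpha

      # Seed der=1.0 to get d(gaussian_overlap)/d(alpha) at alpha = 1.2.
      g = gaussian_overlap(Dual(1.2, 1.0))
      print(g.val, g.der)   # value and exact derivative to machine precision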

  13. Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions

    ERIC Educational Resources Information Center

    Sessoms, John; Finney, Sara J.

    2015-01-01

    Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…

  14. Development and experimental assessment of a numerical modelling code to aid the design of profile extrusion cooling tools

    NASA Astrophysics Data System (ADS)

    Carneiro, O. S.; Rajkumar, A.; Fernandes, C.; Ferrás, L. L.; Habla, F.; Nóbrega, J. M.

    2017-10-01

    In the extrusion of thermoplastic profiles, after the forming stage that takes place in the extrusion die, the profile must be cooled in a metallic calibrator. This stage must proceed at a high rate to assure increased productivity, while avoiding the development of high temperature gradients, in order to minimize the level of induced thermal residual stresses. In this work, we present a new coupled numerical solver, developed in the framework of the OpenFOAM® computational library, that computes the temperature distribution in both domains (metallic calibrator and plastic profile) simultaneously; its implementation was aimed at minimizing computational time. The new solver was experimentally assessed with an industrial case study.

  15. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.
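
    The quadratic penalty device mentioned above can be illustrated in isolation. The sketch below applies a generic quadratic penalty to a toy constrained problem using SciPy; the objective and constraint are invented for illustration and are unrelated to the entropy-calibration functional or the HJB equation treated in the paper.

      # Generic quadratic-penalty treatment of a constraint: instead of enforcing
      # g(x) = 0 exactly (which may be numerically infeasible), add 0.5*rho*g(x)**2
      # to the objective and let rho grow.  Toy problem, not the paper's model.
      import numpy as np
      from scipy.optimize import minimize

      def objective(x):
          return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2      # smooth toy objective

      def constraint_violation(x):
          return x[0] + x[1] - 1.0                           # equality constraint g(x) = 0

      def penalized(x, rho):
          return objective(x) + 0.5 * rho * constraint_violation(x) ** 2

      x = np.zeros(2)
      for rho in [1.0, 10.0, 100.0, 1000.0]:                 # increase penalty weight gradually
          res = minimize(penalized, x, args=(rho,), method="BFGS")
          x = res.x
      print(x, constraint_violation(x))                      # violation shrinks as rho grows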

  16. Materials Genome Initiative

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2015-01-01

    The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior to accelerate process development and certification to more efficiently integrate new materials in existing NASA projects and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.

  17. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
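
    The following sketch illustrates the general root-search idea described above: scan the complex domain for candidate points where the minimum-modulus eigenvalue of the system matrix is small, then refine each candidate with a bound-constrained minimizer. The matrix A(k) and all thresholds below are hypothetical placeholders; this is not the CCOMP code itself.

      # Sketch of the root-search idea behind CCOMP: locate candidate points where the
      # minimum-modulus eigenvalue of the system matrix is small, then refine them with
      # a bound-constrained minimizer.  A(k) is a stand-in determinantal matrix.
      import numpy as np
      from scipy.optimize import minimize

      def A(k):
          # Hypothetical 2x2 system matrix; det A(k) = 0 at k = +/-1 and k = +/-2i.
          return np.array([[k**2 + 4.0, 0.0],
                           [0.0,        k**2 - 1.0]], dtype=complex)

      def min_mod_eig(re, im):
          return np.abs(np.linalg.eigvals(A(re + 1j * im))).min()

      # Coarse scan of the rectangle [-3,3] x [-3,3] for candidate points.
      grid = np.linspace(-3.0, 3.0, 61)
      candidates = [(x, y) for x in grid for y in grid if min_mod_eig(x, y) < 0.5]

      roots = set()
      for x0, y0 in candidates:
          sol = minimize(lambda p: min_mod_eig(p[0], p[1]), x0=[x0, y0],
                         method="L-BFGS-B", bounds=[(-3, 3), (-3, 3)])
          if sol.fun < 1e-3:                           # accept near-zeros of the determinant
              roots.add((round(sol.x[0], 2), round(sol.x[1], 2)))
      print(sorted(roots))                             # approx. (+/-1, 0) and (0, +/-2)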

  18. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  19. The Use of Chest Computed Tomographic Angiography in Blunt Trauma Pediatric Population.

    PubMed

    Hasadia, Rabea; DuBose, Joseph; Peleg, Kobi; Stephenson, Jacob; Givon, Adi; Kessel, Boris

    2018-02-05

    Blunt chest trauma in children is common. Although rare, associated major thoracic vascular injuries (TVIs) are lethal potential sequelae of these mechanisms. The preferred study for definitive diagnosis of TVI in stable patients is computed tomographic angiography imaging of the chest. This imaging modality is, however, associated with high doses of ionizing radiation that represent significant carcinogenic risk for pediatric patients. The aim of the present investigation was to define the incidence of TVI among blunt pediatric trauma patients in an effort to better elucidate the usefulness of computed tomographic angiography use in this population. A retrospective cohort study was conducted including all blunt pediatric (age < 14 y) trauma victims registered in the Israeli National Trauma Registry maintained by the Gertner Institute for Epidemiology and Health Policy Research between the years 1997 and 2015. Data collected included age, sex, mechanism of injury, Glasgow Coma Scale, Injury Severity Score, and incidence of chest named vessel injuries. Statistical analysis was performed using SAS statistical software version 9.2 (SAS Institute Inc, Cary, NC). Among 433,325 blunt trauma victims, 119,821 patients were younger than 14 years. Twelve (0.01%, 12/119,821) of these children were diagnosed with TVI. The most common mechanism in this group was pedestrian hit by a car. Mortality was 41.7% (5/12). Thoracic vascular injury is exceptionally rare among pediatric blunt trauma victims but does contribute to the high morbidity and mortality seen with blunt chest trauma. Computed tomographic angiography, with its associated radiation exposure risk, should not be used as a standard tool after trauma in injured children. Clinical protocols are needed in this population to minimize radiation risk while allowing prompt identification of life-threatening injuries.

  20. Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.

  1. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  2. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luther, Erik Paul; Leckie, Rafael M.; Dombrowski, David E.

    This supplemental report describes fuel fabrication efforts conducted for the Idaho National Laboratory Trade Study for the TREAT Conversion project that is exploring the replacement of the HEU (Highly Enriched Uranium) fuel core of the TREAT reactor with LEU (Low Enriched Uranium) fuel. Previous reports have documented fabrication of fuel by the “upgrade” process developed at Los Alamos National Laboratory. These experiments supplement an earlier report that describes efforts to increase the graphite content of extruded fuel and minimize cracking.

  4. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
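
    For readers unfamiliar with the group-sparsity penalty involved, the sketch below evaluates the ℓ2,1 norm of a fluence vector grouped by candidate beam, the quantity whose minimization drives whole beams out of the plan. The beam grouping and numbers are invented for illustration, and the gFCP penalty from the paper is not shown.

      # The l2,1 group norm used in sparse BAO formulations: one group per candidate
      # beam; beams whose whole fluence group is driven to zero drop out of the plan.
      # Generic illustration, not the gFCP implementation from the paper.
      import numpy as np

      def l21_norm(fluence, groups):
          """Sum over beams of the Euclidean norm of that beam's fluence variables."""
          return sum(np.linalg.norm(fluence[idx]) for idx in groups)

      # Hypothetical toy plan: 3 candidate beams with 4 beamlets each.
      fluence = np.array([0.0, 0.0, 0.0, 0.0,    # beam 1: unused -> contributes 0
                          1.0, 2.0, 0.5, 0.0,    # beam 2
                          0.2, 0.0, 0.3, 0.1])   # beam 3
      groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
      print(l21_norm(fluence, groups))           # only the two active beams contribute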

  5. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  6. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
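
    A minimal sketch of the kind of model described above is given below: it enumerates module-to-processor assignments and picks the one minimizing a completion-time estimate built from execution and inter-processor communication costs. The cost numbers are invented, and precedence relationships are ignored for brevity.

      # Toy exhaustive search for a module-to-processor allocation that minimizes
      # task completion time: per-module execution cost on each processor, plus a
      # communication cost whenever two communicating modules land on different CPUs.
      # Numbers are invented for illustration; the paper's models are more general.
      from itertools import product

      exec_cost = {               # exec_cost[module][processor]
          "m1": [4, 6], "m2": [5, 3], "m3": [2, 7],
      }
      comm_cost = {("m1", "m2"): 3, ("m2", "m3"): 2}   # cost if placed on different CPUs
      modules, n_proc = list(exec_cost), 2

      def completion_time(assign):
          # processor load = sum of execution costs; add inter-processor communication
          loads = [0.0] * n_proc
          for m, p in assign.items():
              loads[p] += exec_cost[m][p]
          comm = sum(c for (a, b), c in comm_cost.items() if assign[a] != assign[b])
          return max(loads) + comm

      best = min((dict(zip(modules, placement)) for placement in
                  product(range(n_proc), repeat=len(modules))),
                 key=completion_time)
      print(best, completion_time(best))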

  7. A Series of Molecular Dynamics and Homology Modeling Computer Labs for an Undergraduate Molecular Modeling Course

    ERIC Educational Resources Information Center

    Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.

    2010-01-01

    As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…

  8. Accelerating Use of Sustainable Materials in Transportation Infrastructure

    DOT National Transportation Integrated Search

    2016-05-01

    With the push towards sustainable design of highway infrastructure systems owners have shown interest in leveraging materials with minimal environmental impacts and extended service lives. Within this emphasis most efforts on sustainable material des...

  9. Construction Monitoring of Soft Ground Rapid Transit Tunnels : Volume 2. Appendixes.

    DOT National Transportation Integrated Search

    1974-11-01

    The Urban Mass Transportation Administration (UMTA) Tunneling Program Concentrates its efforts on reducing tunneling costs, minimizing environmental impact and enhancing safety as it applies to the planning, organization, design, construction and mai...

  10. 34 CFR 403.185 - How does the Secretary compute maintenance of effort in the event of a waiver?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    34 Education 3 (2010-07-01) — State Vocational and Applied Technology Education Program; What Financial Conditions Must Be Met by a State? — § 403.185 How does the Secretary compute maintenance of effort in the event of a waiver? (Regulations of the Offices of the Department of Education)

  11. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  12. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Internet applications have become so complicated that mobile devices need more computing resources to achieve shorter execution times, but they are restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite-resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines that match the resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
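
    The core offloading trade-off can be sketched as a simple execution-time comparison: run a task locally, or pay the data-transfer time of a link and run it on a faster cloud processor. The parameters below are hypothetical, and the sketch omits the link- and machine-selection logic of the MOTM and METC algorithms.

      # Basic execution-time comparison behind offloading decisions: run locally, or
      # ship the input data over a link and run in the cloud.  Parameters are
      # hypothetical; the paper's MOTM/METC algorithms add link and VM selection.
      def local_time(cycles, f_local):
          return cycles / f_local

      def offload_time(cycles, data_bits, bandwidth, f_cloud):
          return data_bits / bandwidth + cycles / f_cloud    # upload + remote compute

      task = dict(cycles=2e9, data_bits=8e6)                 # 2 Gcycles of work, 1 MB input
      links = {"WiFi": 50e6, "LTE": 10e6}                    # available bandwidths in bit/s

      options = {"local": local_time(task["cycles"], f_local=1e9)}
      options.update({name: offload_time(task["cycles"], task["data_bits"], bw, f_cloud=8e9)
                      for name, bw in links.items()})
      best = min(options, key=options.get)
      print(options, "->", best)                             # pick the minimum-time option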

  13. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  14. Extending Landauer's bound from bit erasure to arbitrary computation

    NASA Astrophysics Data System (ADS)

    Wolpert, David

    The minimal thermodynamic work required to erase a bit, known as Landauer's bound, has been extensively investigated both theoretically and experimentally. However, when viewed as a computation that maps inputs to outputs, bit erasure has a very special property: the output does not depend on the input. Existing analyses of thermodynamics of bit erasure implicitly exploit this property, and thus cannot be directly extended to analyze the computation of arbitrary input-output maps. Here we show how to extend these earlier analyses of bit erasure to analyze the thermodynamics of arbitrary computations. Doing this establishes a formal connection between the thermodynamics of computers and much of theoretical computer science. We use this extension to analyze the thermodynamics of the canonical ``general purpose computer'' considered in computer science theory: a universal Turing machine (UTM). We consider a UTM which maps input programs to output strings, where inputs are drawn from an ensemble of random binary sequences, and prove: i) The minimal work needed by a UTM to run some particular input program X and produce output Y is the Kolmogorov complexity of Y minus the log of the ``algorithmic probability'' of Y. This minimal amount of thermodynamic work has a finite upper bound, which is independent of the output Y, depending only on the details of the UTM. ii) The expected work needed by a UTM to compute some given output Y is infinite. As a corollary, the overall expected work to run a UTM is infinite. iii) The expected work needed by an arbitrary Turing machine T (not necessarily universal) to compute some given output Y can either be infinite or finite, depending on Y and the details of T. To derive these results we must combine ideas from nonequilibrium statistical physics with fundamental results from computer science, such as Levin's coding theorem and other theorems about universal computation. I would like to acknowledge the Santa Fe Institute, Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation, Grant No. FQXi-RHl3-1349 from the FQXi foundation, and Grant No. CHE-1648973 from the U.S. National Science Foundation.

  15. Equal Time for Women.

    ERIC Educational Resources Information Center

    Kolata, Gina

    1984-01-01

    Examines social influences which discourage women from pursuing studies in computer science, including monopoly of computer time by boys at the high school level, sexual harassment in college, movies, and computer games. Describes some initial efforts to encourage females of all ages to study computer science. (JM)

  16. Technique minimizes the effects of dropouts on telemetry records

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.; Hurd, W. J.

    1972-01-01

    Recorder deficiencies are minimized by using two-channel system to prepare two tapes, each having noise, wow and flutter, and dropout characteristics of channel on which it was made. Processing tapes by computer and combining signals from two channels produce single tape free of dropouts caused by recording process.

  17. Cognitive capacity limitations and Need for Cognition differentially predict reward-induced cognitive effort expenditure.

    PubMed

    Sandra, Dasha A; Otto, A Ross

    2018-03-01

    While psychological, economic, and neuroscientific accounts of behavior broadly maintain that people minimize expenditure of cognitive effort, empirical work reveals how reward incentives can mobilize increased cognitive effort expenditure. Recent theories posit that the decision to expend effort is governed, in part, by a cost-benefit tradeoff whereby the potential benefits of mental effort can offset the perceived costs of effort exertion. Taking an individual differences approach, the present study examined whether one's executive function capacity, as measured by Stroop interference, predicts the extent to which reward incentives reduce switch costs in a task-switching paradigm, which indexes additional expenditure of cognitive effort. In accordance with the predictions of a cost-benefit account of effort, we found that a low executive function capacity-and, relatedly, a low intrinsic motivation to expend effort (measured by Need for Cognition)-predicted larger increase in cognitive effort expenditure in response to monetary reward incentives, while individuals with greater executive function capacity-and greater intrinsic motivation to expend effort-were less responsive to reward incentives. These findings suggest that an individual's cost-benefit tradeoff is constrained by the perceived costs of exerting cognitive effort. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry V.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. Microporous bulk insulation is used in the ground support test hardware to minimize the loss of thermal energy from the electric heat source to the environment. The insulation package is characterized before operation to predict how much heat will be absorbed by the convertor and how much will be lost to the environment during operation. In an effort to validate these predictions, numerous tasks have been performed, which provided a more accurate value for net heat input into the ASCs. This test and modeling effort included: (a) making thermophysical property measurements of test setup materials to provide inputs to the numerical models, (b) acquiring additional test data that was collected during convertor tests to provide numerical models with temperature profiles of the test setup via thermocouple and infrared measurements, (c) using multidimensional numerical models (computational fluid dynamics code) to predict net heat input of an operating convertor, and (d) using validation test hardware to provide direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input. This effort produced high fidelity ASC net heat input predictions, which were successfully validated using specially designed test hardware enabling measurement of heat transferred through a simulated Stirling cycle. The overall effort and results are discussed.

  19. Inferring the Minimal Genome of Mesoplasma florum by Comparative Genomics and Transposon Mutagenesis.

    PubMed

    Baby, Vincent; Lachance, Jean-Christophe; Gagnon, Jules; Lucier, Jean-François; Matteau, Dominick; Knight, Tom; Rodrigue, Sébastien

    2018-01-01

    The creation and comparison of minimal genomes will help better define the most fundamental mechanisms supporting life. Mesoplasma florum is a near-minimal, fast-growing, nonpathogenic bacterium potentially amenable to genome reduction efforts. In a comparative genomic study of 13 M. florum strains, including 11 newly sequenced genomes, we have identified the core genome and open pangenome of this species. Our results show that all of the strains have approximately 80% of their gene content in common. Of the remaining 20%, 17% of the genes were found in multiple strains and 3% were unique to any given strain. On the basis of random transposon mutagenesis, we also estimated that ~290 out of 720 genes are essential for M. florum L1 in rich medium. We next evaluated different genome reduction scenarios for M. florum L1 by using gene conservation and essentiality data, as well as comparisons with the first working approximation of a minimal organism, Mycoplasma mycoides JCVI-syn3.0. Our results suggest that 409 of the 473 M. mycoides JCVI-syn3.0 genes have orthologs in M. florum L1. Conversely, 57 putatively essential M. florum L1 genes have no homolog in M. mycoides JCVI-syn3.0. This suggests differences in minimal genome compositions, even for these evolutionarily closely related bacteria. IMPORTANCE The last years have witnessed the development of whole-genome cloning and transplantation methods and the complete synthesis of entire chromosomes. Recently, the first minimal cell, Mycoplasma mycoides JCVI-syn3.0, was created. Despite these milestone achievements, several questions remain to be answered. For example, is the composition of minimal genomes virtually identical in phylogenetically related species? On the basis of comparative genomics and transposon mutagenesis, we investigated this question by using an alternative model, Mesoplasma florum, that is also amenable to genome reduction efforts. Our results suggest that the creation of additional minimal genomes could help reveal different gene compositions and strategies that can support life, even within closely related species.

  20. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
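
    As a minimal illustration of the risk measure used, the sketch below estimates the expected cost and the Conditional Value-at-Risk of a set of simulated execution costs; the cost samples are synthetic, and the parametric strategy optimization itself is not shown.

      # Monte Carlo estimate of expected execution cost and CVaR (the average of the
      # worst (1 - alpha) fraction of simulated costs).  The cost samples are synthetic;
      # the paper optimizes parametric execution strategies against such estimates.
      import numpy as np

      rng = np.random.default_rng(0)
      costs = rng.normal(loc=1.0e5, scale=2.0e4, size=100_000)   # simulated execution costs

      def cvar(samples, alpha=0.95):
          var = np.quantile(samples, alpha)       # Value-at-Risk at level alpha
          tail = samples[samples >= var]          # worst (1 - alpha) of the outcomes
          return tail.mean()

      print("expected cost:", costs.mean())
      print("95% CVaR     :", cvar(costs, 0.95))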

  1. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. By taking advantage of the extended vision of the coalition agents, CBS improves global optimization and reduces communication cost. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.
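
    The performance measures reported above have standard definitions, sketched below for a toy set of finished parts; the release, completion, and due times are invented and no scheduling logic is implied.

      # Standard definitions of the measures reported for intercell schedules:
      # mean tardiness, makespan (maximum completion time), mean flow time, and the
      # ratio of tardy parts.  Release/completion/due times below are invented.
      parts = [  # (release time, completion time, due date)
          (0.0, 12.0, 10.0),
          (2.0,  9.0, 15.0),
          (1.0, 20.0, 18.0),
      ]

      tardiness  = [max(0.0, c - d) for _, c, d in parts]
      flow_times = [c - r for r, c, _ in parts]

      print("mean tardiness :", sum(tardiness) / len(parts))
      print("makespan       :", max(c for _, c, _ in parts))
      print("mean flow time :", sum(flow_times) / len(parts))
      print("tardy ratio    :", sum(t > 0 for t in tardiness) / len(parts))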

  2. Combining Computational and Social Effort for Collaborative Problem Solving

    PubMed Central

    Wagy, Mark D.; Bongard, Josh C.

    2015-01-01

    Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond either of them. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199

  3. Computing Cluster for Large Scale Turbulence Simulations and Applications in Computational Aeroacoustics

    NASA Astrophysics Data System (ADS)

    Lele, Sanjiva K.

    2002-08-01

    Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.

  4. Environmental considerations for 3D seismic in Louisiana wetlands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browning, G.; Dillane, T.; Baaren, P. van

    1996-11-01

    Louisiana swamps have been host to seismic crews for many years. Results from recent 3D surveys indicate that well planned and executed seismic operations have a minimal and short term impact in these environmentally sensitive wetlands. Pre-planning identifies challenges that require use of improved technology and work procedures. These include multi-channel radio telemetry recording systems, ramming of dynamite and hydrophones as opposed to drilling, DGPS positioning and coordinated use of Airboats, buggies and helicopters. In addition to minimal environmental impact, increased data quality, reduced cost and shorter project duration have been achieved as a result of these efforts. Unlike 2D surveys, where profile positioning is flexible, 3D surveys involve high density coverage over many square miles operated by numerous personnel. Survey design includes minimizing repeated traffic and crossing points. Survey operations require environmental participation and commitment from every person involved in the project. This includes a thorough orientation and training program with strong emphasis on environmental sensitivity and awareness. Close co-ordination between regulatory agencies, clients and the contractor is a key factor in all aspects of the survey planning and operation. Benefits from these efforts are significant, measurable and continue to improve.

  5. Revitalization model of tapioca industry through environmental awareness reinforcement for minimizing water body contamination

    NASA Astrophysics Data System (ADS)

    Banowati, E.; Indriyanti, D. R.; Juhadi

    2018-03-01

    The tapioca industry in Margoyoso District is a household industry that contributes positively to the growth of the region's economy, absorbing 6.61% of the productive-age population, or about 3,300 workers. On the other hand, the industry contaminates river water by discharging dissolved pollutants and particulates into water bodies, so that water quality decreases and the water may no longer meet the requirements for irrigation or agricultural use. The purpose of this research is to strengthen environmental awareness, calculate the success of the reinforcement action, and minimize water body contamination. The research was conducted in two villages of the tapioca industry center in Margoyoso District, Pati Regency Administration Area. The coefficient of determination (R squared) is 0.802, indicating a successful effort of 80.2%, with regression equation Y = 34.097 + 0.608 X. Industrial entrepreneurs' concern increased by 8.45 points on the total indicator, to 70.72, so the gradual effort succeeded in reducing water contamination of the Suwatu River. The tapioca business community should install wastewater treatment facilities.

  6. Pulmonary Venous Anatomy Imaging with Low-Dose, Prospectively ECG-Triggered, High-Pitch 128-Slice Dual Source Computed Tomography

    PubMed Central

    Thai, Wai-ee; Wai, Bryan; Lin, Kaity; Cheng, Teresa; Heist, E. Kevin; Hoffmann, Udo; Singh, Jagmeet; Truong, Quynh A.

    2012-01-01

    Background: Efforts to reduce radiation from cardiac computed tomography (CT) are essential. Using a prospectively triggered, high-pitch dual source CT (DSCT) protocol, we aim to determine the radiation dose and image quality (IQ) in patients undergoing pulmonary vein (PV) imaging. Methods and Results: In 94 patients (61±9 years, 71% male) who underwent 128-slice DSCT (pitch 3.4), radiation dose and IQ were assessed and compared between 69 patients in sinus rhythm (SR) and 25 in atrial fibrillation (AF). Radiation dose was compared in a subset of 19 patients with prior retrospective or prospectively triggered CT PV scans without high-pitch. In a subset of 18 patients with prior magnetic resonance imaging (MRI) for PV assessment, PV anatomy and scan duration were compared to high-pitch CT. Using the high-pitch protocol, total effective radiation dose was 1.4 [1.3, 1.9] mSv, with no difference between SR and AF (1.4 vs 1.5 mSv, p=0.22). No high-pitch CT scans were non-diagnostic or had poor IQ. Radiation dose was reduced with high-pitch (1.6 mSv) compared to standard protocols (19.3 mSv, p<0.0001). This radiation dose reduction was seen with SR (1.5 vs 16.7 mSv, p<0.0001) but was more profound with AF (1.9 vs 27.7 mSv, p=0.039). There was excellent agreement of PV anatomy (kappa 0.84, p<0.0001), and a shorter CT scan duration (6 minutes) compared to MRI (41 minutes, p<0.0001). Conclusions: Using a high-pitch DSCT protocol, PV imaging can be performed with minimal radiation dose, short scan acquisition, and excellent IQ in patients with SR or AF. This protocol highlights the success of new cardiac CT technology to minimize radiation exposure, giving clinicians a new low-dose imaging alternative to assess PV anatomy. PMID:22586259

  7. Estimation of the effective heating systems radius as a method of the reliability improving and energy efficiency

    NASA Astrophysics Data System (ADS)

    Akhmetova, I. G.; Chichirova, N. D.

    2017-11-01

    When conducting an energy survey of a heat supply enterprise that operates several boilers located close to each other, it is advisable to assess the heat supply efficiency of each individual boiler, the possibility of reducing energy consumption across the enterprise by switching consumers to a more efficient source, and the option of closing ineffective boilers. The temporal dynamics of prospective load connections and changing market conditions must also be considered. To solve this problem, the radius of effective heat supply from each thermal energy source can be calculated. The disadvantage of existing methods is their high complexity and the need to collect large amounts of source data and perform a significant amount of computation. When conducting an energy survey of a heat supply enterprise operating a large number of thermal energy sources, a rapid assessment of the effective heating radius is required. Taking into account the specifics and objectives of an energy survey, a method for calculating the effective heat supply radius for use during an energy audit should rely on openly available data from the heat supply organization and minimize effort, while still matching the results obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of cost for generation and transmission of thermal energy and the capital investment needed to connect new consumers were determined. The results were compared with the values obtained with previously known methods. The suggested express method makes it possible to determine the effective radius of centralized heat supply from heat sources when conducting energy audits, with minimum effort and the required accuracy.

  8. Scattering Amplitudes, the AdS/CFT Correspondence, Minimal Surfaces, and Integrability

    DOE PAGES

    Alday, Luis F.

    2010-01-01

    In the first part of this paper we focus on the computation of scattering amplitudes of planar maximally supersymmetric Yang-Mills theory in four dimensions at strong coupling by means of the AdS/CFT correspondence and explain how the problem boils down to the computation of minimal surfaces in AdS. In the second part of this review we explain how integrability allows one to give a solution to the problem in terms of a set of integral equations. The intention of the review is to give a pedagogical, rather than very detailed, exposition.

  9. Eric Bonnema | NREL

    Science.gov Websites

    contributes to the research efforts for commercial buildings. This effort is dedicated to studying commercial-sector whole-building energy simulation, scientific computing, and software configuration ...

  10. LOX, GOX and Pressure Relief

    NASA Technical Reports Server (NTRS)

    McLeod, Ken; Stoltzfus, Joel

    2006-01-01

    Oxygen relief systems present a serious fire hazard with often severe consequences. This presentation offers a risk management strategy that encourages minimizing ignition hazards, using the best materials, and following good practices. Additionally, the relief system should be designed for cleanability and ballistic flow. The use of the right metals, softgoods, and lubricants, along with the best assembly techniques, is stressed. Materials should also be tested if data is not available, and a full hazard analysis should be conducted in an effort to minimize risk and harm.

  11. The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principle

    NASA Astrophysics Data System (ADS)

    Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi

    2017-06-01

    The behaviour of a host-vector model of dengue disease without control is analyzed on the basis of the basic reproduction number, obtained using next-generation matrices. Furthermore, the model is developed further by involving a preventive control to minimize the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality system is then solved numerically to investigate the control effort needed to reduce the infected class.

  12. 7 CFR 3430.36 - Procedures to minimize or eliminate duplication of effort.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Business processes may include the review of the Current and Pending Support Form; documented CRIS searches... participation in Federal Government-wide and other committees, taskforces, or groups that seek to solve problems...

  13. 7 CFR 3430.36 - Procedures to minimize or eliminate duplication of effort.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Business processes may include the review of the Current and Pending Support Form; documented CRIS searches... participation in Federal Government-wide and other committees, taskforces, or groups that seek to solve problems...

  14. 7 CFR 3430.36 - Procedures to minimize or eliminate duplication of effort.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... Business processes may include the review of the Current and Pending Support Form; documented CRIS searches... participation in Federal Government-wide and other committees, taskforces, or groups that seek to solve problems...

  15. Multi-Physics Computational Grains (MPCGs): Newly-Developed Accurate and Efficient Numerical Methods for Micromechanical Modeling of Multifunctional Materials and Composites

    NASA Astrophysics Data System (ADS)

    Bishay, Peter L.

    This study presents a new family of highly accurate and efficient computational methods for modeling the multi-physics of multifunctional materials and composites in the micro-scale named "Multi-Physics Computational Grains" (MPCGs). Each "mathematical grain" has a random polygonal/polyhedral geometrical shape that resembles the natural shapes of the material grains in the micro-scale where each grain is surrounded by an arbitrary number of neighboring grains. The physics that are incorporated in this study include: Linear Elasticity, Electrostatics, Magnetostatics, Piezoelectricity, Piezomagnetism and Ferroelectricity. However, the methods proposed here can be extended to include more physics (thermo-elasticity, pyroelectricity, electric conduction, heat conduction, etc.) in their formulation, different analysis types (dynamics, fracture, fatigue, etc.), nonlinearities, different defect shapes, and some of the 2D methods can also be extended to 3D formulation. We present "Multi-Region Trefftz Collocation Grains" (MTCGs) as a simple and efficient method for direct and inverse problems, "Trefftz-Lekhnitskii Computational Gains" (TLCGs) for modeling porous and composite smart materials, "Hybrid Displacement Computational Grains" (HDCGs) as a general method for modeling multifunctional materials and composites, and finally "Radial-Basis-Functions Computational Grains" (RBFCGs) for modeling functionally-graded materials, magneto-electro-elastic (MEE) materials and the switching phenomena in ferroelectric materials. The first three proposed methods are suitable for direct numerical simulation (DNS) of the micromechanics of smart composite/porous materials with non-symmetrical arrangement of voids/inclusions, and provide minimal effort in meshing and minimal time in computations, since each grain can represent the matrix of a composite and can include a pore or an inclusion. The last three methods provide stiffness matrix in their formulation and hence can be readily implemented in a finite element routine. Several numerical examples are provided to show the ability and accuracy of the proposed methods to determine the effective material properties of different types of piezo-composites, and detect the damage-prone sites in a microstructure under certain loading types. The last method (RBFCGs) is also suitable for modeling the switching phenomena in ferro-materials (ferroelectric, ferromagnetic, etc.) after incorporating a certain nonlinear constitutive model and a switching criterion. Since the interaction between grains during loading cycles has a profound influence on the switching phenomena, it is important to simulate the grains with geometrical shapes that are similar to the real shapes of grains as seen in lab experiments. Hence the use of the 3D RBFCGs, which allow for the presence of all the six variants of the constitutive relations, together with the randomly generated crystallographic axes in each grain, as done in the present study, is considered to be the most realistic model that can be used for the direct mesoscale numerical simulation (DMNS) of polycrystalline ferro-materials.

  16. High-Performance Polyimide Powder Coatings

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Much of the infrastructure at Kennedy Space Center and other NASA sites has been subjected to outside weathering effects for more than 40 years. Because much of this infrastructure has metallic surfaces, considerable effort is continually devoted to developing methods to minimize the effects of corrosion on these surfaces. These efforts are especially intense at KSC, where offshore salt spray and exhaust from Solid Rocket Boosters accelerate corrosion. Coatings of various types have traditionally been the choice for minimizing corrosion, and improved corrosion control methods are constantly being researched. Recent work at KSC on developing an improved method for repairing Kapton (polyimide)-based electrical wire insulation has identified polyimides with much lower melting points than traditional polyimides used for insulation. These lower melting points and the many other outstanding physical properties of polyimides (thermal stability, chemical resistance, and electrical properties) led us to investigate whether they could be used in powder coatings.

  17. The Design & Development of the Ocean Color Instrument Precision Superduplex Hybrid Bearing Cartridge

    NASA Technical Reports Server (NTRS)

    Schepis, Joseph; Woodard, Timothy; Hakun, Claef; Bergandy, Konrad; Church, Joseph; Ward, Peter; Lee, Michael; Conti, Alfred; Guzek, Jeffrey

    2018-01-01

    A high precision, high-resolution Ocean Color Imaging (OCI) instrument is under development for the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission, which requires a pair of medium-speed mechanisms to scan the ocean surface continuously. The design of the rotating telescope (RT) mechanism operating at 360 RPM and the half-angle mirror (HAM) mechanism synchronized at 180 RPM raised concerns about maintaining pointing precision over the required life and continuous operation. An effort was undertaken with the manufacturer to design and analyze a special bearing configuration to minimize axial and radial runout, minimize torque, maintain nominal contact stresses and stiffness over the operating temperature range, and maximize life. The bearing design, development effort, analysis, and testing will be discussed, as will the technical challenges that this specific design imposed upon the mechanism engineers. Bearing performance, the runout achieved and verified during encoder installation, and the operating torque will be described.

  18. New Frameworks for Detecting and Minimizing Information Leakage in Anonymized Network Data

    DTIC Science & Technology

    2011-10-01

    researcher the exact extent to which a particular utility is affected by the anonymization. For instance, Karr et al.’s use of the Kullback-Leibler ...technical, legal, policy, and privacy issues limit the ability of operators to produce data sets for information security testing. In an effort to help...

  19. Aerosolized intranasal midazolam for safe and effective sedation for quality computed tomography imaging in infants and children.

    PubMed

    Mekitarian Filho, Eduardo; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Robinson, Fay; Mason, Keira P

    2013-10-01

    This pilot study introduces the aerosolized route for midazolam as an option for infant and pediatric sedation for computed tomography imaging. This technique produced predictable and effective sedation for quality computed tomography imaging studies with minimal artifact and no significant adverse events. Copyright © 2013 Mosby, Inc. All rights reserved.

  20. Essential Computer Competence for Beginning Teachers.

    ERIC Educational Resources Information Center

    Kay, Patricia M.; Byrne, Michael M.

    The rapid proliferation of the use of computers in schools has led to questions about how teachers should be educated to deal with the new technology and what all teachers should be required to demonstrate in the way of minimal essential computer skills as a condition of certification. This paper addresses these questions by proposing a set of…

  1. Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications

    DTIC Science & Technology

    2011-05-31

    ...issue involved. Towards this end, a suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via a Lorentz force. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.)...

  2. Computer program user's manual for FIREFINDER digital topographic data verification library dubbing system

    NASA Astrophysics Data System (ADS)

    Ceres, M.; Heselton, L. R., III

    1981-11-01

    This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), and will assist in the maintenance of these programs. The manual contains detailed flow diagrams and associated descriptions for each computer program routine and subroutine. Complete computer program listings are also included. This information should be used when changes are made in the computer programs. The operating system has been designed to minimize operator intervention.

  3. A General Approach to Measuring Test-Taking Effort on Computer-Based Tests

    ERIC Educational Resources Information Center

    Wise, Steven L.; Gao, Lingyun

    2017-01-01

    There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…

  4. 2016 Institutional Computing Progress Report for w14_firetec

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Judith W.; Linn, Rodman

    2016-07-14

    This is a computing progress report for w14_firetec. FIRETEC simulations will explore prescribed fire ignition methods that achieve burning objectives (understory reduction and ecosystem health) while minimizing the risk of escaped fire.

  5. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
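
    For readers unfamiliar with the communication strategies mentioned above, the Python sketch below illustrates only the generic manager-worker pattern; it is not the TEAM/PVM implementation, and the per-block computation is a stand-in.

    ```python
    # Minimal manager-worker sketch: a manager farms independent "block" tasks
    # out to a pool of workers and gathers the results (illustrative only).
    from multiprocessing import Pool

    def solve_block(block_id):
        # Stand-in for a per-block flow computation; here just a dummy reduction.
        return block_id, sum(i * i for i in range(1000))

    if __name__ == "__main__":
        blocks = list(range(16))                     # 16 grid blocks to distribute
        with Pool(processes=4) as pool:              # 4 worker processes
            results = pool.map(solve_block, blocks)  # manager scatters and gathers
        print(dict(results))
    ```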

  6. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  7. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  9. Condition-dependent reproductive effort in frogs infected by a widespread pathogen

    PubMed Central

    Roznik, Elizabeth A.; Sapsford, Sarah J.; Pike, David A.; Schwarzkopf, Lin; Alford, Ross A.

    2015-01-01

    To minimize the negative effects of an infection on fitness, hosts can respond adaptively by altering their reproductive effort or by adjusting their timing of reproduction. We studied effects of the pathogenic fungus Batrachochytrium dendrobatidis on the probability of calling in a stream-breeding rainforest frog (Litoria rheocola). In uninfected frogs, calling probability was relatively constant across seasons and body conditions, but in infected frogs, calling probability differed among seasons (lowest in winter, highest in summer) and was strongly and positively related to body condition. Infected frogs in poor condition were up to 40% less likely to call than uninfected frogs, whereas infected frogs in good condition were up to 30% more likely to call than uninfected frogs. Our results suggest that frogs employed a pre-existing, plastic, life-history strategy in response to infection, which may have complex evolutionary implications. If infected males in good condition reproduce at rates equal to or greater than those of uninfected males, selection on factors affecting disease susceptibility may be minimal. However, because reproductive effort in infected males is positively related to body condition, there may be selection on mechanisms that limit the negative effects of infections on hosts. PMID:26063847

  10. Quality improvement in hospitals: how much does it reduce healthcare costs?

    PubMed

    Jones, S B

    1995-01-01

    The philosophy of W.E. Deming suggests that continuous quality improvement efforts, when properly applied, ultimately will lead to financial dividends and will help ensure business longevity. Reducing hospital charges can be exciting for the participants and can provide an impetus for expanding quality improvement efforts. Americans, however, tend to demand almost instant gratification and have limited patience for longer-term results. This factor, coupled with minimal knowledge of actual operational costs and inaccurate charge accounting systems, may lead hospital managers to misinterpret the potential net long-term effects of their quality improvement efforts. In the approaching environment of capitated reimbursement, such mistakes may have serious consequences.

  11. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
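
    As a generic illustration of the surrogate idea described above, and not SCMT itself, the Python sketch below fits a small neural network to synthetic "pose to contact force" samples; the three pose inputs, the stand-in contact model, and the network size are all assumptions made for the example.

    ```python
    # Illustrative surrogate fit (not SCMT): replace an "expensive" contact model
    # with a fast regressor trained on sampled input-output data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    pose = rng.uniform(-1.0, 1.0, size=(500, 3))              # 3 relative-pose inputs
    force = np.sin(pose[:, 0]) + pose[:, 1] ** 2 - 0.5 * pose[:, 2]  # stand-in model

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    surrogate.fit(pose[:400], force[:400])                    # train on sampled data

    err = np.abs(surrogate.predict(pose[400:]) - force[400:])
    print("mean absolute surrogate error:", err.mean())
    ```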

  12. Evaluating MRI based vascular wall motion as a biomarker of Fontan hemodynamic performance

    NASA Astrophysics Data System (ADS)

    Menon, Prahlad G.; Hong, Haifa

    2015-03-01

    The Fontan procedure for single-ventricle heart disease involves creation of pathways to divert venous blood from the superior and inferior venae cavae (SVC, IVC) directly into the pulmonary arteries (PA), bypassing the right ventricle. For optimal surgical outcomes, venous flow energy loss in the resulting vascular construction must be minimized, and ensuring a close to equal flow distribution from the Fontan conduit connecting the IVC to the left and right PA is paramount. This requires patient-specific hemodynamic evaluation using computational fluid dynamics (CFD) simulations, which are often time and resource intensive, limiting applicability for real-time patient management in the clinic. In this study, we report preliminary efforts at identifying a new non-invasive imaging-based surrogate for CFD-simulated hemodynamics. We establish correlations between computed hemodynamic criteria from CFD modeling and cumulative wall displacement characteristics of the Fontan conduit quantified from cine cardiovascular MRI segmentations over time (i.e. 20 cardiac phases gated from the start of ventricular systole), in 5 unique Fontan surgical connections. To focus our attention on diameter variations while discounting side-to-side swaying motion of the Fontan conduit, the difference between its instantaneous regional expansion and inward contraction (averaged across the conduit) was computed and analyzed. Maximum Fontan conduit-average expansion over the cardiac cycle correlated with the anatomy-specific diametric offset between the axis of the IVC and SVC (r2=0.13, p=0.55) - a known factor correlated with Fontan energy loss and IVC-to-PA flow distribution. Investigation in a larger study cohort is needed to establish stronger statistical correlations.

  13. Early classification of pathological heartbeats on wireless body sensor nodes.

    PubMed

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-11-27

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.
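
    The dimensionality-reduction step mentioned above can be illustrated generically (this is not the paper's WBSN implementation): each heartbeat window is projected onto a fixed Gaussian random matrix, as in the Python sketch below, where the window length and target dimension are assumed values.

    ```python
    # Generic random-projection sketch: reduce each "heartbeat" window from 200
    # samples to 16 dimensions with one fixed Gaussian random matrix.
    import numpy as np

    rng = np.random.default_rng(42)
    n_beats, n_samples, k = 1000, 200, 16

    beats = rng.normal(size=(n_beats, n_samples))       # stand-in ECG beat windows
    projection = rng.normal(size=(n_samples, k)) / np.sqrt(k)

    reduced = beats @ projection                        # k-dimensional representations
    print(beats.shape, "->", reduced.shape)             # (1000, 200) -> (1000, 16)
    ```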

  14. Early Classification of Pathological Heartbeats on Wireless Body Sensor Nodes

    PubMed Central

    Braojos, Rubén; Beretta, Ivan; Ansaloni, Giovanni; Atienza, David

    2014-01-01

    Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify in the early stages which parts of an ECG are critical for the diagnosis and, only in these cases, activate on demand more detailed and computationally intensive analysis algorithms. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the heartbeat representation dimensionality, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with a minimal run time and memory overhead. We prove that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with respect to other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections. PMID:25436654

  15. Development of type transfer functions for regional-scale nonpoint source groundwater vulnerability assessments

    NASA Astrophysics Data System (ADS)

    Stewart, Iris T.; Loague, Keith

    2003-12-01

    Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil-texture-based travel-time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparisons to process-based simulations show that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by the given application.
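
    To make the transfer-function idea concrete without reproducing the paper's calibrated TTFs, the Python sketch below convolves an assumed surface solute pulse with an assumed lognormal travel-time density to obtain a breakthrough curve at the depth of interest; all parameter values are hypothetical.

    ```python
    # Illustrative transfer-function sketch (not the calibrated TTFs): convolve a
    # surface input pulse with a lognormal travel-time density.
    import numpy as np

    t = np.arange(1, 365, dtype=float)                 # days
    mu, sigma = np.log(60.0), 0.5                      # assumed travel-time parameters
    ttf = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))
    ttf /= ttf.sum()                                   # normalize the discrete density

    surface_input = np.zeros_like(t)
    surface_input[:10] = 1.0                           # 10-day application pulse

    breakthrough = np.convolve(surface_input, ttf)[: t.size]
    peak_day = int(t[breakthrough.argmax()])
    print("relative peak concentration:", round(breakthrough.max(), 4), "on day", peak_day)
    ```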

  16. Applying Minimal Manual Principles for Documentation of Graphical User Interfaces.

    ERIC Educational Resources Information Center

    Nowaczyk, Ronald H.; James, E. Christopher

    1993-01-01

    Investigates the need to include computer screens in documentation for software using a graphical user interface. Describes the uses and purposes of "minimal manuals" and their principles. Studies student reaction to their use of one of three on-screen manuals: screens, icon, and button. Finds some benefit for including icon and button…

  17. Probabilistic inspection strategies for minimizing service failures

    NASA Technical Reports Server (NTRS)

    Brot, Abraham

    1994-01-01

    The INSIM computer program is described which simulates the 'limited fatigue life' environment in which aircraft structures generally operate. The use of INSIM to develop inspection strategies which aim to minimize service failures is demonstrated. Damage-tolerance methodology, inspection thresholds and customized inspections are simulated using the probability of failure as the driving parameter.

  18. Geometric modeling of space-optimal unit-cell-based tissue engineering scaffolds

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.

    2005-04-01

    Tissue engineering involves regenerating damaged or malfunctioning organs using cells, biomolecules, and synthetic or natural scaffolds. Based on their intended roles, scaffolds can be injected as space-fillers or be preformed and implanted to provide mechanical support. Preformed scaffolds are biomimetic "trellis-like" structures which, on implantation and integration, act as tissue/organ surrogates. Customized, computer controlled, and reproducible preformed scaffolds can be fabricated using Computer Aided Design (CAD) techniques and rapid prototyping devices. A curved, monolithic construct with minimal surface area constitutes an efficient substrate geometry that promotes cell attachment, migration and proliferation. However, current CAD approaches do not provide such a biomorphic construct. We address this critical issue by presenting one of the very first physical realizations of minimal surfaces towards the construction of efficient unit-cell based tissue engineering scaffolds. Mask programmability, and optimal packing density of triply periodic minimal surfaces are used to construct the optimal pore geometry. Budgeted polygonization, and progressive minimal surface refinement facilitate the machinability of these surfaces. The efficient stress distributions, as deduced from the Finite Element simulations, favor the use of these scaffolds for orthopedic applications.
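
    As a generic illustration of a triply periodic minimal surface unit cell, and not the authors' budgeted-polygonization pipeline, the Python sketch below evaluates the standard gyroid level-set approximation on a voxel grid and reports the resulting porosity; the grid size and threshold are arbitrary.

    ```python
    # Evaluate the gyroid level-set approximation on a voxel grid; the zero level
    # set defines a candidate unit-cell pore geometry (illustrative only).
    import numpy as np

    n = 64
    axis = np.linspace(0.0, 2.0 * np.pi, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    gyroid = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

    solid = gyroid > 0.0                       # keep one side of the zero level set
    porosity = 1.0 - solid.mean()
    print(f"voxel porosity of the gyroid unit cell: {porosity:.3f}")
    ```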

  19. HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION

    EPA Science Inventory

    Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...

  20. EPA and States' Collective Efforts Lead to Regulatory Action on Dicamba

    EPA Pesticide Factsheets

    Agreement with Monsanto, BASF and DuPont on measures to further minimize the potential for drift to damage neighboring crops from the use of dicamba formulations used to control weeds in genetically modified cotton and soybeans.

  1. Molgenis-impute: imputation pipeline in a box.

    PubMed

    Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A

    2015-08-19

    Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise is required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute on different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.

  2. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to its millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
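
    The dual decomposition step can be sketched on a toy problem (this is not the thesis' nationwide LTM model): two routes share a single capacity constraint, the constraint is relaxed with a multiplier, and the multiplier is updated by projected subgradient ascent, as in the Python sketch below; the benefits, caps, capacity, and step size are assumed values.

    ```python
    # Toy dual decomposition: relax the shared constraint x1 + x2 <= C with a
    # multiplier lam so the two route subproblems separate; update lam by a
    # projected subgradient step (illustrative only).
    import numpy as np

    C = 6.0
    benefit = np.array([3.0, 5.0])     # marginal benefit of scheduled flow per route
    upper = np.array([8.0, 8.0])       # per-route demand caps

    lam, step = 0.0, 0.1
    for _ in range(500):
        # Separable subproblems: max benefit_i*x_i - 0.5*x_i**2 - lam*x_i on [0, upper_i]
        x = np.clip(benefit - lam, 0.0, upper)
        lam = max(0.0, lam + step * (x.sum() - C))   # dual (price) update

    print("multiplier:", round(lam, 3), "flows:", np.round(x, 3), "total:", round(float(x.sum()), 3))
    ```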

  3. Safeguarding Databases Basic Concepts Revisited.

    ERIC Educational Resources Information Center

    Cardinali, Richard

    1995-01-01

    Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)

  4. Computerizing the Accounting Curriculum.

    ERIC Educational Resources Information Center

    Nash, John F.; England, Thomas G.

    1986-01-01

    Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)

  5. Computers in Schools: White Boys Only?

    ERIC Educational Resources Information Center

    Hammett, Roberta F.

    1997-01-01

    Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)

  6. Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1988-01-01

    Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)

  7. Preparing Future Secondary Computer Science Educators

    ERIC Educational Resources Information Center

    Ajwa, Iyad

    2007-01-01

    Although nearly every college offers a major in computer science, many computer science teachers at the secondary level have received little formal training. This paper presents details of a project that could make a significant contribution to national efforts to improve computer science education by combining teacher education and professional…

  8. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
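
    The Gauss-Newton structure described above can be illustrated on a scalar toy problem, which is not the paper's multivariable matrix-fraction algorithm: fit y = a*exp(b*t) to noisy data by repeatedly solving the normal equations for the parameter update, as in the Python sketch below; the model, data, and starting point are assumptions.

    ```python
    # Minimal Gauss-Newton sketch: iterate theta <- theta + (J^T J)^{-1} J^T r,
    # where r is the residual vector and J the Jacobian of the model.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 2.0, 50)
    y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.normal(size=t.size)

    a, b = 1.5, -1.0                                  # initial guess
    for _ in range(20):
        model = a * np.exp(b * t)
        r = y - model                                 # residuals
        J = np.column_stack([np.exp(b * t),           # d(model)/d(a)
                             a * t * np.exp(b * t)])  # d(model)/d(b)
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        a, b = a + delta[0], b + delta[1]

    print(f"estimated a = {a:.3f}, b = {b:.3f}")
    ```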

  9. On the lower bound of monitor solutions of maximally permissive supervisors for a subclass α-S3PR of flexible manufacturing systems

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh

    2015-01-01

    Recently, a novel and computationally efficient method - based on a vector covering approach - to design optimal control places and an iteration approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems) have been reported. However, the relationship between the structure of the net and the minimal number of monitors required has remained unclear. This paper develops a theory to show that the minimal number of monitors required cannot be less than that of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three controlled systems by Chen et al. are of a minimal monitor configuration, since they belong to α-S3PR and their number of monitors in each example equals that of basic siphons.

  10. How do price minimizing behaviors impact smoking cessation? Findings from the International Tobacco Control (ITC) Four Country Survey.

    PubMed

    Licht, Andrea S; Hyland, Andrew J; O'Connor, Richard J; Chaloupka, Frank J; Borland, Ron; Fong, Geoffrey T; Nargis, Nigar; Cummings, K Michael

    2011-05-01

    This paper examines how price minimizing behaviors impact efforts to stop smoking. Data on 4,988 participants from the International Tobacco Control Policy Evaluation (ITC) Four-Country Survey who were smokers at baseline (wave 5) and interviewed at a 1-year follow-up were used. We examined whether price minimizing behaviors at baseline predicted: (1) cessation, (2) quit attempts, and (3) successful quit attempts at one-year follow-up using multivariate logistic regression modeling. A subset analysis included 3,387 participants who were current smokers at waves 5 and 6 and were followed through wave 7 to explore effects of changing purchase patterns on cessation. Statistical tests for interaction were performed to examine the joint effect of SES and price/tax avoidance behaviors on cessation outcomes. Smokers who engaged in any price/tax avoidance behaviors were 28% less likely to report cessation. Persons using low/untaxed sources were less likely to quit at follow-up, those purchasing cartons were less likely to make quit attempts and quit, and those using discount cigarettes were less likely to succeed, conditional on making attempts. Respondents who utilized multiple behaviors simultaneously were less likely to make quit attempts and to succeed. SES did not modify the effects of price minimizing behaviors on cessation outcomes. The data from this paper indicate that the availability of lower priced cigarette alternatives may attenuate public health efforts aimed at reducing smoking prevalence through price and tax increases among all SES groups.

  11. OxMaR: open source free software for online minimization and randomization for clinical trials.

    PubMed

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
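
    As a generic sketch of the minimization bookkeeping, and not OxMaR's implementation, the Python code below assigns each new participant to the arm with the smaller total imbalance across that participant's factor levels, with a biased coin to retain some randomness; the arms, factors, and follow probability are assumptions.

    ```python
    # Generic minimization allocation sketch (Pocock-Simon flavor, illustrative).
    import random

    arms = ["control", "treatment"]
    factors = ["sex", "age_group"]
    counts = {arm: {f: {} for f in factors} for arm in arms}  # counts[arm][factor][level]

    def allocate(participant, p_follow=0.8):
        # Total imbalance each arm already has for this participant's factor levels.
        imbalance = {arm: sum(counts[arm][f].get(participant[f], 0) for f in factors)
                     for arm in arms}
        best = min(arms, key=lambda a: imbalance[a])
        if imbalance[arms[0]] == imbalance[arms[1]] or random.random() > p_follow:
            best = random.choice(arms)                # tie, or biased-coin override
        for f in factors:                             # record the allocation
            level = participant[f]
            counts[best][f][level] = counts[best][f].get(level, 0) + 1
        return best

    print(allocate({"sex": "F", "age_group": "40-60"}))
    print(allocate({"sex": "F", "age_group": "40-60"}))
    ```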

  12. An Outcome and Cost Analysis Comparing Single-Level Minimally Invasive Transforaminal Lumbar Interbody Fusion Using Intraoperative Fluoroscopy versus Computed Tomography-Guided Navigation.

    PubMed

    Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S

    2016-10-01

    Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores similarly improved in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925), as were payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Computers for the Faculty: How on a Limited Budget.

    ERIC Educational Resources Information Center

    Arman, Hal; Kostoff, John

    An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…

  14. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Treesearch

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  15. Biomechanics of Head, Neck, and Chest Injury Prevention for Soldiers: Phase 2 and 3

    DTIC Science & Technology

    2016-08-01

    understanding of the biomechanics of the head and brain. Task 2.3 details the computational modeling efforts conducted to evaluate the response of the cervical spine and the effects of cervical arthrodesis and arthroplasty during... The section also details the progress made on the development of a testing apparatus to evaluate cervical spine implants in survivable loading scenarios...

  16. Limits on fundamental limits to computation.

    PubMed

    Markov, Igor L

    2014-08-14

    An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.

  17. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  18. Chi-squared and C statistic minimization for low count per bin data

    NASA Astrophysics Data System (ADS)

    Nousek, John A.; Shue, David R.

    1989-07-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
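
    To make the comparison concrete in a generic way (this is not the paper's simulation code), the Python sketch below evaluates the Pearson chi-squared statistic and a Cash-type C statistic on simulated low-count Poisson data over a grid of trial normalizations, showing how the two statistics can prefer different fits when counts per bin are small; the rate, sample size, and grid are assumed.

    ```python
    # Compare chi-squared and a Cash-type C statistic on low-count Poisson data
    # for a single constant-rate model parameter (illustrative only).
    import numpy as np

    rng = np.random.default_rng(5)
    true_rate = 2.0                                   # mean counts per bin (low-count regime)
    counts = rng.poisson(true_rate, size=200)

    def chi2(model):
        return np.sum((counts - model) ** 2 / model)  # Pearson form

    def cash(model):
        stat = 2.0 * np.sum(model - counts)
        pos = counts > 0
        return stat + 2.0 * np.sum(counts[pos] * np.log(counts[pos] / model))

    grid = np.linspace(0.5, 4.0, 351)
    best_chi2 = grid[np.argmin([chi2(m) for m in grid])]
    best_cash = grid[np.argmin([cash(m) for m in grid])]
    print("true rate:", true_rate, "| chi-squared fit:", round(best_chi2, 3),
          "| C-statistic fit:", round(best_cash, 3))
    ```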

  19. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper, an improved pressure-velocity coupling algorithm is proposed based on minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
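
    The underrelaxation-selection idea can be shown on a toy fixed-point iteration (this is not the SIMPLE/SIMPLEC implementation): at each outer iteration a handful of candidate factors are tried and the one giving the smallest global residual norm is kept, as in the Python sketch below; the 2x2 system and the candidate range are arbitrary.

    ```python
    # Pick the underrelaxation factor that minimizes the global residual norm at
    # each iteration of a mock fixed-point update (illustrative only).
    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    def update(x):                       # stand-in for one solver sweep (Jacobi-like)
        return x + (b - A @ x) / np.diag(A)

    def residual_norm(x):
        return np.linalg.norm(b - A @ x)

    x = np.zeros(2)
    candidates = np.linspace(0.3, 0.9, 7)
    for _ in range(20):
        step = update(x) - x
        alpha = min(candidates, key=lambda a: residual_norm(x + a * step))
        x = x + alpha * step

    print("solution:", np.round(x, 4), "residual norm:", residual_norm(x))
    ```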

  20. Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
