Sample records for control computable weak

  1. FAA computer security : concerns remain due to personnel and other continuing weaknesses

    DOT National Transportation Integrated Search

    2000-08-01

    FAA has a history of computer security weaknesses in a number of areas, including its physical security management at facilities that house air traffic control (ATC) systems, systems security for both operational and future systems, management struct...

  2. Air Traffic Control: Weak Computer Security Practices Jeopardize Flight Safety

    DOT National Transportation Integrated Search

    1998-05-01

    Given the paramount importance of computer security of Air Traffic Control (ATC) systems, Congress asked the General Accounting Office to determine (1) whether the Federal Aviation Administration (FAA) is effectively managing physical security at ATC...

  3. Education Financial Management: Weak Internal Controls Led to Instances of Fraud and Other Improper Payments. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    Calbom, Linda M.

    This report to Congressional Requesters is concerned with internal control problems found in the U.S. Department of Education. Significant internal control weaknesses in the U.S. Department of Education's payment processes and poor physical control over its computer assets made the department vulnerable to (and in some cases resulted in) fraud,…

  4. Education Financial Management: Weak Internal Controls Led to Instances of Fraud and Other Improper Payments. Testimony before the Subcommittee on Select Education, Committee on Education and the Workforce, House of Representatives.

    ERIC Educational Resources Information Center

    Calbom, Linda

    This testimony summarizes a report generated by the U.S. General Accounting Office concerned with internal control problems found in the U.S. Department of Education. Significant internal control weaknesses in the U.S. Department of Education's payment processes and poor physical control over its computer assets made the department vulnerable to…

  5. Emulating weak localization using a solid-state quantum circuit.

    PubMed

    Chen, Yu; Roushan, P; Sank, D; Neill, C; Lucero, Erik; Mariantoni, Matteo; Barends, R; Chiaro, B; Kelly, J; Megrant, A; Mutus, J Y; O'Malley, P J J; Vainsencher, A; Wenner, J; White, T C; Yin, Yi; Cleland, A N; Martinis, John M

    2014-10-14

    Quantum interference is one of the most fundamental physical effects found in nature. Recent advances in quantum computing now employ interference as a fundamental resource for computation and control. Quantum interference also lies at the heart of sophisticated condensed matter phenomena such as Anderson localization, phenomena that are difficult to reproduce in numerical simulations. Here, employing a multiple-element superconducting quantum circuit, with which we manipulate a single microwave photon, we demonstrate that we can emulate the basic effects of weak localization. By engineering the control sequence, we are able to reproduce the well-known negative magnetoresistance of weak localization as well as its temperature dependence. Furthermore, we can use our circuit to continuously tune the level of disorder, a parameter that is not readily accessible in mesoscopic systems. Demonstrating a high level of control, our experiment shows the potential for employing superconducting quantum circuits as emulators for complex quantum phenomena.
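
The Anderson-localization physics that record 5 emulates can also be glimpsed in a minimal numerical model. The sketch below is not from the paper and all parameters are illustrative: it diagonalizes a 1D tight-binding Hamiltonian with and without on-site disorder and compares the inverse participation ratio (IPR), which grows as eigenstates localize.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200          # lattice sites
W = 3.0          # disorder strength (hypothetical value)

# Tight-binding Hamiltonian: hopping t = 1, random on-site energies in [-W/2, W/2]
H = np.diag(rng.uniform(-W / 2, W / 2, N))
H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

vals, vecs = np.linalg.eigh(H)

# Inverse participation ratio: ~1/N for extended states, O(1) for localized ones
ipr = np.sum(np.abs(vecs) ** 4, axis=0)
print(f"mean IPR with disorder: {ipr.mean():.3f}")

# Compare with the clean lattice (W = 0), where all eigenstates are extended
H0 = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
_, vecs0 = np.linalg.eigh(H0)
ipr0 = np.sum(np.abs(vecs0) ** 4, axis=0)
print(f"mean IPR without disorder: {ipr0.mean():.3f}")
```

With disorder the mean IPR rises well above the clean-lattice value of roughly 1.5/N, the numerical signature of localized eigenstates.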

  6. Symmetry structure in discrete models of biochemical systems: natural subsystems and the weak control hierarchy in a new model of computation driven by interactions.

    PubMed

    Nehaniv, Chrystopher L; Rhodes, John; Egri-Nagy, Attila; Dini, Paolo; Morris, Eric Rothstein; Horváth, Gábor; Karimi, Fariba; Schreckling, Daniel; Schilstra, Maria J

    2015-07-28

    Interaction computing is inspired by the observation that cell metabolic/regulatory systems construct order dynamically, through constrained interactions between their components and based on a wide range of possible inputs and environmental conditions. The goals of this work are to (i) identify and understand mathematically the natural subsystems and hierarchical relations in natural systems enabling this and (ii) use the resulting insights to define a new model of computation based on interactions that is useful for both biology and computation. The dynamical characteristics of the cellular pathways studied in systems biology relate, mathematically, to the computational characteristics of automata derived from them, and their internal symmetry structures to computational power. Finite discrete automata models of biological systems such as the lac operon, the Krebs cycle and p53-mdm2 genetic regulation constructed from systems biology models have canonically associated algebraic structures (their transformation semigroups). These contain permutation groups (local substructures exhibiting symmetry) that correspond to 'pools of reversibility'. These natural subsystems are related to one another in a hierarchical manner by the notion of 'weak control'. We present natural subsystems arising from several biological examples and their weak control hierarchies in detail. Finite simple non-Abelian groups are found in biological examples and can be harnessed to realize finitary universal computation. This allows ensembles of cells to achieve any desired finitary computational transformation, depending on external inputs, via suitably constrained interactions. Based on this, interaction machines that grow and change their structure recursively are introduced and applied, providing a natural model of computation driven by interactions.
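
The algebraic objects the abstract names, transformation semigroups and the permutation groups inside them ("pools of reversibility"), can be computed directly for a toy automaton. The sketch below uses a hypothetical three-state example, not one of the paper's biological models:

```python
# Toy automaton on 3 states with two inputs, each input a map state -> state.
# (Hypothetical example; the paper's models, e.g. the lac operon, are larger.)
gens = {
    "a": (1, 2, 0),   # a cyclic permutation: a 'pool of reversibility'
    "b": (0, 0, 2),   # a non-invertible (collapsing) map
}

def compose(f, g):
    """Apply f, then g (maps encoded as tuples over states 0..n-1)."""
    return tuple(g[f[s]] for s in range(len(f)))

# Close the generators under composition: the transformation semigroup.
semigroup = set(gens.values())
frontier = set(gens.values())
while frontier:
    new = {compose(f, g) for f in frontier for g in gens.values()} - semigroup
    semigroup |= new
    frontier = new

# Invertible elements are exactly the words in 'a' alone: a, a^2, identity.
perms = {f for f in semigroup if len(set(f)) == len(f)}
print(f"|S| = {len(semigroup)}, of which {len(perms)} are permutations")
```

Any composite containing the collapsing map `b` is non-injective, so the permutations form the cyclic group generated by `a`, a minimal instance of a local symmetry structure sitting inside a larger, irreversible semigroup.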

  7. Yearbook Production: Yearbook Staffs Can Now "Blame" Strengths, Weaknesses on Computer as They Take More Control of Their Publications.

    ERIC Educational Resources Information Center

    Hall, H. L.

    1988-01-01

    Reports on the advantages and disadvantages of desktop publishing, using the Apple Macintosh and "Pagemaker" software, to produce a high school yearbook. Asserts that while desktop publishing may be initially more time consuming for those unfamiliar with computers, desktop publishing gives high school journalism staffs more control over…

  8. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-12-31

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General`s Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
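
The password-security tool described above is in the spirit of a dictionary check. The sketch below is hypothetical (UNIX-CAATS itself audited BSD password files hashed with crypt(3); SHA-256 is used here only to keep the example self-contained):

```python
import hashlib

# Hypothetical dictionary check: flag accounts whose password hash
# matches the hash of a common-wordlist entry.
wordlist = ["password", "welcome1", "letmein", "dragon"]

def h(pw):
    # Stand-in hash; real systems use salted crypt(3)/SHA-512, not bare SHA-256
    return hashlib.sha256(pw.encode()).hexdigest()

# Toy shadow-style entries: user -> stored hash
accounts = {"alice": h("letmein"), "bob": h("T7#kz9!qLm")}

cracked = {user: word
           for user, digest in accounts.items()
           for word in wordlist
           if h(word) == digest}
print(cracked)  # accounts with dictionary passwords
```

Only the account with a wordlist password is flagged; the audit finding of "weak password management" corresponds to a non-empty result on real data.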

  9. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-01-01

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.

  10. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    DOE PAGES

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit

    2018-04-20

    Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled, and we present a path towards precise determinations in subsequent works.
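
For context, the current-current part of the weak effective Hamiltonian whose coefficients C1 and C2 the paper computes has the schematic form below (overall normalization and CKM factors are convention-dependent):

```latex
H_{\mathrm{eff}} \;=\; \frac{G_F}{\sqrt{2}}\, V_{\mathrm{CKM}}
  \left( C_1(\mu)\, Q_1 \;+\; C_2(\mu)\, Q_2 \right) + \cdots
```

Here Q1 and Q2 are the two current-current four-quark operators and the Wilson coefficients C1(μ), C2(μ) encode the short-distance physics that the lattice method targets nonperturbatively.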

  11. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    NASA Astrophysics Data System (ADS)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit; Rbc; Ukqcd Collaborations

    2018-04-01

    We propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2 , related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  12. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit

    Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled, and we present a path towards precise determinations in subsequent works.

  13. Foreign Language Teaching and the Computer.

    ERIC Educational Resources Information Center

    Garrett, Nina; Hart, Robert S.

    1988-01-01

    A review of the APPLE MACINTOSH-compatible software "Conjugate! Spanish," intended to drill Spanish verb forms, points out its strengths (error feedback, user manual, user interface, and feature control) and its weaknesses (pedagogical approach). (CB)

  14. The near optimality of the stabilizing control in a weakly nonlinear system with state-dependent coefficients

    NASA Astrophysics Data System (ADS)

    Dmitriev, Mikhail G.; Makarov, Dmitry A.

    2016-08-01

    We analyze the near-optimality of a computationally efficient nonlinear stabilizing control constructed for weakly nonlinear systems with coefficients depending on the state and on a formal small parameter. The problem was first investigated in [M. G. Dmitriev and D. A. Makarov, "The suboptimality of stabilizing regulator in a quasi-linear system with state-depended coefficients," in 2016 International Siberian Conference on Control and Communications (SIBCON) Proceedings, National Research University, Moscow, 2016]. In this paper, different representations of the optimal control and the gain matrix are used, and theoretical results analogous to those of the cited work are obtained. As in the cited work, we also construct the form of the quality criterion with respect to which this closed-loop control is optimal.
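
The general idea of stabilization with state-dependent coefficients can be sketched in one dimension: freeze the coefficient at the current state, solve the corresponding algebraic Riccati equation, and apply the resulting gain. The system and all numbers below are hypothetical, not the paper's:

```python
import numpy as np

# Scalar sketch: x' = a(x) x + b u, cost integral of (q x^2 + r u^2) dt.
b, q, r = 1.0, 1.0, 1.0
a = lambda x: 0.5 + 0.1 * x**2          # state-dependent coefficient (toy)

def sdre_gain(x):
    """Positive root of the scalar Riccati equation 2 a p - (b^2/r) p^2 + q = 0."""
    ax = a(x)
    p = r * (ax + np.sqrt(ax**2 + q * b**2 / r)) / b**2
    return b * p / r

x, dt = 2.0, 0.001
for _ in range(10_000):                  # 10 s of explicit-Euler integration
    u = -sdre_gain(x) * x
    x += dt * (a(x) * x + b * u)
print(f"x(10) = {x:.2e}")
```

The closed-loop rate is a(x) - b * gain = -sqrt(a(x)^2 + q b^2 / r) < 0 for every state, so the frozen-coefficient feedback drives the (open-loop unstable) toy system to the origin.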

  15. Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques

    NASA Astrophysics Data System (ADS)

    Somov, Yevgeny

    2014-12-01

    Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: the directed respinup of a spacecraft by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic observation of the Earth, among others) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.

  16. Nonlinear dynamics of mini-satellite respinup by weak internal controllable torques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somov, Yevgeny, E-mail: e-somov@mail.ru

    Contemporary space engineering has posed a new problem for theoretical mechanics and motion control theory: the directed respinup of a spacecraft by weak, restricted internal control forces. The paper presents some results on this problem, which is highly relevant to the energy supply of information mini-satellites (for communication, geodesy, and radio- and opto-electronic observation of the Earth, among others) with electro-reaction plasma thrusters and a gyro moment cluster based on reaction wheels or control moment gyros. The solution achieved is based on methods for the synthesis of nonlinear robust control and on a rigorous analytical proof of the required spacecraft rotation stability by the Lyapunov function method. These results were verified by computer simulation of strongly nonlinear oscillatory processes during respinup of a flexible spacecraft.

  17. Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1982.

    DTIC Science & Technology

    1982-12-01

    established in the 1960s, when they first became a means for simplified computation of optimal trajectories. It was soon recognized that singular... null-space of P(a0). The asymptotic values of the invariant zeros and associated invariant-zero directions as ε → 0 are the values computed from the... 7. WEAK COUPLING AND TIME SCALES: The need for model simplification with a reduction (or distribution) of computational effort is

  18. Federal Family Education Loan Information System. Weak Computer Controls Increase Risk of Unauthorized Access to Sensitive Data. Report to the Secretary of Education.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Accounting and Information Management Div.

    This report presents an evaluation of the general controls over the Federal Family Education Loan Program (FFELP) information system maintained and operated by a contractor for the U.S. Department of Education (ED), which administers FFELP. The evaluation found that ED's general controls over the FFELP information system did not adequately protect…

  19. Computer-Controlled System for Plasma Ion Energy Auto-Analyzer

    NASA Astrophysics Data System (ADS)

    Wu, Xian-qiu; Chen, Jun-fang; Jiang, Zhen-mei; Zhong, Qing-hua; Xiong, Yu-ying; Wu, Kai-hua

    2003-02-01

    A computer-controlled system for a plasma ion energy auto-analyzer was studied for rapid, online measurement of the plasma ion energy distribution. The system intelligently controls all of the equipment via an RS-232 port, a printer port, and a home-built circuit. The software, written in the LabVIEW G language, automatically fulfils all of the tasks, such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing, and graphic export. Using the system, only a few minutes are needed to acquire the whole ion energy distribution, which rapidly provides important parameters for plasma process techniques based on semiconductor devices and microelectronics.

  20. Finite element solution of optimal control problems with state-control inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1992-01-01

    It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.

  1. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity

    PubMed Central

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-01-01

    Most previous quantum computation schemes make use of only one degree of freedom (DoF) of photons, although an experimental system may possess various DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-not (CNOT) gates operating on the polarization and spatial DoFs of a two-photon or one-photon system. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus, half of the quantum simulation resources may be saved in quantum applications when more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to implement various applications, including quantum teleportation and quantum superdense coding. PMID:27424767

  2. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity.

    PubMed

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-07-18

    Most previous quantum computation schemes make use of only one degree of freedom (DoF) of photons, although an experimental system may possess various DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-not (CNOT) gates operating on the polarization and spatial DoFs of a two-photon or one-photon system. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus, half of the quantum simulation resources may be saved in quantum applications when more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to implement various applications, including quantum teleportation and quantum superdense coding.

  3. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
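
For reference, the first-order necessary conditions that a weak Hamiltonian formulation discretizes are the standard Pontryagin ones (control inequality constraints replace the stationarity condition with Pontryagin's minimum condition):

```latex
H(\mathbf{x},\boldsymbol{\lambda},\mathbf{u})
  = L(\mathbf{x},\mathbf{u})
  + \boldsymbol{\lambda}^{\mathsf{T}}\,\mathbf{f}(\mathbf{x},\mathbf{u}),
\qquad
\dot{\mathbf{x}} = \frac{\partial H}{\partial \boldsymbol{\lambda}},
\qquad
\dot{\boldsymbol{\lambda}} = -\frac{\partial H}{\partial \mathbf{x}},
\qquad
\frac{\partial H}{\partial \mathbf{u}} = \mathbf{0}.
```

The "weak" form integrates these conditions against test functions, which is what allows the very crude shape functions and the discontinuities in states mentioned in the abstract.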

  4. Non-neural Muscle Weakness Has Limited Influence on Complexity of Motor Control during Gait

    PubMed Central

    Goudriaan, Marije; Shuman, Benjamin R.; Steele, Katherine M.; Van den Hauwe, Marleen; Goemans, Nathalie; Molenaers, Guy; Desloovere, Kaat

    2018-01-01

    Cerebral palsy (CP) and Duchenne muscular dystrophy (DMD) are neuromuscular disorders characterized by muscle weakness. Weakness in CP has neural and non-neural components, whereas in DMD, weakness can be considered as a predominantly non-neural problem. Despite the different underlying causes, weakness is a constraint for the central nervous system when controlling gait. CP demonstrates decreased complexity of motor control during gait from muscle synergy analysis, which is reflected by a higher total variance accounted for by one synergy (tVAF1). However, it remains unclear if weakness directly contributes to higher tVAF1 in CP, or whether altered tVAF1 reflects mainly neural impairments. If muscle weakness directly contributes to higher tVAF1, then tVAF1 should also be increased in DMD. To examine the etiology of increased tVAF1, muscle activity data of gluteus medius, rectus femoris, medial hamstrings, medial gastrocnemius, and tibialis anterior were measured at self-selected walking speed, and strength data from knee extensors, knee flexors, dorsiflexors and plantar flexors, were analyzed in 15 children with CP [median (IQR) age: 8.9 (2.2)], 15 boys with DMD [8.7 (3.1)], and 15 typical developing (TD) children [8.6 (2.7)]. We computed tVAF1 from 10 concatenated steps with non-negative matrix factorization, and compared tVAF1 between the three groups with a Mann-Whitney U-test. Spearman's rank correlation coefficients were used to determine if weakness in specific muscle groups contributed to altered tVAF1. No significant differences in tVAF1 were found between DMD [tVAF1: 0.60 (0.07)] and TD children [0.65 (0.07)], while tVAF1 was significantly higher in CP [(0.74 (0.09)] than in the other groups (both p < 0.005). In CP, weakness in the plantar flexors was related to higher tVAF1 (r = −0.72). In DMD, knee extensor weakness related to increased tVAF1 (r = −0.50). These results suggest that the non-neural weakness in DMD had limited influence on complexity of motor control during gait and that the higher tVAF1 in children with CP is mainly related to neural impairments caused by the brain lesion. PMID:29445330
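
The tVAF1 measure used in the study can be reproduced on synthetic data with a small non-negative matrix factorization. This sketch implements plain multiplicative-update NMF; the study used EMG from 10 concatenated gait steps, whereas everything below is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic EMG envelopes: 5 muscles x 500 time samples (stand-in for
# concatenated gait cycles), generated from 2 underlying synergies plus noise.
true_w = rng.uniform(0, 1, (5, 2))
true_h = rng.uniform(0, 1, (2, 500))
emg = true_w @ true_h + 0.05 * rng.uniform(0, 1, (5, 500))

def tvaf(X, k, iters=500):
    """Total variance accounted for by a k-synergy NMF (multiplicative updates)."""
    n, t = X.shape
    W = rng.uniform(0.1, 1, (n, k))
    H = rng.uniform(0.1, 1, (k, t))
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    resid = X - W @ H
    return 1 - np.sum(resid**2) / np.sum(X**2)

tvaf1 = tvaf(emg, 1)
tvaf2 = tvaf(emg, 2)
print(f"tVAF1 = {tvaf1:.3f}, tVAF2 = {tvaf2:.3f}")
```

A higher tVAF1 means one synergy already explains most of the signal, i.e. lower complexity of motor control, which is the quantity compared across the CP, DMD, and TD groups.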

  5. Analyzing Dynamics of Cooperating Spacecraft

    NASA Technical Reports Server (NTRS)

    Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.

    2004-01-01

    A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but have not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently, and are synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes that the user can choose are Runge-Kutta Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.
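
The weak-coupling scheme described above (independent propagation, kept time-synchronized by a shared step size) can be sketched with a generic RK4 integrator. The dynamics below are toy stand-ins, not the library's force models:

```python
import numpy as np

def rk4_step(f, y, t, h):
    """One classical Runge-Kutta 4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Independent dynamics for each "spacecraft" (no mutual forces: weak coupling).
# Harmonic oscillators stand in for orbital motion here.
f1 = lambda t, y: np.array([y[1], -1.0 * y[0]])   # angular frequency 1
f2 = lambda t, y: np.array([y[1], -4.0 * y[0]])   # angular frequency 2

y1, y2 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
t, h, t_end = 0.0, 0.01, 2 * np.pi
while t < t_end - 1e-12:
    h_step = min(h, t_end - t)      # shared step keeps both epochs aligned
    y1 = rk4_step(f1, y1, t, h_step)
    y2 = rk4_step(f2, y2, t, h_step)
    t += h_step

# After one period of the first oscillator (two of the second),
# both states should return near their initial conditions.
print(y1, y2)
```

Under strong coupling the two states would instead be stacked into one vector and integrated by a single call per step, which is the other mode the library offers.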

  6. Quantum evolution: The case of weak localization for a 3D alloy-type Anderson model and application to Hamiltonian based quantum computation

    NASA Astrophysics Data System (ADS)

    Cao, Zhenwei

    Over the years, people have found Quantum Mechanics to be extremely useful in explaining various physical phenomena from a microscopic point of view. Anderson localization, named after physicist P. W. Anderson, states that disorder in a crystal can cause non-spreading of wave packets, which is one possible mechanism (at the single-electron level) to explain metal-insulator transitions. The theory of quantum computation promises to bring greater computational power over classical computers by making use of some special features of Quantum Mechanics. The first part of this dissertation considers a 3D alloy-type model, where the Hamiltonian is the sum of the finite difference Laplacian corresponding to free motion of an electron and a random potential generated by a sign-indefinite single-site potential. The result shows that localization occurs in the weak disorder regime, i.e., when the coupling parameter λ is very small, for energies E ≤ −Cλ². The second part of this dissertation considers adiabatic quantum computing (AQC) algorithms for the unstructured search problem in the case when the number of marked items is unknown. In an ideal situation, an explicit quantum algorithm together with a counting subroutine are given that achieve the optimal Grover speedup over classical algorithms, i.e., roughly speaking, reduce O(2^n) to O(2^(n/2)), where n is the size of the problem. However, if one considers more realistic settings, the result shows this quantum speedup is achievable only under a very rigid control precision requirement (e.g., exponentially small control error).
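
The Grover speedup the abstract quotes is easy to check with a statevector simulation: about (π/4)√N oracle queries suffice where a classical search needs O(N). The problem size and marked item below are arbitrary:

```python
import numpy as np

# Statevector simulation of Grover's unstructured search on n qubits.
n = 6
N = 2 ** n
marked = 42                       # hypothetical single marked item

state = np.ones(N) / np.sqrt(N)   # uniform superposition
oracle = np.ones(N)
oracle[marked] = -1.0             # phase-flip oracle

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ sqrt(N) queries
for _ in range(iterations):
    state *= oracle                       # mark: flip the sign of the target
    state = 2 * state.mean() - state      # diffuse: inversion about the mean

p_success = state[marked] ** 2
print(f"{iterations} Grover iterations, P(marked) = {p_success:.4f}")
```

With N = 64, six iterations drive the success probability above 99%, versus an expected ~32 classical queries, which is the O(2^n) to O(2^(n/2)) reduction in miniature.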

  7. Symmetric weak ternary quantum homomorphic encryption schemes

    NASA Astrophysics Data System (ADS)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme about generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that the attacker can correctly guess the encryption key with a maximum probability p_k = 1/3^(3n), thus it can better protect the privacy of users’ data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users’ private quantum information can be well protected in a distributed computing environment.
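
The quoted key-guessing bound p_k = 1/3^(3n) follows from the key consisting of 3n uniform trits, so a single guess succeeds with probability 3^(-3n). A small arithmetic check (the values of n are arbitrary):

```python
from fractions import Fraction

def p_guess(n):
    """Probability of guessing a key of 3n uniform trits in one attempt."""
    return Fraction(1, 3 ** (3 * n))

for n in (1, 2, 3):
    print(f"n = {n}: p_k = {p_guess(n)} = {float(p_guess(n)):.3e}")
```

Already at n = 2 the bound is 1/729, and it shrinks exponentially in the number of qutrits.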

  8. Gamow-Teller strength and lepton captures rates on 66-71Ni in stellar matter

    NASA Astrophysics Data System (ADS)

    Nabi, Jameel-Un; Majid, Muhammad

    Charge-changing transitions play a significant role in stellar weak-decay processes. The fate of the massive stars is decided by these weak-decay rates including lepton (positron and electron) captures rates, which play a consequential role in the dynamics of core collapse. As per previous simulation results, weak interaction rates on nickel (Ni) isotopes have significant influence on the stellar core vis-à-vis controlling the lepton content of stellar matter throughout the silicon shell burning phases of high mass stars up to the presupernova stages. In this paper, we perform a microscopic calculation of Gamow-Teller (GT) charge-changing transitions, in the β-decay and electron capture (EC) directions, for neutron-rich Ni isotopes (66-71Ni). We further compute the associated weak-decay rates for these selected Ni isotopes in stellar environment. The computations are accomplished by employing the deformed proton-neutron quasiparticle random phase approximation (pn-QRPA) model. A recent study showed that the deformed pn-QRPA theory is well suited for the estimation of GT transitions. The astral weak-decay rates are determined over densities in the range of 10–10^11 g/cm^3 and temperatures in the range of 0.01 × 10^9–30 × 10^9 K. The calculated lepton capture rates are compared with the previous calculation of Pruet and Fuller (PF). The overall comparison demonstrates that, at low stellar densities and high temperatures, our EC rates are bigger by as much as two orders of magnitude. Our results show that, at higher temperatures, the lepton capture rates are the dominant mode for the stellar weak rates and the corresponding lepton emission rates may be neglected.

  9. Ground Support Strategies at the Turquoise Ridge Joint Venture, Nevada

    NASA Astrophysics Data System (ADS)

    Sandbak, L. A.; Rai, A. R.

    2013-05-01

    Weak rock masses of high grade Carlin-trend gold mineralization are encountered in the Turquoise Ridge Joint Venture underground mine. The sediments consist of very weak and altered limestone, mudstone, and carbon-rich clays. The rock mass ratings are described as very poor to poor (Bieniawski in Proceedings of the symposium on exploration for rock engineering, Johannesburg, South Africa, pp. 97-106, 1976). The undercut and fill or boxes stoping mining methods are used because of the low dipping ore body geometry, complex geology, and weak rock mass. Design criteria are chosen to keep openings in weak rock as small as possible to prevent unraveling and to minimize supplementary support. Typical ground support for drifting includes the use of bolts, mesh, spiling, and shotcrete. Quality control of cemented rock fill (CRF) through sampling and aggregate sieve testing is necessary to ensure high support strength. Specific support may include shotcrete arches with steel ring sets and CRF "arches" as a replacement of weak rock masses around long-term mine openings. Movement monitoring is utilized in problem areas and is needed to quantify and validate computer modeling.

  10. Department of Defense Progress in Financial Management Reform

    DTIC Science & Technology

    2000-05-09

    financial reporting, incomplete documentation, and weak internal controls, including computer controls, continue to prevent the government from accurately reporting a significant portion of its assets, liabilities, and costs. Material financial management deficiencies identified at DOD, taken together, continue to represent the single largest obstacle that must be effectively addressed to achieve an unqualified opinion on the U.S. government’s consolidated financial statements. DOD’s vast operations--with an estimated $1 trillion in assets, nearly $1

  11. Coherent quantum dynamics in steady-state manifolds of strongly dissipative systems.

    PubMed

    Zanardi, Paolo; Campos Venuti, Lorenzo

    2014-12-12

    Recently, it has been realized that dissipative processes can be harnessed and exploited to the end of coherent quantum control and information processing. In this spirit, we consider strongly dissipative quantum systems admitting a nontrivial manifold of steady states. We show how one can enact adiabatic coherent unitary manipulations, e.g., quantum logical gates, inside this steady-state manifold by adding a weak, time-rescaled Hamiltonian term to the system's Liouvillian. The effective long-time dynamics is governed by a projected Hamiltonian which results from the interplay between the weak unitary control and the fast relaxation process. The leakage outside the steady-state manifold entailed by the Hamiltonian term is suppressed by an environment-induced symmetrization of the dynamics. We present applications to quantum computation in decoherence-free subspaces and noiseless subsystems, and a numerical analysis of nonadiabatic errors.
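    Schematically (our notation, not taken verbatim from the paper), the scheme drives a strongly dissipative Liouvillian with a weak, time-rescaled Hamiltonian term, and the effective long-time dynamics is that Hamiltonian projected onto the steady-state manifold:

```latex
\[
\dot\rho \;=\; \mathcal{L}_0(\rho)\;-\;i\,[H(t),\rho],
\qquad
\dot\rho_{\mathrm{eff}} \;\approx\; \mathcal{P}\bigl(-i[H,\,\cdot\,]\bigr)\mathcal{P}\,\rho_{\mathrm{eff}},
\]
```

    where \(\mathcal{L}_0\) is the strongly dissipative Liouvillian, \(\mathcal{P}\) is the projection onto its steady-state manifold (\(\ker \mathcal{L}_0\)), and the time rescaling of \(H\) keeps relaxation fast on the control timescale.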

  12. Finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin

    1992-01-01

    A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.

  13. Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.

    PubMed

    Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean

    2016-08-01

    Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small scale underwater vehicle, making underwater autonomous navigation feasible in real-time.

  14. Multimodel methods for optimal control of aeroacoustics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guoquan; Collis, Samuel Scott

    2005-01-01

    A new multidomain/multiphysics computational framework for optimal control of aeroacoustic noise has been developed, based on a near-field compressible Navier-Stokes solver coupled with a far-field linearized Euler solver, both using a discontinuous Galerkin formulation. In this approach, the coupling of near- and far-field domains is achieved by weakly enforcing continuity of normal fluxes across a coupling surface that encloses all nonlinearities and noise sources. For optimal control, gradient information is obtained by the solution of an appropriate adjoint problem that involves the propagation of adjoint information from the far field to the near field. This computational framework has been successfully applied to study optimal boundary control of blade-vortex interaction, which is a significant noise source for helicopters on approach to landing. In the model problem presented here, the noise propagated toward the ground is reduced by 12 dB.

  15. Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Rossiter, B. N.; Heather, M. A.

    2004-08-01

    Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data onto idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to also be natural. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.

  16. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
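    The core idea, linguistic rules relating input variables to output variables with only light processing at run time, can be sketched in a toy two-rule example (the membership functions, rule set, and numeric outputs below are our illustration, not the project's actual rulebase):

```python
def fuzzy_speed(distance):
    """Toy fuzzy inference: two linguistic rules relating a normalized
    obstacle distance in [0, 1] to an arm speed command.
      IF distance is NEAR THEN speed is SLOW (0.2)
      IF distance is FAR  THEN speed is FAST (1.0)
    Defuzzified by a weighted average of rule activations."""
    near = max(0.0, 1.0 - distance)      # membership degree in NEAR
    far = min(1.0, max(0.0, distance))   # membership degree in FAR
    return (near * 0.2 + far * 1.0) / (near + far)

print(fuzzy_speed(0.0))   # 0.2  (fully NEAR -> SLOW)
print(fuzzy_speed(1.0))   # 1.0  (fully FAR  -> FAST)
```

    Because the rulebase is fixed off-line, the on-line work is just a few membership evaluations and a weighted average, which is what makes the approach cheap in real time.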

  17. Symmetry-protected coherent relaxation of open quantum systems

    NASA Astrophysics Data System (ADS)

    van Caspel, Moos; Gritsev, Vladimir

    2018-05-01

    We compute the effect of Markovian bulk dephasing noise on the staggered magnetization of the spin-1/2 XXZ Heisenberg chain, as the system evolves after a Néel quench. For sufficiently weak system-bath coupling, the unitary dynamics are found to be preserved up to a single exponential damping factor. This is a consequence of the interplay between PT symmetry and weak symmetries, which strengthens previous predictions for PT-symmetric Liouvillian dynamics. Requirements are a nondegenerate PT-symmetric generator of time evolution L̂, a weak parity symmetry, and an observable that is antisymmetric under this parity transformation. The spectrum of L̂ then splits up into symmetry sectors, yielding the same decay rate for all modes that contribute to the observable's time evolution. This phenomenon may be realized in trapped-ion experiments and has possible implications for the control of decoherence in out-of-equilibrium many-body systems.

  18. Complementary code and digital filtering for detection of weak VHF radar signals from the mesoscale. [SOUSY-VHF radar, Harz Mountains, Germany

    NASA Technical Reports Server (NTRS)

    Schmidt, G.; Ruster, R.; Czechowsky, P.

    1983-01-01

    The SOUSY-VHF-Radar operates at a frequency of 53.5 MHz in a valley in the Harz mountains, Germany, 90 km from Hanover. The radar controller, which is programmed by a 16-bit computer, holds 1024 program steps in core and controls, via 8 channels, the whole radar system: in particular the master oscillator, the transmitter, the transmit-receive switch, the receiver, the analog-to-digital converter, and the hardware adder. The high-sensitivity receiver has a dynamic range of 70 dB and a video bandwidth of 1 MHz. Phase coding schemes are applied, in particular for investigations at mesospheric heights, in order to carry out measurements with the maximum duty cycle and the maximum height resolution. The computer takes the data from the adder and stores it on magnetic tape or disc. The radar controller is programmed by the computer using simple FORTRAN IV statements. After the program has been loaded and the computer has started the radar controller, it runs automatically, stopping at the program end. If errors or failures occur during radar operation, the radar controller is shut off either by a safety circuit, a power-failure circuit, or a parity-check system.

  19. Exploratory Lattice QCD Study of the Rare Kaon Decay K⁺→π⁺νν̄.

    PubMed

    Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T

    2017-06-23

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K⁺→π⁺νν̄ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  20. Exploratory Lattice QCD Study of the Rare Kaon Decay K⁺→π⁺νν̄

    NASA Astrophysics Data System (ADS)

    Bai, Ziyuan; Christ, Norman H.; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T.; Rbc-Ukqcd Collaboration

    2017-06-01

    We report a first, complete lattice QCD calculation of the long-distance contribution to the K⁺→π⁺νν̄ decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.

  1. A Web-Based Monitoring System for Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Salas, Andrea O.; Weston, Robert P.

    1998-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.

  2. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation and taking advantage of the scaling-factor indetermination in blind source separation (BSS) processing, we propose a scaling technique, applied to the separation matrix, to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate its effectiveness using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation than Euclidean normalisation.
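    The underlying idea, rescaling each row of the separating matrix by a power of two so the recovered sources neither saturate nor vanish, can be sketched as follows (an illustrative NumPy sketch; the function name and the target amplitude range are our own choices, not the paper's):

```python
import numpy as np

def rescale_rows_pow2(W, Y):
    """Scale each row of the separating matrix W by a power of two so that
    the corresponding recovered source in Y has peak amplitude in (0.5, 1].
    Power-of-two gains are cheap in hardware: only the floating-point
    exponent changes, so no multiplier is required."""
    W = W.copy()
    for i, y in enumerate(Y):
        peak = np.max(np.abs(y))
        if peak == 0:
            continue                       # silent output: leave row as-is
        k = -int(np.ceil(np.log2(peak)))   # 2**k * peak lands in (0.5, 1]
        W[i] *= 2.0 ** k
    return W
```

    For example, a row whose output peaks at 8.0 is scaled by 1/8, and one peaking at 0.1 is scaled by 8, bringing both into the target range without any division.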

  3. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
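    The baseline the RFC is compared against, linear quadratic feedback, can be sketched as a discrete-time Riccati iteration (a generic textbook sketch, not the RFC algorithm itself; the dense n-by-n matrix products here are exactly what makes standard LQG scale poorly with the number of unknowns):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=1000):
    """Discrete-time LQR feedback gain K (control law u = -K x) minimizing
    sum(x'Qx + u'Ru), computed by iterating the Riccati recursion to a
    fixed point. Cost per step is dominated by dense n x n matrix products."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # gain at current P
        P = Q + A.T @ P @ (A - B @ K)               # Riccati update
    return K
```

    For the scalar system A = B = Q = R = 1, the fixed point is the golden ratio and the gain converges to (sqrt(5) - 1)/2, a standard sanity check for such an implementation.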

  4. LASER BIOLOGY: Optomechanical tests of hydrated biological tissues subjected to laser shaping

    NASA Astrophysics Data System (ADS)

    Omel'chenko, A. I.; Sobol', E. N.

    2008-03-01

    The mechanical properties of a matrix are studied upon changing the size and shape of biological tissues during dehydration caused by weak laser-induced heating. The cartilage deformation, dehydration dynamics, and hydraulic conductivity are measured upon laser heating. The hydrated state and the shape of samples of separated fascias and cartilaginous tissues were controlled by using computer-aided processing of tissue images in polarised light.

  5. Weak periodic solutions of xẍ + 1 = 0 and the Harmonic Balance Method

    NASA Astrophysics Data System (ADS)

    García-Saldaña, J. D.; Gasull, A.

    2017-02-01

    We prove that the differential equation xẍ + 1 = 0 has continuous weak periodic solutions and compute their periods. Then, we use the Harmonic Balance Method until order six to approximate these periods and to illustrate how the accuracy of the method increases with the order. Our computations rely on the Gröbner basis approach.
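    A first-order version of the computation can be sketched with SymPy (one-term ansatz only; the paper carries the Harmonic Balance Method to order six via Gröbner bases, so the relation below is only the leading approximation):

```python
import sympy as sp

t, a, w = sp.symbols('t a omega', positive=True)
x = a * sp.cos(w * t)                 # one-term harmonic-balance ansatz
residual = x * sp.diff(x, t, 2) + 1   # left-hand side of x*x'' + 1 = 0
T = 2 * sp.pi / w

# Project the residual onto the constant Fourier mode over one period
c0 = sp.simplify(sp.integrate(residual, (t, 0, T)) / T)  # 1 - a**2*omega**2/2
w1 = sp.solve(sp.Eq(c0, 0), w)[0]                        # omega = sqrt(2)/a
period = sp.simplify(2 * sp.pi / w1)                     # sqrt(2)*pi*a
print(period)
```

    Setting the constant Fourier coefficient of the residual to zero fixes the frequency in terms of the amplitude a; higher orders refine this period estimate.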

  6. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, P. T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  7. dc Resistivity of Quantum Critical, Charge Density Wave States from Gauge-Gravity Duality

    NASA Astrophysics Data System (ADS)

    Amoretti, Andrea; Areán, Daniel; Goutéraux, Blaise; Musso, Daniele

    2018-04-01

    In contrast to metals with weak disorder, the resistivity of weakly pinned charge density waves (CDWs) is not controlled by irrelevant processes relaxing momentum. Instead, the leading contribution is governed by incoherent, diffusive processes which do not drag momentum and can be evaluated in the clean limit. We compute analytically the dc resistivity for a family of holographic charge density wave quantum critical phases and discuss its temperature scaling. Depending on the critical exponents, the ground state can be conducting or insulating. We connect our results to dc electrical transport in underdoped cuprate high Tc superconductors. We conclude by speculating on the possible relevance of unstable, semilocally critical CDW states to the strange metallic region.

  8. dc Resistivity of Quantum Critical, Charge Density Wave States from Gauge-Gravity Duality.

    PubMed

    Amoretti, Andrea; Areán, Daniel; Goutéraux, Blaise; Musso, Daniele

    2018-04-27

    In contrast to metals with weak disorder, the resistivity of weakly pinned charge density waves (CDWs) is not controlled by irrelevant processes relaxing momentum. Instead, the leading contribution is governed by incoherent, diffusive processes which do not drag momentum and can be evaluated in the clean limit. We compute analytically the dc resistivity for a family of holographic charge density wave quantum critical phases and discuss its temperature scaling. Depending on the critical exponents, the ground state can be conducting or insulating. We connect our results to dc electrical transport in underdoped cuprate high Tc superconductors. We conclude by speculating on the possible relevance of unstable, semilocally critical CDW states to the strange metallic region.

  9. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
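    In generic constrained form (standard notation, ours rather than the paper's): for state u, design parameters p, governing equation g(u, p) = 0, and objective J(u, p), one adjoint solve yields the sensitivity with respect to every component of p:

```latex
\[
\left(\frac{\partial g}{\partial u}\right)^{\!T}\lambda
  = \left(\frac{\partial J}{\partial u}\right)^{\!T},
\qquad
\frac{dJ}{dp}
  = \frac{\partial J}{\partial p} \;-\; \lambda^{T}\,\frac{\partial g}{\partial p},
\]
```

    and it is the weak form of this adjoint problem that supplies the appropriate boundary conditions for \(\lambda\) without a separate derivation.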

  10. The internet worm

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.

  11. Neural Computation and the Computational Theory of Cognition

    ERIC Educational Resources Information Center

    Piccinini, Gualtiero; Bahar, Sonya

    2013-01-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism--neural processes are computations in the…

  12. Mirror neurons and imitation: a computationally guided review.

    PubMed

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael

    2006-04-01

    Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.

  13. The jamming avoidance response in the weakly electric fish Eigenmannia

    NASA Astrophysics Data System (ADS)

    Heiligenberg, Walter

    1980-10-01

    This study analyzes the algorithm by which the animal's nervous system evaluates spatially distributed temporal patterns of electroreceptive information. The outcome of this evaluation controls the jamming avoidance response, which is a shift in the animal's electric organ discharge frequency away from similar foreign frequencies. The encoding of “behaviorally relevant” stimulus variables by electroreceptors and the central computation of their messages are investigated by combined behavioral and neurophysiological strategies.

  14. Investigating Mental Workload Changes in a Long Duration Supervisory Control Task

    DTIC Science & Technology

    2015-05-06

    attention to local and global target features. Brain Cogn., 81, 370–375. Derosière, G., Mandrick, K., Dray, G., Ward, T.E. and Perrey, S. (2013) NIRS-measured prefrontal cortex activity in neuroergonomics: strengths and weaknesses. Front. Hum. Neurosci., 7, 583. Durantin, G., Gagnon, J.-F., Tremblay... Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience, San Diego, CA. Interacting with Computers, Vol. 27 No. 5, 2015 by

  15. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault-tolerant, digital, computer-based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) assessment technologies were developed to mitigate a serious weakness in the design and evaluation process for ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  16. Practical scheme for error control using feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  17. Titration Curves: Fact and Fiction.

    ERIC Educational Resources Information Center

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
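    The spreadsheet computation the article describes for the buffer region of a weak acid titrated with a strong alkali can be sketched in a few lines (a Henderson-Hasselbalch approximation; the function name and the acetic-acid example are ours, and the formula holds only well away from the start and the equivalence point):

```python
import math

def buffer_pH(n_acid_mmol, n_base_mmol, pKa):
    """pH of a weak acid partially neutralized by strong base, via
    Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
    Ignores activity corrections and water autoionization."""
    n_salt = n_base_mmol                  # mmol of conjugate base formed
    n_ha = n_acid_mmol - n_base_mmol      # mmol of weak acid remaining
    return pKa + math.log10(n_salt / n_ha)

# At half-neutralization the pH equals the pKa:
print(buffer_pH(1.0, 0.5, 4.76))  # 4.76 (acetic acid example)
```

    Evaluating this over a range of added-base volumes reproduces the flat buffer region of the curve; the sharp jump near equivalence requires the full charge-balance treatment instead.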

  18. Supramolecular features of 2-(chlorophenyl)-3-[(chlorobenzylidene)-amino]-2,3-dihydroquinazolin-4(1H)-ones: A combined experimental and computational study

    NASA Astrophysics Data System (ADS)

    Mandal, Arkalekha; Patel, Bhisma K.

    2018-03-01

    The molecular structures of two isomeric 2-(chlorophenyl)-3-[(chlorobenzylidene)-amino] substituted 2,3-dihydroquinazolin-4(1H)-ones have been determined via single-crystal XRD. Both isomers contain chloro substitutions on each of the phenyl rings, and as a result a broad spectrum of halogen-mediated weak interactions is viable in their crystal structures. The crystal packing of these compounds is stabilized by a strong N-H⋯O hydrogen bond and various weak, non-classical hydrogen bonds acting synergistically. Both molecules contain a chiral center, and the weak interactions observed in them are either chiral self-discriminatory or chiral self-recognizing in nature. The weak interactions and spectral features of the compounds have been studied through experimental as well as computational methods, including DFT, MEP, NBO and Hirshfeld surface analyses. In addition, the effect of different weak interactions in dictating either chiral self-recognition or self-discrimination in crystal packing has been elucidated.

  19. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1993-01-01

    Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered, governed respectively by vibrational relaxation, weak dissociation, strong dissociation, and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.

  20. 0-π phase-controllable thermal Josephson junction

    NASA Astrophysics Data System (ADS)

    Fornieri, Antonio; Timossi, Giuliano; Virtanen, Pauli; Solinas, Paolo; Giazotto, Francesco

    2017-05-01

    Two superconductors coupled by a weak link support an equilibrium Josephson electrical current that depends on the phase difference ϕ between the superconducting condensates. Yet, when a temperature gradient is imposed across the junction, the Josephson effect manifests itself through a coherent component of the heat current that flows opposite to the thermal gradient for |ϕ| < π/2 (refs 2-4). The direction of both the Josephson charge and heat currents can be inverted by adding a π shift to ϕ. In the static electrical case, this effect has been obtained in a few systems, for example via a ferromagnetic coupling or a non-equilibrium distribution in the weak link. These structures opened new possibilities for superconducting quantum logic and ultralow-power superconducting computers. Here, we report the first experimental realization of a thermal Josephson junction whose phase bias can be controlled from 0 to π. This is obtained thanks to a superconducting quantum interferometer that allows full control of the direction of the coherent energy transfer through the junction. This possibility, in conjunction with the completely superconducting nature of our system, provides temperature modulations with an unprecedented amplitude of ∼100 mK and transfer coefficients exceeding 1 K per flux quantum at 25 mK. Then, this quantum structure represents a fundamental step towards the realization of caloritronic logic components such as thermal transistors, switches and memory devices. These elements, combined with heat interferometers and diodes, would complete the thermal conversion of the most important phase-coherent electronic devices and benefit cryogenic microcircuits requiring energy management, such as quantum computing architectures and radiation sensors.
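    In schematic form, the phase-coherent component of the heat current through a temperature-biased Josephson junction behaves as (standard notation; a sketch of the generic effect, not the specific device model used in the paper):

```latex
\[
\dot{Q}(\varphi) \;=\; \dot{Q}_{\mathrm{qp}}(T_1,T_2)\;-\;\dot{Q}_{\mathrm{int}}(T_1,T_2)\,\cos\varphi ,
\]
```

    where \(\dot{Q}_{\mathrm{qp}}\) is the incoherent quasiparticle contribution and the \(\cos\varphi\) interference term is the component that a 0 to \(\pi\) phase shift inverts, reversing the direction of the coherent energy transfer.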

  1. FAA computer security : recommendations to address continuing weaknesses

    DOT National Transportation Integrated Search

    2000-12-01

    In September, testimony before the Committee on Science, House of Representatives, focused on the Federal Aviation Administration's (FAA) computer security program. In brief, we reported that FAA's agency-wide computer security program has serious, p...

  2. Structure of weakly 2-dependent siphons

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh; Chen, Jiun-Ting

    2013-09-01

    Deadlocks arising from insufficiently marked siphons in flexible manufacturing systems can be controlled by adding monitors to each siphon - too many for large systems. Li and Zhou add monitors to elementary siphons only, while controlling the remaining (dependent) siphons by adjusting the control depth variables of elementary siphons, so that only a linear number of monitors is required. The control of weakly dependent siphons (WDSs) is rather conservative since only positive terms were considered. The structure of strongly dependent siphons (SDSs) has been studied earlier. Based on this structure, the optimal sequence of adding monitors was discovered, better controllability was achieved for faster and more permissive control, and the results were extended to S3PGR2 (systems of simple sequential processes with general resource requirements). This paper explores the structures of WDSs, which, as found in this paper, involve elementary resource circuits interconnecting at more than (for SDSs, exactly) one resource place. This saves the time needed to compute compound siphons, their complementary sets and T-characteristic vectors. It also allows us (1) to improve the controllability of WDSs and control siphons and (2) to avoid the time needed to find independent vectors for elementary siphons. We propose a sufficient and necessary test for adjusting control depth variables in S3PR (systems of simple sequential processes with resources) to avoid the sufficient-only, time-consuming linear integer programming (LIP) test (a Nondeterministic Polynomial time (NP)-complete problem) required previously for some cases.
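
    The siphon definition itself is easy to state and check on small nets, and the brute-force enumeration below (exponential in the number of places, which is exactly why bounding the number of monitors matters) illustrates the textbook definition only, not Li and Zhou's elementary-siphon construction:

```python
from itertools import combinations

def siphons(places, pre, post):
    """Enumerate all siphons of a Petri net by brute force.

    pre[t]  = input places of transition t, post[t] = output places of t.
    A nonempty place set S is a siphon iff every transition that outputs
    into S also takes some input from S (in symbols: the preset of S is
    contained in the postset of S).
    """
    found = []
    for r in range(1, len(places) + 1):
        for combo in combinations(places, r):
            S = set(combo)
            if all(pre[t] & S for t in pre if post[t] & S):
                found.append(S)
    return found

# tiny two-place cycle: a token circulates, and the only siphon is {p1, p2}
net_pre = {'t1': {'p1'}, 't2': {'p2'}}
net_post = {'t1': {'p2'}, 't2': {'p1'}}
result = siphons(['p1', 'p2'], net_pre, net_post)
```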

  3. An effective and secure key-management scheme for hierarchical access control in E-medicine system.

    PubMed

    Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit

    2013-04-01

    Recently, several hierarchical access control schemes have been proposed in the literature to provide security for e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. To remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs a huge computational cost for verifying public information in the public domain, as it uses ECC digital signatures, which are costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against various attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems in our scheme are also solved efficiently compared to other related schemes, making our scheme well suited for practical e-medicine systems.
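
    As a minimal sketch of the general hash-based idea (not the authors' actual scheme), a key hierarchy can be built so that any class can recompute the keys of its descendants with a one-way hash, while the hash prevents moving back up the hierarchy. The class names below are illustrative assumptions:

```python
import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    """Derive a child class key from its parent's key with a one-way hash.

    An ancestor holding parent_key can recompute every descendant key,
    but the one-way hash prevents recovering parent_key from a child key.
    """
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# hypothetical hospital hierarchy: root -> radiology -> mri-unit
root = hashlib.sha256(b"master-secret").digest()
radiology = derive_key(root, "radiology")
mri_unit = derive_key(radiology, "mri-unit")
```

Derivation is deterministic, so no per-class key material needs to be stored below the root; real schemes add per-class secrets and formal proofs on top of this skeleton.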

  4. A weak Hamiltonian finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Calise, Anthony J.; Bless, Robert R.; Leung, Martin

    1989-01-01

    A temporal finite-element method based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of nodal values and simple shape functions. Time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are virtual states and virtual costates. Numerical results are presented for an elementary trajectory optimization problem; they show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The feasibility of this approach for real-time guidance applications is evaluated. A simplified model for an advanced launch vehicle application that is suitable for finite-element solution is presented.

  5. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai (Bombay).

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    Computer usage capabilities of medical students for the introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data were classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. Computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, yielding a computer usage score (CUS - maximum 55, minimum 11) and an attitude score (AS - maximum 60, minimum 12). The quartile distributions of CUS and AS among the groups were compared by chi-squared tests, and the correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty-eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly fewer computer resources than local students (P<0.0001). The mean CUS for local students (27.0±9.2, mean±SD) was significantly higher than for outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive but weak correlations for all subgroups, which could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with fewer computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.
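
    The scoring described above can be sketched as follows. Only the score ranges (CUS 11-55 from 11 items, AS 12-60 from 12 items) and the use of a Pearson correlation follow the abstract; the response data are hypothetical:

```python
import numpy as np

def likert_score(responses):
    """Sum 5-point Likert responses (each item scored 1..5)."""
    assert all(1 <= x <= 5 for x in responses)
    return sum(responses)

# hypothetical responses for three students: 11 usage items, 12 attitude items
usage = [[3] * 11, [5] * 11, [1] * 11]
attitude = [[4] * 12, [5] * 12, [2] * 12]

cus = [likert_score(s) for s in usage]      # computer usage score, 11..55
aas = [likert_score(s) for s in attitude]   # attitude score, 12..60
r = np.corrcoef(cus, aas)[0, 1]             # Pearson correlation of CUS vs AS
```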

  6. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which demand increasingly computationally expensive methods for analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload of solving the LMIs may be shared among processors located at the network nodes, increasing the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
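
    The stability certificate underlying such LMI conditions is the Lyapunov equation. As a minimal sketch (the paper's actual distributed LMI design is solved with MATLAB toolboxes), one can solve A^T P + P A = -Q for a hypothetical Hurwitz node matrix A and verify that P is positive definite:

```python
import numpy as np

def lyapunov_P(A, Q=None):
    """Solve A^T P + P A = -Q by vectorisation, using the identities
    vec(A^T P) = (I kron A^T) vec(P) and vec(P A) = (A^T kron I) vec(P)."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# hypothetical stabilised node dynamics: A is Hurwitz (eigenvalues -1, -2),
# so P must be symmetric positive definite, certifying stability
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = lyapunov_P(A)
eigs = np.linalg.eigvalsh((P + P.T) / 2)
```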

  7. Weak mixing below the weak scale in dark-matter direct detection

    NASA Astrophysics Data System (ADS)

    Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure

    2018-02-01

    If dark matter couples predominantly to the axial-vector currents with heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group.
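
    For a single Wilson coefficient with one-loop anomalous dimension γ^(0), the leading-logarithmic resummation referred to above takes the standard form below. This is a generic sketch in one common convention (signs and normalisations vary between references), not the paper's specific operator basis:

```latex
\mu \frac{dC}{d\mu} = \gamma\, C, \qquad
\gamma = \gamma^{(0)} \frac{\alpha_s}{4\pi}, \qquad
\mu \frac{d\alpha_s}{d\mu} = -2\beta_0 \frac{\alpha_s^2}{4\pi}
\;\;\Longrightarrow\;\;
C(\mu) = \left[\frac{\alpha_s(\mu_0)}{\alpha_s(\mu)}\right]^{\gamma^{(0)}/(2\beta_0)} C(\mu_0),
```

which resums the whole tower of terms of the form $(\alpha_s \ln \mu/\mu_0)^n$ rather than keeping only the first logarithm.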

  8. An assessment of PERT as a technique for schedule planning and control

    NASA Technical Reports Server (NTRS)

    Sibbers, C. W.

    1982-01-01

    The PERT technique including the types of reports which can be computer generated using the NASA/LaRC PPARS System is described. An assessment is made of the effectiveness of PERT on various types of efforts as well as for specific purposes, namely, schedule planning, schedule analysis, schedule control, monitoring contractor schedule performance, and management reporting. This assessment is based primarily on the author's knowledge of the usage of PERT by NASA/LaRC personnel since the early 1960's. Both strengths and weaknesses of the technique for various applications are discussed. It is intended to serve as a reference guide for personnel performing project planning and control functions and technical personnel whose responsibilities either include schedule planning and control or require a general knowledge of the subject.

  9. Demonstration of entanglement of electrostatically coupled singlet-triplet qubits.

    PubMed

    Shulman, M D; Dial, O E; Harvey, S P; Bluhm, H; Umansky, V; Yacoby, A

    2012-04-13

    Quantum computers have the potential to solve certain problems faster than classical computers. To exploit their power, it is necessary to perform interqubit operations and generate entangled states. Spin qubits are a promising candidate for implementing a quantum processor because of their potential for scalability and miniaturization. However, their weak interactions with the environment, which lead to their long coherence times, make interqubit operations challenging. We performed a controlled two-qubit operation between singlet-triplet qubits using a dynamically decoupled sequence that maintains the two-qubit coupling while decoupling each qubit from its fluctuating environment. Using state tomography, we measured the full density matrix of the system and determined the concurrence and the fidelity of the generated state, providing proof of entanglement.
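
    The concurrence quoted above can be computed from a measured two-qubit density matrix with Wootters' formula. The sketch below checks it on an ideal singlet state rather than the experimental data:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # square roots of the (real, non-negative) eigenvalues of rho * rho_tilde
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# maximally entangled singlet (|01> - |10>)/sqrt(2): concurrence 1
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi.conj())
```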

  10. Optimal state transfer of a single dissipative two-level system

    NASA Astrophysics Data System (ADS)

    Jirari, Hamza; Wu, Ning

    2016-04-01

    Optimal state transfer of a single two-level system (TLS) coupled to an Ohmic boson bath via off-diagonal TLS-bath coupling is studied by using optimal control theory. In the weak system-bath coupling regime where the time-dependent Bloch-Redfield formalism is applicable, we obtain the Bloch equation to probe the evolution of the dissipative TLS in the presence of a time-dependent external control field. By using the automatic differentiation technique to compute the gradient for the cost functional, we calculate the optimal transfer integral profile that can achieve an ideal transfer within a dimer system in the Fenna-Matthews-Olson (FMO) model. The robustness of the control profile against temperature variation is also analyzed.
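
    A stripped-down version of gradient-based pulse optimisation can be sketched for a closed (dissipation-free) two-level system, with finite differences standing in for the automatic differentiation used in the paper. The pulse discretisation, initial guess and learning rate are illustrative assumptions:

```python
import numpy as np

def propagate(omegas, dt):
    """Evolve |0> under H = Omega_k * sigma_x / 2 on each time slice."""
    psi = np.array([1.0 + 0j, 0.0])
    for om in omegas:
        th = om * dt / 2
        U = np.array([[np.cos(th), -1j * np.sin(th)],
                      [-1j * np.sin(th), np.cos(th)]])
        psi = U @ psi
    return psi

def infidelity(omegas, dt):
    """1 - population transferred to the target state |1>."""
    return 1.0 - abs(propagate(omegas, dt)[1]) ** 2

dt, eps, lr = 0.1, 1e-6, 2.0
om = np.zeros(20) + 0.3            # hypothetical initial pulse, 20 slices
for _ in range(200):               # plain gradient descent on the cost
    base = infidelity(om, dt)
    grad = np.array([(infidelity(om + eps * np.eye(20)[k], dt) - base) / eps
                     for k in range(20)])
    om -= lr * grad
```

The optimiser drives the total pulse area towards a pi rotation, i.e. a near-perfect |0> to |1> transfer; the paper does the analogous optimisation for a dissipative TLS under the Bloch-Redfield equation.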

  11. General relativistic corrections to the weak lensing convergence power spectrum

    NASA Astrophysics Data System (ADS)

    Giblin, John T.; Mertens, James B.; Starkman, Glenn D.; Zentner, Andrew R.

    2017-11-01

    We compute the weak lensing convergence power spectrum, C_ℓ^κκ, in a dust-filled universe using fully nonlinear general relativistic simulations. The spectrum is then compared to more standard, approximate calculations by computing the Bardeen (Newtonian) potentials in linearized gravity and partially utilizing the Born approximation. We find corrections to the angular power spectrum amplitude of order ten percent at very large angular scales, ℓ ~ 2-3, and percent-level corrections at intermediate angular scales of ℓ ~ 20-30.
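
    For reference, the standard Born/Limber baseline against which such corrections are usually measured is, for a single source plane at comoving distance χ_s in a flat universe (a generic textbook form, not the paper's exact expressions):

```latex
C_\ell^{\kappa\kappa} \;\simeq\; \int_0^{\chi_s} d\chi\, \frac{W^2(\chi)}{\chi^2}\,
P_\delta\!\left(k = \frac{\ell + 1/2}{\chi},\, \chi\right),
\qquad
W(\chi) = \frac{3 H_0^2 \Omega_m}{2 c^2}\, \frac{\chi\,(\chi_s - \chi)}{\chi_s\, a(\chi)} ,
```

where $P_\delta$ is the matter power spectrum; the relativistic corrections reported above are deviations from this approximation at low $\ell$.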

  12. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  13. Understanding the interplay of weak forces in [3,3]-sigmatropic rearrangement for stereospecific synthesis of diamines.

    PubMed

    So, Soon Mog; Mui, Leo; Kim, Hyunwoo; Chin, Jik

    2012-08-21

    Chiral diamines are important building blocks for constructing stereoselective catalysts, including transition metal based catalysts and organocatalysts that facilitate oxidation, reduction, hydrolysis, and C-C bond forming reactions. These molecules are also critical components in the synthesis of drugs, including antiviral agents such as Tamiflu and Relenza and anticancer agents such as oxaliplatin and nutlin-3. The diaza-Cope rearrangement reaction provides one of the most versatile methods for rapidly generating a wide variety of chiral diamines stereospecifically and under mild conditions. Weak forces such as hydrogen bonding, electronic, steric, oxyanionic, and conjugation effects can drive this equilibrium process to completion. In this Account, we examine the effect of these individual weak forces on the value of the equilibrium constant for the diaza-Cope rearrangement reaction using both computational and experimental methods. The availability of a wide variety of aldehydes and diamines allows for the facile synthesis of the diimines needed to study the weak forces. Furthermore, because the reaction generally takes place cleanly at ambient temperature, we can easily measure equilibrium constants for rearrangement of the diimines. We use the Hammett equation to further examine the electronic and oxyanionic effects. In addition, computations and experiments provide us with new insights into the origin and extent of stereospecificity for this rearrangement reaction. The diaza-Cope rearrangement, with its unusual interplay between weak forces and the equilibrium constant of the reaction, provides a rare opportunity to study the effects of the fundamental weak forces on a chemical reaction. 
Among these many weak forces that affect the diaza-Cope rearrangement, the anion effect is the strongest (10.9 kcal/mol) followed by the resonance-assisted hydrogen-bond effect (7.1 kcal/mol), the steric effect (5.7 kcal/mol), the conjugation effect (5.5 kcal/mol), and the electronic effect (3.2 kcal/mol). Based on both computation and experimental data, the effects of these weak forces are additive. Understanding the interplay of the weak forces in the [3,3]-sigmatropic reaction is interesting in its own right and also provides valuable insights for the synthesis of chiral diamine based drugs and catalysts in excellent yield and enantiopurity.

  14. Learning and tuning fuzzy logic controllers through reinforcements.

    PubMed

    Berenji, H R; Khedkar, P

    1992-01-01

    A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous cart-pole balancing schemes in terms of the speed of learning and robustness to changes in the dynamic system's parameters.

  15. A Novel Continuation Power Flow Method Based on Line Voltage Stability Index

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfang; He, Yuqing; He, Hongbin; Jiang, Zhuohan

    2018-01-01

    A novel continuation power flow method based on a line voltage stability index is proposed in this paper. The line voltage stability index is used to select the parameterised lines and is continuously updated as the load changes. The calculation stages of the continuation power flow are determined by the angle changes of the direction vector of the prediction equation. An adaptive step-length control strategy is used to calculate the next prediction direction and value according to the calculation stage. The proposed method has a clear physical concept and high computing speed, and it accounts for the local characteristics of voltage instability, reflecting the weak nodes and weak areas in a power system. Because the PV curves are computed more completely, the proposed method offers advantages in analysing the voltage stability margin of large-scale power grids.
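
    A toy continuation on the classic two-bus PV (nose) curve illustrates the kind of curve tracing involved. The model (source E behind a reactance X feeding a P + jQ load) and the loading direction are illustrative assumptions, not the paper's index or parameterisation:

```python
import numpy as np

def load_voltage(P, Q, E=1.0, X=0.1):
    """High-voltage root of the two-bus load-flow equation
    V^4 + (2QX - E^2) V^2 + X^2 (P^2 + Q^2) = 0; None past the nose point."""
    b = 2 * Q * X - E ** 2
    disc = b ** 2 - 4 * X ** 2 * (P ** 2 + Q ** 2)
    if disc < 0:
        return None                      # voltage collapse: no real solution
    return np.sqrt((-b + np.sqrt(disc)) / 2)

# trace the PV curve by stepping the loading parameter lam until the nose
pv = []
for lam in np.arange(0.0, 10.0, 0.05):
    V = load_voltage(P=lam, Q=0.2 * lam)
    if V is None:
        break
    pv.append((lam, V))
```

A full continuation power flow adds the predictor-corrector machinery and step-length control so it can round the nose instead of stopping there.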

  16. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  17. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods such as Gröbner basis, characteristic set and resultant, in computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulted from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
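
    The Sylvester resultant used in the decomposition can be illustrated in the simplest univariate, numeric case as a matrix determinant; in the paper's multivariate setting the entries would themselves be polynomials in the remaining variables:

```python
import numpy as np

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials via the Sylvester matrix.

    f, g are coefficient lists, highest degree first. The resultant is
    zero exactly when f and g share a common root.
    """
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of f's coefficients
        S[i, i:i + m + 1] = f
    for i in range(m):                 # m shifted copies of g's coefficients
        S[n + i, i:i + n + 1] = g
    return np.linalg.det(S)

# f = x^2 - 1, g = x - 2: Res(f, g) = g(1) * g(-1) = (-1) * (-3) = 3
res = sylvester_resultant([1, 0, -1], [1, -2])
```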

  18. Reliability of chemotherapy preparation processes: Evaluating independent double-checking and computer-assisted gravimetric control.

    PubMed

    Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal

    2017-03-01

    Background and objectives: Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method: A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results: Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations ranged from 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. A high variability was observed between operators. Discussion: Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.
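
    The accuracy bands used in the analysis can be sketched as a small classifier; how the exact 5%, 10% and 30% boundaries are assigned is an assumption here, as the abstract does not state it:

```python
def classify_dose(measured, target):
    """Classify a preparation by percent deviation from the target
    concentration: <5% accurate, 5-10% weakly accurate, 10-30% inaccurate,
    >30% wrong (boundary handling is an assumption)."""
    dev = abs(measured - target) / target * 100
    if dev < 5:
        return "accurate"
    if dev <= 10:
        return "weakly accurate"
    if dev <= 30:
        return "inaccurate"
    return "wrong"
```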

  19. The Relationship between Sources of Self-Efficacy in Classroom Environments and the Strength of Computer Self-Efficacy Beliefs

    ERIC Educational Resources Information Center

    Srisupawong, Yuwarat; Koul, Ravinder; Neanchaleay, Jariya; Murphy, Elizabeth; Francois, Emmanuel Jean

    2018-01-01

    Motivation and success in computer-science courses are influenced by the strength of students' self-efficacy (SE) beliefs in their learning abilities. Students with weak SE may struggle to be successful in a computer-science course. This study investigated the factors that enhance or impede the computer self-efficacy (CSE) of computer-science…

  20. Bridging online and offline social networks: Multiplex analysis

    NASA Astrophysics Data System (ADS)

    Filiposka, Sonja; Gajduk, Andrej; Dimitrova, Tamara; Kocarev, Ljupco

    2017-04-01

    We show that three basic actor characteristics, namely normalized reciprocity, three cycles, and triplets, can be expressed using a unified framework based on computing the similarity index between two sets associated with the actor: the set of her/his friends and the set of those considering her/him a friend. These metrics are extended to multiplex networks and then computed for two friendship networks generated by collecting data from two groups of undergraduate students. We found that in offline communication strong and weak ties are (almost) equally present, while in online communication weak ties are dominant. Moreover, weak ties are much less reciprocal than strong ties. However, across different layers of the multiplex network reciprocities are preserved, while triads (measured with normalized three cycles and triplets) are not significant.
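
    One natural choice of similarity index between the two sets mentioned above is the Jaccard index; whether this matches the paper's exact definition is an assumption, but it conveys the idea:

```python
def reciprocity(out_friends, in_friends):
    """Jaccard similarity between the set of people an actor names as
    friends (out_friends) and the set of people naming her/him as a
    friend (in_friends). 1.0 means fully reciprocal ties, 0.0 none."""
    if not out_friends and not in_friends:
        return 0.0
    return len(out_friends & in_friends) / len(out_friends | in_friends)
```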

  1. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
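
    A minimal numerical sketch of the modified SMI idea: subtract a fraction of the noise power from the diagonal of the sample covariance before forming the weights w proportional to R^{-1} s. All scenario numbers (array size, powers, the fraction, and treating the noise power as known) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 1000                       # array elements, snapshots
s = np.ones(N) / np.sqrt(N)          # steering vector of the desired signal

# snapshots = weak interferer along v + complex white noise (hypothetical)
v = np.exp(1j * np.pi * np.arange(N) * 0.5) / np.sqrt(N)
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X = 0.3 * v[:, None] * rng.standard_normal(K) + 0.7 * noise

R = X @ X.conj().T / K               # sample covariance matrix
frac = 0.5                           # fraction of the noise power to subtract
sigma2 = 0.49                        # per-element noise power (known here)
R_mod = R - frac * sigma2 * np.eye(N)
w = np.linalg.solve(R_mod, s)        # modified SMI weights, w ~ R_mod^{-1} s
```

Shrinking the noise floor on the diagonal deepens the nulls placed on weak interferers, which is the effect the thesis analyses as a function of the fraction and the number of snapshots.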

  2. Computers in medical education 2. Use of a computer package to supplement the clinical experience in a surgical clerkship: an objective evaluation.

    PubMed

    Devitt, P; Cehic, D; Palmer, E

    1998-06-01

    Student teaching of surgery has been devolved from the university in an effort to increase and broaden undergraduate clinical experience. In order to ensure uniformity of learning we have defined learning objectives and provided a computer-based package to supplement clinical teaching. A study was undertaken to evaluate the place of computer-based learning in a clinical environment. Twelve modules were provided for study during a 6-week attachment. These covered clinical problems related to cardiology, neurosurgery and gastrointestinal haemorrhage. Eighty-four fourth-year students undertook a pre- and post-test assessment on these three topics as well as acute abdominal pain. No extra learning material on the latter topic was provided during the attachment. While all students showed significant improvement in performance in the post-test assessment, those who had access to the computer material performed significantly better than did the controls. Within the topics, students in both groups performed equally well on the post-test assessment of acute abdominal pain but the control group's performance was significantly lacking on the topic of gastrointestinal haemorrhage, suggesting that the bulk of learning on this subject came from the computer material and little from the clinical attachment. This type of learning resource can be used to supplement the student's clinical experience and at the same time monitor what they learn during clinical clerkships and identify areas of weakness.

  3. Controllability of control and mixture weakly dependent siphons in S3PR

    NASA Astrophysics Data System (ADS)

    Hong, Liang; Chao, Daniel Y.

    2013-08-01

    Deadlocks in a flexible manufacturing system modelled by Petri nets arise from insufficiently marked siphons. Monitors are added to control these siphons and avoid deadlocks, but this renders the system too complicated, since the total number of monitors grows exponentially. Li and Zhou propose to add monitors only to elementary siphons while controlling the other (strongly or weakly) dependent siphons by adjusting control depth variables. To avoid generating new siphons, the control arcs are ended at source transitions of process nets. This disturbs the original model more and hence loses more live states. Negative terms in the controllability make the control policy for weakly dependent siphons rather conservative. We studied earlier the controllability of strongly dependent siphons and proposed to add monitors in the order of basic, compound, control, partial mixture and full mixture (strongly dependent) siphons to reduce the number of mixed integer programming iterations and redundant monitors. This article further investigates the controllability of siphons derived from weakly 2-compound siphons. We discover that the controllability for weakly and strongly compound siphons is similar, but this no longer holds for control and mixture siphons. Some control and mixture siphons derived from strongly 2-compound siphons are not redundant; this is no longer so for those derived from weakly 2-compound siphons - that is, all control and mixture siphons derived from weakly 2-compound siphons are redundant. They need not follow the conservative policy proposed by Li and Zhou; thus, we can adopt the maximally permissive control policy even though new siphons are generated.

  4. Scalable service architecture for providing strong service guarantees

    NASA Astrophysics Data System (ADS)

    Christin, Nicolas; Liebeherr, Joerg

    2002-07-01

    For the past decade, a lot of Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, the interest has shifted to service architectures with low overhead. However, these newer service architectures provide only weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires maintaining only little state information. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network; instead, we plan on exploiting the feedback mechanisms of TCP congestion control algorithms to regulate the traffic entering the network.

  5. Scalable photonic quantum computing assisted by quantum-dot spin in double-sided optical microcavity.

    PubMed

    Wei, Hai-Rui; Deng, Fu-Guo

    2013-07-29

    We investigate the possibility of achieving scalable photonic quantum computing by the giant optical circular birefringence induced by a quantum-dot spin in a double-sided optical microcavity as a result of cavity quantum electrodynamics. We construct a deterministic controlled-NOT gate on two photonic qubits by two single-photon input-output processes and the readout on an electron-medium spin confined in an optical resonant microcavity. This idea could be applied to multi-qubit gates on photonic qubits, and we give the quantum circuit for a three-photon Toffoli gate. High fidelities and high efficiencies could be achieved when the ratio of the side leakage to the cavity loss rate is low. It is worth pointing out that our devices work in both the strong and the weak coupling regimes.

  6. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE PAGES

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below the statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of the spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  7. Enhanced fluorescence microscope and its application

    NASA Astrophysics Data System (ADS)

    Wang, Susheng; Li, Qin; Yu, Xin

    1997-12-01

    A high-gain fluorescence microscope is developed to meet needs in medical and biological research. With the help of an image intensifier with a luminance gain of 4×10^4, the sensitivity of the system can reach the 10^-6 lx level, 10^4 times higher than an ordinary fluorescence microscope, so ultra-weak fluorescence images can be detected. The concentration of fluorescent label and the illuminating light intensity of the system are decreased as much as possible, so the natural environment of the detected cell can be preserved. The CCD image-acquisition set-up, controlled by computer, obtains quantitative data for each point according to the gray scale. The relation between luminous intensity and CCD output is obtained by using wide-range weak photometry, so the system not only shows the image of the ultra-weak fluorescence distribution but also gives the fluorescence intensity at each point. Using this system, we obtained images of the distribution of hypocrellin A (HA) in HeLa cells, and images of HeLa cells protected by the antioxidant reagents Vit. E, SF and BHT. The images show that the digitized ultra-sensitive fluorescence microscope is a useful tool for medical and biological research.

  8. Implementation of ternary Shor’s algorithm based on vibrational states of an ion in anharmonic potential

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing

    2015-03-01

    It is widely believed that Shor’s factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for Shor’s algorithm implementation and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor’s algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
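    The quantum circuit above implements the period-finding core of Shor's algorithm; for the paper's example instance N = 21, the classical pre- and post-processing steps can be sketched as follows. This is a purely classical illustration of the number theory the algorithm relies on (the period is found by brute force here, standing in for the quantum Fourier transform step), not a simulation of the ternary circuit itself:

    ```python
    from math import gcd

    def find_period(a, n):
        """Find the multiplicative order r of a mod n, i.e. the smallest
        r > 0 with a**r = 1 (mod n). This is the quantity a quantum
        computer extracts efficiently; here we find it by iteration."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(a, n):
        """Classical post-processing of Shor's algorithm: from an even
        period r of a mod n, extract a nontrivial factor of n via
        gcd(a**(r/2) - 1, n) or gcd(a**(r/2) + 1, n)."""
        r = find_period(a, n)
        if r % 2:  # need an even period; another base a must be tried
            return None
        y = pow(a, r // 2, n)
        f = gcd(y - 1, n)
        return f if 1 < f < n else gcd(y + 1, n)

    # The paper's instance: factorizing 21. The order of 2 mod 21 is 6,
    # and gcd(2**3 - 1, 21) = 7.
    print(shor_factor(2, 21))  # -> 7, so 21 = 3 x 7
    ```

    The choice of base a = 2 is illustrative; any base coprime to 21 with an even period works the same way.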

  9. Geography Students Assess Their Learning Using Computer-Marked Tests.

    ERIC Educational Resources Information Center

    Hogg, Jim

    1997-01-01

    Reports on a pilot study designed to assess the potential of computer-marked tests for allowing students to monitor their learning. Students' answers to multiple choice tests were fed into a computer that provided a full analysis of their strengths and weaknesses. Students responded favorably to the feedback. (MJP)

  10. CAI at CSDF: Organizational Strategies.

    ERIC Educational Resources Information Center

    Irwin, Margaret G.

    1982-01-01

    The computer assisted instruction (CAI) program at the California School for the Deaf, at Fremont, features individual Apple computers in classrooms as well as in CAI labs. When the whole class uses computers simultaneously, the teacher can help individuals, identify group weaknesses, note needs of the materials, and help develop additional CAI…

  11. Structural Stability of Mathematical Models of National Economy

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar A.; Sultanov, Bahyt T.; Borovskiy, Yuriy V.; Adilov, Zheksenbek M.; Ashimov, Askar A.

    2011-12-01

    In the paper we test the robustness of particular dynamic systems in compact regions of a plane and the weak structural stability of one high-order dynamic system in a compact region of its phase space. The tests were carried out based on the fundamental theory of dynamical systems on a plane and on conditions for the weak structural stability of high-order dynamic systems. A numerical algorithm for testing the weak structural stability of high-order dynamic systems has been proposed. Based on this algorithm, we assess the weak structural stability of one computable general equilibrium model.

  12. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
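    The computer-simulated experiments described above can be sketched in miniature. This is a hedged reconstruction, not the authors' code: it draws control and treated samples of size 9 from normal populations, applies the pooled two-sample t-test at p = 5% (critical value 2.120 for df = 16), and estimates the rejection rate, which gives the Type I error when the effect is zero and the power when it is not:

    ```python
    import random
    from math import sqrt

    def t_statistic(x, y):
        """Pooled two-sample t statistic."""
        n, m = len(x), len(y)
        mx, my = sum(x) / n, sum(y) / m
        vx = sum((v - mx) ** 2 for v in x) / (n - 1)
        vy = sum((v - my) ** 2 for v in y) / (m - 1)
        sp2 = ((n - 1) * vx + (m - 1) * vy) / (n + m - 2)  # pooled variance
        return (mx - my) / sqrt(sp2 * (1 / n + 1 / m))

    def rejection_rate(n, effect, trials=10000, t_crit=2.120):
        """Fraction of simulated experiments in which the two-sided
        t-test rejects equality of means. t_crit = 2.120 is the 5%
        critical value for df = 16, i.e. n = 9 per group. 'effect'
        shifts the treated group's mean, in units of the population SD."""
        random.seed(1)
        hits = 0
        for _ in range(trials):
            ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
            trt = [random.gauss(effect, 1.0) for _ in range(n)]
            if abs(t_statistic(ctrl, trt)) > t_crit:
                hits += 1
        return hits / trials

    print(rejection_rate(9, 0.0))  # Type I error, close to 0.05
    print(rejection_rate(9, 2.0))  # power for a strong (2 SD) effect, near 1
    ```

    Repeating the run with n = 3-5 and a weak effect (e.g. 0.5 SD) reproduces the qualitative finding that weak effects cannot be detected with acceptable error at those sizes.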

  13. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-04-01

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  14. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  15. Learning and tuning fuzzy logic controllers through reinforcements

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap

    1992-01-01

    A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcements, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.

  16. Asymptotically suboptimal control of weakly interconnected dynamical systems

    NASA Astrophysics Data System (ADS)

    Dmitruk, N. M.; Kalinin, A. I.

    2016-10-01

    Optimal control problems for a group of systems with weak dynamical interconnections between its constituent subsystems are considered. A method for decentralized control is proposed which distributes the control actions among several controllers, each calculating in real time the control inputs only for its own subsystem based on the solution of a local optimal control problem. The local problem is solved by asymptotic methods that represent the weak interconnection by a small parameter. Combining decentralized control and asymptotic methods significantly reduces the dimension of the problems that have to be solved in the course of the control process.

  17. Simulation of Nonlinear Instabilities in an Attachment-Line Boundary Layer

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.

    1996-01-01

    The linear and the nonlinear stability of disturbances that propagate along the attachment line of a three-dimensional boundary layer is considered. The spatially evolving disturbances in the boundary layer are computed by direct numerical simulation (DNS) of the unsteady, incompressible Navier-Stokes equations. Disturbances are introduced either by forcing at the inflow or by applying suction and blowing at the wall. Quasi-parallel linear stability theory and a nonparallel theory yield notably different stability characteristics for disturbances near the critical Reynolds number; the DNS results confirm the latter theory. Previously, a weakly nonlinear theory and computations revealed a high-wave-number region of subcritical disturbance growth. More recent computations have failed to achieve this subcritical growth. The present computational results indicate the presence of subcritically growing disturbances; the results support the weakly nonlinear theory. Furthermore, an explanation is provided for the previous theoretical and computational discrepancy. In addition, the present results demonstrate that steady suction can be used to stabilize disturbances that otherwise grow subcritically along the attachment line.

  18. Standard model anatomy of WIMP dark matter direct detection. I. Weak-scale matching

    NASA Astrophysics Data System (ADS)

    Hill, Richard J.; Solon, Mikhail P.

    2015-02-01

    We present formalism necessary to determine weak-scale matching coefficients in the computation of scattering cross sections for putative dark matter candidates interacting with the Standard Model. We pay particular attention to the heavy-particle limit. A consistent renormalization scheme in the presence of nontrivial residual masses is implemented. Two-loop diagrams appearing in the matching to gluon operators are evaluated. Details are given for the computation of matching coefficients in the universal limit of WIMP-nucleon scattering for pure states of arbitrary quantum numbers, and for singlet-doublet and doublet-triplet mixed states.

  19. Dual control active superconductive devices

    DOEpatents

    Martens, Jon S.; Beyer, James B.; Nordman, James E.; Hohenwarter, Gert K. G.

    1993-07-20

    A superconducting active device has dual control inputs and is constructed such that the output of the device is effectively a linear mix of the two input signals. The device is formed of a film of superconducting material on a substrate and has two main conduction channels, each of which includes a weak link region. A first control line extends adjacent to the weak link region in the first channel and a second control line extends adjacent to the weak link region in the second channel. The current flowing from the first channel flows through an internal control line which is also adjacent to the weak link region of the second channel. The weak link regions comprise small links of superconductor, separated by voids, through which the current flows in each channel. Current passed through the control lines causes magnetic flux vortices which propagate across the weak link regions and control the resistance of these regions. The output of the device taken across the input to the main channels and the output of the second main channel and the internal control line will constitute essentially a linear mix of the two input signals imposed on the two control lines. The device is especially suited to microwave applications since it has very low input capacitance, and is well suited to being formed of high temperature superconducting materials since all of the structures may be formed coplanar with one another on a substrate.

  20. Manipulatives and the Computer: A Powerful Partnership for Learners of All Ages.

    ERIC Educational Resources Information Center

    Perl, Teri

    1990-01-01

    Discussed is the concept of mirroring in which computer programs are used to enhance the use of mathematics manipulatives. The strengths and weaknesses of this approach are presented. The uses of the computer in modeling and as a manipulative are also described. Several software packages are suggested. (CW)

  1. Overcoming Microsoft Excel's Weaknesses for Crop Model Building and Simulations

    ERIC Educational Resources Information Center

    Sung, Christopher Teh Boon

    2011-01-01

    Using spreadsheets such as Microsoft Excel for building crop models and running simulations can be beneficial. Excel is easy to use, powerful, and versatile, and it requires the least proficiency in computer programming compared to other programming platforms. Excel, however, has several weaknesses: it does not directly support loops for iterative…

  2. Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.

    PubMed

    Groppe, David M; Urbach, Thomas P; Kutas, Marta

    2011-12-01

    Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
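    Of the four correction methods reviewed above, false discovery rate control is the simplest to illustrate in code. The sketch below is a generic implementation of the Benjamini-Hochberg step-up procedure (not the reviewed MATLAB software), applied to a hypothetical vector of p-values such as a mass univariate analysis would produce, one per channel-by-time-point test:

    ```python
    def benjamini_hochberg(pvals, q=0.05):
        """Benjamini-Hochberg step-up procedure. Returns a boolean mask
        marking which p-values are declared significant while controlling
        the false discovery rate at level q: find the largest rank k with
        p_(k) <= (k/m) * q, then reject the k smallest p-values."""
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= rank / m * q:
                k_max = rank
        reject = [False] * m
        for rank, i in enumerate(order, start=1):
            if rank <= k_max:
                reject[i] = True
        return reject

    # Toy mass univariate result: a few strong effects among many nulls.
    pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.46, 0.74, 0.90]
    print(benjamini_hochberg(pvals))
    # -> [True, True, False, False, False, False, False, False]
    ```

    Note that 0.039 and 0.041 survive an uncorrected 5% threshold but not the FDR-corrected one; this is the multiplicity problem the review addresses.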

  3. Can cloud computing benefit health services? - a SWOT analysis.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  4. Scada Malware, a Proof of Concept

    NASA Astrophysics Data System (ADS)

    Carcano, Andrea; Fovino, Igor Nai; Masera, Marcelo; Trombetta, Alberto

    Critical infrastructures are nowadays exposed to new kinds of threats. The cause of such threats is the large number of new vulnerabilities and architectural weaknesses introduced by the extensive use of ICT and network technologies in such complex critical systems. Of particular interest is the set of vulnerabilities related to the class of communication protocols normally known as “SCADA” protocols, under which fall all the communication protocols used to remotely control the RTU devices of an industrial system. In this paper we present a proof of concept of the potential effects of a set of computer malware specifically designed and created to impact a typical Supervisory Control and Data Acquisition system by taking advantage of some vulnerabilities of the ModBUS protocol.

  5. The Computer Bulletin Board. Modified Gran Plots of Very Weak Acids on a Spreadsheet.

    ERIC Educational Resources Information Center

    Chau, F. T.; And Others

    1990-01-01

    Presented are two applications of computer technology to chemistry instruction: the use of a spreadsheet program to analyze acid-base titration curves and the use of database software to catalog stockroom inventories. (CW)

  6. Ephaptic coupling rescues conduction failure in weakly coupled cardiac tissue with voltage-gated gap junctions

    NASA Astrophysics Data System (ADS)

    Weinberg, S. H.

    2017-09-01

    Electrical conduction in cardiac tissue is usually considered to be primarily facilitated by gap junctions, providing a pathway between the intracellular spaces of neighboring cells. However, recent studies have highlighted the role of coupling via extracellular electric fields, also known as ephaptic coupling, particularly in the setting of reduced gap junction expression. Further, in the setting of reduced gap junctional coupling, voltage-dependent gating of gap junctions, an oft-neglected biophysical property in computational studies, produces a positive feedback that promotes conduction failure. We hypothesized that ephaptic coupling can break the positive feedback loop and rescue conduction failure in weakly coupled cardiac tissue. In a computational tissue model incorporating voltage-gated gap junctions and ephaptic coupling, we demonstrate that ephaptic coupling can rescue conduction failure in weakly coupled tissue. Further, ephaptic coupling increased conduction velocity in weakly coupled tissue, and importantly, reduced the minimum gap junctional coupling necessary for conduction, most prominently at fast pacing rates. Finally, we find that, although neglecting gap junction voltage-gating results in negligible differences in well coupled tissue, more significant differences occur in weakly coupled tissue, greatly underestimating the minimal gap junctional coupling that can maintain conduction. Our study suggests that ephaptic coupling plays a conduction-preserving role, particularly at rapid heart rates.

  7. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    The weak atmospheric turbulence condition in optical wireless communication (OWC) is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
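    The intractable average the paper approximates is E[Q(s·h)], the Gaussian Q-function averaged over a log-normal fading gain h. The paper's closed-form expression is not reproduced here; the sketch below only shows the Monte Carlo baseline it is compared against, under the common normalization E[h] = 1 (an assumption, as the exact conditional BER depends on the modulation scheme):

    ```python
    import random
    from math import erfc, sqrt

    def q_func(x):
        """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * erfc(x / sqrt(2))

    def avg_ber_lognormal(snr, sigma, trials=100000, seed=0):
        """Monte Carlo estimate of the average BER E[Q(snr * h)], where
        the irradiance h is log-normal with ln h ~ N(-sigma^2/2, sigma^2),
        so that E[h] = 1 (weak-turbulence normalization)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            h = rng.lognormvariate(-sigma ** 2 / 2, sigma)
            total += q_func(snr * h)
        return total / trials

    # Turbulence worsens the BER relative to the unfaded channel Q(snr),
    # since Q is convex on the positive axis (Jensen's inequality).
    print(q_func(3.0))
    print(avg_ber_lognormal(3.0, 0.3))
    ```

    A closed-form approximation such as the paper's replaces the sampling loop with a short algebraic expression, which is what makes BER computation for many modulation schemes cheap.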

  8. METCAN simulation of candidate metal matrix composites for high temperature applications

    NASA Technical Reports Server (NTRS)

    Lee, Ho-Jun

    1990-01-01

    The METCAN (Metal Matrix Composite Analyzer) computer code is used to simulate the nonlinear behavior of select metal matrix composites in order to assess their potential for high temperature structural applications. Material properties for seven composites are generated at a fiber volume ratio of 0.33 for two bonding conditions (a perfect bond and a weak interphase case) at various temperatures. A comparison of the two bonding conditions studied shows a general reduction in value of all properties (except CTE) for the weak interphase case from the perfect bond case. However, in the weak interphase case, the residual stresses that develop are considerably less than those that form in the perfect bond case. Results of the computational simulation indicate that among the metal matrix composites examined, SiC/NiAl is the best candidate for high temperature applications at the given fiber volume ratio.

  9. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data we study in this research paper concerns remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network.
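    The HHI is indeed computationally trivial: it is the sum of squared shares of a distribution. A minimal sketch, with hypothetical per-user VPN session counts standing in for the LANL data:

    ```python
    def hhi(counts):
        """Herfindahl-Hirschman Index of an activity distribution: the sum
        of squared shares. It equals 1/n when n entries are evenly spread
        and approaches 1 as activity concentrates in one entry; a sudden
        month-over-month jump can flag anomalous concentration."""
        total = sum(counts)
        return sum((c / total) ** 2 for c in counts)

    # Hypothetical monthly VPN session counts for four users:
    print(hhi([10, 10, 10, 10]))  # evenly spread -> 0.25
    print(hhi([37, 1, 1, 1]))     # concentrated  -> close to 1
    ```

    Used as a lagging indicator, one would compute the HHI per month and look for months whose value departs sharply from the running baseline.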

  10. Time-domain finite elements in optimal control with application to launch-vehicle guidance. PhD. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  11. Considering High-Tech Exhibits?

    ERIC Educational Resources Information Center

    Routman, Emily

    1994-01-01

    Discusses a variety of high-tech exhibit media used in The Living World, an educational facility operated by The Saint Louis Zoo. Considers the strengths and weaknesses of holograms, video, animatronics, video-equipped microscopes, and computer interactives. Computer interactives are treated with special attention. (LZ)

  12. A Study of Attrition and the Use of Student Learning Communities in the Computer Science Introductory Programming Sequence

    ERIC Educational Resources Information Center

    Howles, Trudy

    2009-01-01

    Student attrition and low graduation rates are critical problems in computer science education. Disappointing graduation rates and declining student interest have caught the attention of business leaders, researchers and universities. With weak graduation rates and little interest in scientific computing, many are concerned about the USA's ability…

  13. An Investigation of Human-Computer Interaction Approaches Beneficial to Weak Learners in Complex Animation Learning

    ERIC Educational Resources Information Center

    Yeh, Yu-Fang

    2016-01-01

    Animation is one of the useful contemporary educational technologies in teaching complex subjects. There is a growing interest in proper use of learner-technology interaction to promote learning quality for different groups of learner needs. The purpose of this study is to investigate if an interaction approach supports weak learners, who have…

  14. Propagation Characteristics Of Weakly Guiding Optical Fibers

    NASA Technical Reports Server (NTRS)

    Manshadi, Farzin

    1992-01-01

    Report discusses electromagnetic propagation characteristics of weakly guiding optical-fiber structures having complicated shapes with cross-sectional dimensions of order of wavelength. Coupling, power-dividing, and transition dielectric-waveguide structures analyzed. Basic data computed by scalar-wave, fast-Fourier-transform (SW-FFT) technique, based on numerical solution of scalar version of wave equation by forward-marching fast-Fourier-transform method.

  15. Λ N → NN EFT potentials and hypertriton non-mesonic weak decay

    NASA Astrophysics Data System (ADS)

    Pérez-Obiol, Axel; Entem, David R.; Nogga, Andreas

    2018-05-01

    The potential for the Λ N → NN weak transition, the main process responsible for the non-mesonic weak decay of hypernuclei, has been developed within the framework of effective field theory (EFT) up to next-to-leading order (NLO). The leading order (LO) and NLO contributions have been calculated in both momentum and coordinate space, and have been organised into the different operators which mediate the Λ N → NN transition. We compare the ranges of the one-meson and two-pion exchanges for each operator. The non-mesonic weak decay of the hypertriton has been computed within the plane-wave approximation using the LO weak potential and modern strong EFT NN potentials. Formally, two methods to calculate the final state interactions among the decay products are presented. We briefly comment on the calculation of the ^3_ΛH → ^3He + π^- mesonic weak decay.

  16. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    NASA Astrophysics Data System (ADS)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to assess the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method was a mixed method with a sequential explanatory design. The subjects were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving abilities of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects were also good on each indicator. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with a computer program, was imprecise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps, while the W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators. The W-BPS subject could not precisely construct the initial completion table, so the completion phase of Polya's steps was constrained.

  17. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    NASA Technical Reports Server (NTRS)

    Winternitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak-signal acquisition and tracking, as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows the receiver to remain unpowered until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers sequentially search the two-dimensional signal parameter space (code phase and Doppler). Navigator instead exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a low-Earth-orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built around the radiation-hardened ColdFire microprocessor, with the most computationally intense functions housed in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole is made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
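The FFT-based parallel search described in this record can be sketched in toy form (an illustrative reconstruction of the general technique, not the Navigator implementation; the code sequence, noise level, and shift are invented). Circular correlation against a local code replica evaluates all code phases at once; a real receiver would repeat this over a grid of Doppler bins.

```python
import numpy as np

# Toy sketch of FFT-based parallel code-phase acquisition. Circular
# correlation of the received samples with a local code replica tests ALL
# code phases at once via corr = IFFT(FFT(received) * conj(FFT(code))),
# instead of the sequential code-phase search used by earlier receivers.
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)                # toy PRN-like code
true_shift = 417                                         # unknown code phase
received = np.roll(code, true_shift) + 0.5 * rng.standard_normal(1023)

corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
est_shift = int(np.argmax(np.abs(corr)))                 # peak marks the phase
```

The single FFT/IFFT pair replaces 1023 sequential correlations, which is the source of the speedup the abstract describes.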

  18. Suppression of anomalous synchronization and nonstationary behavior of neural network under small-world topology

    NASA Astrophysics Data System (ADS)

    Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.

    2018-05-01

    It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior in weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and to diminish the nonstationary behavior occurring in a weakly coupled neural network under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology with the probability of adding a nonlocal connection equal to p = 0.001. Based on experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) an external stimulus (pulsed current); (ii) changes in biological parameters (neuron membrane conductance); and (iii) body temperature changes. Phase synchronization is quantified with the Kuramoto order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling without any effect on the individual neuron dynamics or on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
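The Kuramoto order parameter used in this record to quantify phase synchronization is simple to compute; a minimal sketch follows (the phase arrays are illustrative, not output of the Hodgkin-Huxley network):

```python
import numpy as np

# Kuramoto order parameter r = |(1/N) * sum_j exp(i*theta_j)| for N
# oscillator phases theta_j: r -> 1 for full phase synchronization,
# r -> 0 for phases scattered uniformly around the circle.
def kuramoto_order_parameter(theta):
    return float(np.abs(np.mean(np.exp(1j * np.asarray(theta)))))

# Identical phases: fully synchronized, r = 1.
r_sync = kuramoto_order_parameter(np.full(2000, 0.3))
# Phases spread uniformly over [0, 2*pi): r is numerically ~0.
r_desync = kuramoto_order_parameter(
    np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False))
```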

  19. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, and can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  20. Fierz Convergence Criterion: A Controlled Approach to Strongly Interacting Systems with Small Embedded Clusters.

    PubMed

    Ayral, Thomas; Vučičević, Jaksa; Parcollet, Olivier

    2017-10-20

    We present an embedded-cluster method, based on the triply irreducible local expansion formalism. It turns the Fierz ambiguity, inherent to approaches based on a bosonic decoupling of local fermionic interactions, into a convergence criterion. It is based on the approximation of the three-leg vertex by a coarse-grained vertex computed from a self-consistently determined cluster impurity model. The computed self-energies are, by construction, continuous functions of momentum. We show that, in three interaction and doping regimes of the two-dimensional Hubbard model, self-energies obtained with clusters of size four only are very close to numerically exact benchmark results. We show that the Fierz parameter, which parametrizes the freedom in the Hubbard-Stratonovich decoupling, can be used as a quality control parameter. By contrast, the GW+extended dynamical mean field theory approximation with four cluster sites is shown to yield good results only in the weak-coupling regime and for a particular decoupling. Finally, we show that the vertex has spatially nonlocal components only at low Matsubara frequencies.

  1. Computer Based Simulation of Laboratory Experiments.

    ERIC Educational Resources Information Center

    Edward, Norrie S.

    1997-01-01

    Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…

  2. Evaluation of a commercial system for CAMAC-based control of the Chalk River Laboratories tandem-accelerator-superconducting-cyclotron complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, B.F.; Caswell, D.J.; Slater, W.R.

    1992-04-01

    This paper discusses the control system of the Tandem Accelerator Superconducting Cyclotron (TASCC) of AECL Research at its Chalk River Laboratories, which is presently based on a PDP-11 computer and the IAS operating system. The estimated expense of a custom conversion of the system to a current, equivalent operating system is prohibitive. The authors have evaluated a commercial control package from VISTA Control Systems based on VAX microcomputers and the VMS operating system. Vsystem offers a modern, graphical operator interface, an extensive software toolkit for configuration of the system, and a multi-feature data-logging capability, all of which far surpass the functionality of the present control system. However, the implementation of some familiar, practical features that TASCC operators find to be essential has proven to be challenging. The assessment of Vsystem, which is described in terms of presently perceived strengths and weaknesses, is, on balance, very positive.

  3. Computer Literacy Teaching Using Peer Learning and under the Confucian Heritage Cultural Settings of Macao, China

    ERIC Educational Resources Information Center

    Wong, Kelvin; Neves, Ana; Negreiros, Joao

    2017-01-01

    University students in Macao are required to attend computer literacy courses to raise their basic skills levels and knowledge as part of their literacy foundation. Still, teachers frequently complain about the weak IT skills of many students, suggesting that most of them may not be benefiting sufficiently from their computer literacy courses.…

  4. Anisotropic Galaxy-Galaxy Lensing in the Illustris-1 Simulation

    NASA Astrophysics Data System (ADS)

    Brainerd, Tereasa G.

    2017-06-01

    In Cold Dark Matter universes, the dark matter halos of galaxies are expected to be triaxial, leading to a surface mass density that is not circularly symmetric. In principle, this "flattening" of the dark matter halos of galaxies should be observable as an anisotropy in the weak galaxy-galaxy lensing signal. The degree to which the weak lensing signal is observed to be anisotropic, however, will depend strongly on the degree to which mass (i.e., the dark matter) is aligned with light in the lensing galaxies. That is, the anisotropy will be maximized when the major axis of the projected mass distribution is well aligned with the projected light distribution of the lens galaxies. Observational studies of anisotropic galaxy-galaxy lensing have found an anisotropic weak lensing signal around massive, red galaxies. Detecting the signal around blue, disky galaxies has, however, been more elusive. A possible explanation for this is that mass and light are well aligned within red galaxies and poorly aligned within blue galaxies (an explanation that is supported by studies of the locations of satellites of large, relatively isolated galaxies). Here we compute the weak lensing signal of isolated central galaxies in the Illustris-1 simulation. We compute the anisotropy of the weak lensing signal using two definitions of the geometry: [1] the major axis of the projected dark matter mass distribution and [2] the major axis of the projected stellar mass. On projected scales less than 15% of the virial radius, an anisotropy of order 10% is found for both definitions of the geometry. On larger scales, the anisotropy computed relative to the major axis of the projected light distribution is less than the anisotropy computed relative to the major axis of the projected dark matter. 
On projected scales of order the virial radius, the anisotropy obtained when using the major axis of the light is an order of magnitude less than the anisotropy obtained when using the major axis of the dark matter. The suppression of the anisotropy when using the major axis of the light to define the geometry is indicative of a significant misalignment of mass and light in the Illustris-1 galaxies at large physical radii.
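A common way to extract the projected major axes used in anisotropy measurements of this kind is from the second-moment tensor of the projected mass (or stellar) distribution. The following is a hypothetical sketch, not the authors' pipeline; the function name and toy particle data are invented:

```python
import numpy as np

# Hypothetical sketch: the position angle of the projected major axis is
# taken from the leading eigenvector of the mass-weighted second-moment
# tensor of the projected particle positions.
def major_axis_angle(xy, m):
    xy = xy - np.average(xy, axis=0, weights=m)        # center on mass
    # Second-moment (inertia-like) tensor I_ab = sum_j m_j x_ja x_jb
    I = (m[:, None, None] * xy[:, :, None] * xy[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(I)
    major = vecs[:, np.argmax(vals)]                   # largest eigenvalue
    return float(np.arctan2(major[1], major[0]))       # position angle

rng = np.random.default_rng(2)
pts = rng.standard_normal((2000, 2)) * np.array([3.0, 1.0])  # elongated in x
angle = major_axis_angle(pts, np.ones(len(pts)))             # ~0 (mod pi)
```

Applying this separately to the dark matter particles and to the stellar particles gives the two geometry definitions compared in the abstract.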

  5. The Accelerated Reader: An Analysis of the Software's Strengths and Weaknesses and How It Can Be Used to Its Best Potential.

    ERIC Educational Resources Information Center

    Poock, Melanie M.

    1998-01-01

    Describes Accelerated Reader (AR), a computer software program that promotes reading; discusses AR hardware requirements; explains how it is used for book selection and testing in schools; assesses the program's strengths and weaknesses; and describes how Grant and Madison Elementary Schools (Muscatine, Iowa) have used the program effectively.…

  6. National Transportation Safety Board : weak internal control impaired financial accountability

    DOT National Transportation Integrated Search

    2001-09-28

    The U. S. General Accounting Office (GAO) was asked to review the National Transportation Safety Board's (NTSB) internal controls over selected types of fiscal year expenditures. They were asked to determine whether internal control weaknesses were a...

  7. Learning and tuning fuzzy logic controllers through reinforcements

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap

    1992-01-01

    This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.

  8. Computational design of a pH-sensitive IgG binding protein.

    PubMed

    Strauch, Eva-Maria; Fleishman, Sarel J; Baker, David

    2014-01-14

    Computational design provides the opportunity to program protein-protein interactions for desired applications. We used de novo protein interface design to generate a pH-dependent Fc domain binding protein that buries immunoglobulin G (IgG) His-433. Using next-generation sequencing of naïve and selected pools of a library of design variants, we generated a molecular footprint of the designed binding surface, confirming the binding mode and guiding further optimization of the balance between affinity and pH sensitivity. In biolayer interferometry experiments, the optimized design binds IgG with a Kd of ∼ 4 nM at pH 8.2, and approximately 500-fold more weakly at pH 5.5. The protein is extremely stable, heat-resistant and highly expressed in bacteria, and allows pH-based control of binding for IgG affinity purification and diagnostic devices.

  9. A method to approximate a closest loadability limit using multiple load flow solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorino, Naoto; Harada, Shigemi; Cheng, Haozhong

    A new method is proposed to approximate a closest loadability limit (CLL), or closest saddle-node bifurcation point, using a pair of multiple load flow solutions. More strictly, the points obtainable by the method are the stationary points, including not only the CLL but also farthest and saddle points. An operating solution and a low-voltage load flow solution are used to efficiently estimate the node injections at a CLL as well as the left and right eigenvectors corresponding to the zero eigenvalue of the load flow Jacobian. They can be used in monitoring the loadability margin, in identification of weak spots in a power system, and in the examination of an optimal control against voltage collapse. Most of the computation time of the proposed method is taken in calculating the load flow solution pair. The remaining computation time is less than that of an ordinary load flow.

  10. The Relationship between Computer Games and Reading Achievement

    ERIC Educational Resources Information Center

    Reed, Tammy Dotson

    2010-01-01

    Illiteracy rates are increasing. The negative social and economic effects caused by weak reading skills include political unrest, social and health service inequality, poverty, and employment challenges. This quantitative study explored the proposition that the use of computer software games would increase reading achievement in second grade…

  11. Studies on the Effects of High Renewable Penetrations on Driving Point Impedance and Voltage Regulator Performance: National Renewable Energy Laboratory/Sacramento Municipal Utility District Load Tap Changer Driving Point Impedance Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Coddington, Michael H.; Brown, David

    Voltage regulators perform as desired when regulating from the source to the load and when regulating from a strong source (utility) to a weak source (distributed generation). (See the glossary for definitions of a strong source and weak source.) Even when the control is provisioned for reverse operation, it has been observed that tap-changing voltage regulators do not perform as desired in reverse when attempting regulation from the weak source to the strong source. The region of performance that is less well understood is regulation between sources that are approaching equal strength. As part of this study, we explored all three scenarios: regulator control from a strong source to a weak source (the classic case), control from a weak source to a strong source (during reverse power flow), and control between equivalent sources.

  12. Soliton self-frequency shift controlled by a weak seed laser in tellurite photonic crystal fibers.

    PubMed

    Liu, Lai; Meng, Xiangwei; Yin, Feixiang; Liao, Meisong; Zhao, Dan; Qin, Guanshi; Ohishi, Yasutake; Qin, Weiping

    2013-08-01

    We report the first demonstration of soliton self-frequency shift (SSFS) controlled by a weak continuous-wave (CW) laser, from a tellurite photonic crystal fiber pumped by a 1560 nm femtosecond fiber laser. The control of SSFS is performed by the cross-gain modulation of the 1560 nm femtosecond laser. By varying the input power of the weak CW laser (1560 nm) from 0 to 1.17 mW, the soliton generated in the tellurite photonic crystal fiber blue shifts from 1935 to 1591 nm. The dependence of the soliton wavelength on the operation wavelength of the weak CW laser is also measured. The results show the CW laser with a wavelength tunable range of 1530-1592 nm can be used to control the SSFS generation.

  13. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
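The key feature described above, time derivatives falling only on the virtual quantities, can be sketched as follows (the sign conventions here are illustrative and may differ from those of Hodges and Bless). Starting from the weighted residual of Hamilton's equations q̇ = ∂H/∂p, ṗ = −∂H/∂q and integrating the derivative terms by parts:

```latex
% Weighted residual of Hamilton's equations (illustrative sign conventions):
\int_{t_0}^{t_1}\left[\delta p^{T}\!\left(\dot q - \frac{\partial H}{\partial p}\right)
 - \delta q^{T}\!\left(\dot p + \frac{\partial H}{\partial q}\right)\right]dt = 0 .
% Integration by parts moves all time derivatives onto the virtual
% displacements \delta q and virtual momenta \delta p:
\int_{t_0}^{t_1}\left[\delta\dot q^{T} p - \delta\dot p^{T} q
 - \delta q^{T}\frac{\partial H}{\partial q}
 - \delta p^{T}\frac{\partial H}{\partial p}\right]dt
 + \Big[\delta p^{T} q - \delta q^{T} p\Big]_{t_0}^{t_1} = 0 .
```

Because q and p now appear undifferentiated, simple polynomial shape functions for the nodal values suffice, which is what makes the temporal finite element discretization straightforward.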

  14. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  15. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  16. Locked modes in two reversed-field pinch devices of different size and shell system

    NASA Astrophysics Data System (ADS)

    Malmberg, J.-A.; Brunsell, P. R.; Yagi, Y.; Koguchi, H.

    2000-10-01

    The behavior of locked modes in two reversed-field pinch devices, the Toroidal Pinch Experiment (TPE-RX) [Y. Yagi et al., Plasma Phys. Control. Fusion 41, 2552 (1999)] and Extrap T2 [J. R. Drake et al., in Plasma Physics and Controlled Nuclear Fusion Research 1996, Montreal (International Atomic Energy Agency, Vienna, 1996), Vol. 2, p. 193] is analyzed and compared. The main characteristics of the locked mode are qualitatively similar. The toroidal distribution of the mode locking shows that field errors play a role in both devices. The probability of phase locking is found to increase with increasing magnetic fluctuation levels in both machines. Furthermore, the probability of phase locking increases with plasma current in TPE-RX despite the fact that the magnetic fluctuation levels decrease. A comparison with computations using a theoretical model estimating the critical mode amplitude for locking [R. Fitzpatrick et al., Phys. Plasmas 6, 3878 (1999)] shows a good correlation with experimental results in TPE-RX. In Extrap T2, the magnetic fluctuations scale weakly with both plasma current and electron density. This is also reflected in the weak scaling of the magnetic fluctuation levels with the Lundquist number (~S^-0.06). In TPE-RX, the corresponding scaling is ~S^-0.18.

  17. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  18. Computer-Based Learning in Open and Distance Learning Institutions in Nigeria: Cautions on Use of Internet for Counseling

    ERIC Educational Resources Information Center

    Okopi, Fidel Onjefu; Odeyemi, Olajumoke Janet; Adesina, Adewale

    2015-01-01

    The study has identified the areas of strengths and weaknesses in the current use of Computer Based Learning (CBL) tools in Open and Distance Learning (ODL) institutions in Nigeria. To achieve these objectives, the following research questions were proposed: (i) What are the computer-based learning tools (soft and hard ware) that are actually in…

  19. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

  20. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  1. Computer Aided Evaluation of Higher Education Tutors' Performance

    ERIC Educational Resources Information Center

    Xenos, Michalis; Papadopoulos, Thanos

    2007-01-01

    This article presents a method for computer-aided tutor evaluation: Bayesian Networks are used for organizing the collected data about tutors and for enabling accurate estimations and predictions about future tutor behavior. The model provides indications about each tutor's strengths and weaknesses, which enables the evaluator to exploit strengths…

  2. On a stochastic control method for weakly coupled linear systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.

    1972-01-01

    The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller makes measurements only of his own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers must independently generate suitable control policies that minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model is proposed in which the influence of one system on the other is replaced by a white-noise process. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.

  3. Hybrid Methods in Quantum Information

    NASA Astrophysics Data System (ADS)

    Marshall, Kevin

    Today, the potential power of quantum information processing comes as no surprise to physicists and science-fiction writers alike. However, the grand promises of this field remain unrealized, despite significant strides forward, due to the inherent difficulties of manipulating quantum systems. Simply put, it turns out that it is incredibly difficult to interact, in a controllable way, with the quantum realm when we seem to live our day-to-day lives in a classical world. In an effort to solve this challenge, people are exploring a variety of different physical platforms, each with their strengths and weaknesses, in hopes of developing new experimental methods that one day might allow us to control a quantum system. One path forward rests in combining different quantum systems in novel ways to exploit the benefits of different systems while circumventing their respective weaknesses. In particular, quantum systems come in two different flavours: either discrete-variable systems or continuous-variable ones. The field of hybrid quantum information seeks to combine these systems, in clever ways, to help overcome the challenges blocking the path between what is theoretically possible and what is achievable in a laboratory. In this thesis we explore four topics in the context of hybrid methods in quantum information, in an effort to contribute to the resolution of existing challenges and to stimulate new avenues of research. First, we explore the manipulation of a continuous-variable quantum system consisting of phonons in a linear chain of trapped ions where we use the discretized internal levels to mediate interactions. Using our proposed interaction we are able to implement, for example, the acoustic equivalent of a beam splitter with modest experimental resources.
Next, we propose an experimentally feasible implementation of the cubic phase gate, a primitive non-Gaussian gate required for universal continuous-variable quantum computation, based on sequential photon subtraction. We then discuss the notion of embedding a finite-dimensional state into a continuous-variable system, and propose a method of performing quantum computations on encrypted continuous-variable states. This protocol allows a client of limited quantum ability to outsource a computation while hiding their information. Next, we discuss the possibility of performing universal quantum computation on discrete-variable logical states encoded in mixed continuous-variable quantum states. Finally, we present an account of open problems related to our results, and possible future avenues of research.

  4. Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures

    NASA Astrophysics Data System (ADS)

    Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng

    2002-03-01

    The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast-multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm, the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region, Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat-surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. 
In addition, a "multilevel" algorithm, which decomposes 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately chooses the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.

  5. Introduction. Computational aerodynamics.

    PubMed

    Tucker, Paul G

    2007-10-15

    The wide range of uses of computational fluid dynamics (CFD) for aircraft design is discussed along with its role in dealing with the environmental impact of flight. Enabling technologies, such as grid generation and turbulence models, are also considered along with flow/turbulence control. The large eddy simulation, Reynolds-averaged Navier-Stokes and hybrid turbulence modelling approaches are contrasted. The CFD prediction of numerous jet configurations occurring in aerospace is discussed, along with aeroelasticity for aeroengine and external aerodynamics, design optimization, unsteady flow modelling and aeroengine internal and external flows. It is concluded that there is a lack of detailed measurements (for both canonical and complex geometry flows) to provide validation and even, in some cases, basic understanding of flow physics. Not surprisingly, turbulence modelling is still the weak link along with, as ever, a pressing need for improved (in terms of robustness, speed and accuracy) solver technology, grid generation and geometry handling. Hence, CFD, as a truly predictive and creative design tool, seems a long way off. Meanwhile, extreme practitioner expertise is still required and the triad of computation, measurement and analytic solution must be judiciously used.

  6. Nuclear Weak Rates and Detailed Balance in Stellar Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Misch, G. Wendell, E-mail: wendell@sjtu.edu, E-mail: wendell.misch@gmail.com

    Detailed balance is often invoked in discussions of nuclear weak transitions in astrophysical environments. Satisfaction of detailed balance is rightly touted as a virtue of some methods of computing nuclear transition strengths, but I argue that it need not necessarily be strictly obeyed in astrophysical environments, especially when the environment is far from weak equilibrium. I present the results of shell model calculations of nuclear weak strengths in both charged-current and neutral-current channels at astrophysical temperatures, finding some violation of detailed balance. I show that a slight modification of the technique to strictly obey detailed balance has little effect on the reaction rates associated with these strengths under most conditions, though at high temperature the modified technique in fact misses some important strength. I comment on the relationship between detailed balance and weak equilibrium in astrophysical conditions.
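    The detailed-balance relation the abstract invokes can be illustrated with a short sketch (our own illustrative formula, not the paper's calculation): for a transition i → f between levels with energies E and statistical weights g = 2J + 1, thermal populations n ∝ g exp(-E/kT) fix the reverse rate in terms of the forward one.

```python
import math

def reverse_rate(lam_fwd, g_i, g_f, E_i, E_f, kT):
    """Reverse rate implied by detailed balance, n_i * lam(i->f) = n_f * lam(f->i),
    with Boltzmann populations n ~ g * exp(-E/kT) and g = 2J + 1.
    Illustrative formula only; energies and kT in the same units."""
    return lam_fwd * (g_i / g_f) * math.exp((E_f - E_i) / kT)

# Sanity check: applying the relation twice recovers the forward rate.
lam = 3.0e-4                       # forward rate, arbitrary units
lam_rev = reverse_rate(lam, g_i=1, g_f=3, E_i=0.0, E_f=0.5, kT=0.17)
lam_back = reverse_rate(lam_rev, g_i=3, g_f=1, E_i=0.5, E_f=0.0, kT=0.17)
print(abs(lam_back - lam) < 1e-12)  # True
```

A shell-model strength function that violates this relation, as the abstract reports at high temperature, would make the round trip above fail to close.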

  7. Nonequilibrium mechanisms of weak electrolyte electrification under the action of constant voltage

    NASA Astrophysics Data System (ADS)

    Stishkov, Yu. K.; Chirkov, V. A.

    2016-07-01

    The formation of space charge in weak electrolytes, specifically in liquid dielectrics, has been considered. An analytical solution is given to a simplified set of Nernst-Planck equations that describe the formation of nonequilibrium recombination layers in weak electrolytes. This approximate analytical solution is compared with computer simulation data for a complete set of Poisson-Nernst-Planck equations. It has been shown that the current passage in weak electrolytes can be described by a single dimensionless parameter that equals the length of a near-electrode recombination layer divided by the width of the interelectrode gap. The formation mechanism and the structure of charged nonequilibrium near-electrode layers in the nonstationary regime have been analyzed for different injection-to-conduction current ratios. It has been found that almost all charge structures encountered in weak dielectrics can be accounted for by the nonequilibrium dissociation-recombination mechanism of space charge formation.

  8. Magnetic Control of Hypersonic Flow

    NASA Astrophysics Data System (ADS)

    Poggie, Jonathan; Gaitonde, Datta

    2000-11-01

    Electromagnetic control is an appealing possibility for mitigating the thermal loads that occur in hypersonic flight, in particular for the case of atmospheric entry. There was extensive research on this problem between about 1955 and 1970 (M. F. Romig, "The Influence of Electric and Magnetic Fields on Heat Transfer to Electrically Conducting Fluids," Advances in Heat Transfer, Vol. 1, Academic Press, NY, 1964), and renewed interest has arisen due to developments in the technology of superconducting magnets and the understanding of the physics of weakly ionized, non-equilibrium plasmas. In order to examine the physics of this problem, and to evaluate the practicality of electromagnetic control in hypersonic flight, we have developed a computer code to solve the three-dimensional, non-ideal magnetogasdynamics equations. We have applied the code to the problem of magnetically decelerated hypersonic flow over a sphere, and observed a reduction, with an applied dipole field, in heat flux and skin friction near the nose of the body, as well as an increase in shock standoff distance. The computational results compare favorably with the analytical predictions of Bush (W. B. Bush, "Magnetohydrodynamic-Hypersonic Flow Past a Blunt Body," Journal of the Aero/Space Sciences, Vol. 25, No. 11, 1958; "The Stagnation-Point Boundary Layer in the Presence of an Applied Magnetic Field," Vol. 28, No. 8, 1961).

  9. Blind quantum computing with weak coherent pulses.

    PubMed

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-18

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent over long distances can never be achieved perfectly. We introduce the concept of ϵ-blindness for UBQC, in analogy to the concept of ϵ-security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ϵ-blind UBQC for any ϵ>0, even if the channel between the client and the server is arbitrarily lossy.

  10. Blind Quantum Computing with Weak Coherent Pulses

    NASA Astrophysics Data System (ADS)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-01

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent over long distances can never be achieved perfectly. We introduce the concept of ɛ-blindness for UBQC, in analogy to the concept of ɛ-security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ɛ-blind UBQC for any ɛ>0, even if the channel between the client and the server is arbitrarily lossy.

  11. Automatic Approach Tendencies toward High and Low Caloric Food in Restrained Eaters: Influence of Task-Relevance and Mood

    PubMed Central

    Neimeijer, Renate A. M.; Roefs, Anne; Ostafin, Brian D.; de Jong, Peter J.

    2017-01-01

    Objective: Although restrained eaters are motivated to control their weight by dieting, they are often unsuccessful in these attempts. Dual process models emphasize the importance of differentiating between controlled and automatic tendencies to approach food. This study investigated the hypothesis that heightened automatic approach tendencies in restrained eaters would be especially prominent in contexts where food is irrelevant for their current tasks. Additionally, we examined the influence of mood on the automatic tendency to approach food as a function of dietary restraint. Methods: An Affective Simon Task-manikin was administered to measure automatic approach tendencies where food is task-irrelevant, and a Stimulus Response Compatibility task (SRC) to measure automatic approach in contexts where food is task-relevant, in 92 female participants varying in dietary restraint. Prior to the task, sad, stressed, neutral, or positive mood was induced. Food intake was measured during a bogus taste task after the computer tasks. Results: Consistent with their diet goals, participants with a strong tendency to restrain their food intake showed a relatively weak approach bias toward food when food was task-relevant (SRC), and this effect was independent of mood. Restrained eaters showed a relatively strong approach bias toward food when food was task-irrelevant in the positive mood condition and a relatively weak approach bias in the sad mood condition. Conclusion: The weak approach bias in contexts where food is task-relevant may help high-restrained eaters to comply with their diet goal. However, the strong approach bias in contexts where food is task-irrelevant and when being in a positive mood may interfere with restrained eaters’ goal of restricting food intake. PMID:28443045

  12. Precise design-based defect characterization and root cause analysis

    NASA Astrophysics Data System (ADS)

    Xie, Qian; Venkatachalam, Panneerselvam; Lee, Julie; Chen, Zhijin; Zafar, Khurram

    2017-03-01

    As semiconductor manufacturing continues its march towards more advanced technology nodes, it becomes increasingly important to identify and characterize design weak points, which is typically done using a combination of inline inspection data and the physical layout (or design). However, the employed methodologies have been somewhat imprecise, relying greatly on statistical techniques to signal excursions. For example, defect location error that is inherent to inspection tools prevents them from reporting the true locations of defects. Therefore, common operations such as background-based binning that are designed to identify frequently failing patterns cannot reliably identify specific weak patterns. They can only identify an approximate set of possible weak patterns, but within these sets there are many perfectly good patterns. Additionally, characterizing the failure rate of a known weak pattern based on inline inspection data also has a lot of fuzziness due to coordinate uncertainty. SEM (Scanning Electron Microscope) Review attempts to come to the rescue by capturing high resolution images of the regions surrounding the reported defect locations, but SEM images are reviewed by human operators and the weak patterns revealed in those images must be manually identified and classified. Compounding the problem is the fact that a single Review SEM image may contain multiple defective patterns and several of those patterns might not appear defective to the human eye. In this paper we describe a significantly improved methodology that brings advanced computer image processing and design-overlay techniques to better address the challenges posed by today's leading technology nodes. 
Specifically, new software techniques allow the computer to analyze Review SEM images in detail, to overlay those images with reference design to detect every defect that might be present in all regions of interest within the overlaid reference design (including several classes of defects that human operators will typically miss), to obtain the exact defect location on design, to compare all defective patterns thus detected against a library of known patterns, and to classify all defective patterns as either new or known. By applying the computer to these tasks, we automate the entire process from defective pattern identification to pattern classification with high precision, and we perform this operation en masse during R&D, ramp, and volume production. By adopting the methodology, whenever a specific weak pattern is identified, we are able to run a series of characterization operations to ultimately arrive at the root cause. These characterization operations can include (a) searching all pre-existing Review SEM images for the presence of the specific weak pattern to determine whether there is any spatial (within die or within wafer) or temporal (within any particular date range, before or after a mask revision, etc.) correlation, (b) understanding the failure rate of the specific weak pattern to prioritize the urgency of the problem, and (c) comparing the weak pattern against an OPC (Optical Proximity Correction) Verification report or a PWQ (Process Window Qualification)/FEM (Focus Exposure Matrix) result to assess the likelihood of it being a litho-sensitive pattern. After resolving the specific weak pattern, we will categorize it as a known pattern, and the engineer will move forward with discovering new weak patterns.

  13. Mean Field Type Control with Congestion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu

    2016-06-15

    We analyze some systems of partial differential equations arising in the theory of mean field type control with congestion effects. We look for weak solutions. Our main result is the existence and uniqueness of suitably defined weak solutions, which are characterized as the optima of two optimal control problems in duality.

  14. Improving the Fraction Word Problem Solving of Students with Mathematics Learning Disabilities: Interactive Computer Application

    ERIC Educational Resources Information Center

    Shin, Mikyung; Bryant, Diane P.

    2017-01-01

    Students with mathematics learning disabilities (MLD) have a weak understanding of fraction concepts and skills, which are foundations of algebra. Such students might benefit from computer-assisted instruction that utilizes evidence-based instructional components (cognitive strategies, feedback, virtual manipulatives). As a pilot study using a…

  15. Known and Unknown Weaknesses in Software Animated Demonstrations (Screencasts): A Study in Self-Paced Learning Settings

    ERIC Educational Resources Information Center

    Palaigeorgiou, George; Despotakis, Theofanis

    2010-01-01

    Learning about computers continues to be regarded as a rather informal and complex landscape dominated by individual exploratory and opportunistic approaches, even for students and instructors in Computer Science Departments. During the last two decades, software animated demonstrations (SADs), also known as screencasts, have attracted particular…

  16. BASIC, Logo, and Pilot: A Comparison of Three Computer Languages.

    ERIC Educational Resources Information Center

    Maddux, Cleborne D.; Cummings, Rhoda E.

    1985-01-01

    Following a brief history of Logo, BASIC, and Pilot programing languages, common educational programing tasks (input from keyboard, evaluation of keyboard input, and computation) are presented in each language to illustrate how each can be used to perform the same tasks and to demonstrate each language's strengths and weaknesses. (MBR)

  17. Current Trends in Computer-Based Language Instruction.

    ERIC Educational Resources Information Center

    Hart, Robert S.

    1987-01-01

    A discussion of computer-based language instruction examines the quality of materials currently in use and looks at developments in the field. It is found that language courseware is generally weak in the areas of error analysis and feedback, communicative realism, and convenience of lesson authoring. A review of research under way to improve…

  18. Mechanical design and driving mechanism of an isokinetic functional electrical stimulation-based leg stepping trainer.

    PubMed

    Hamzaid, N A; Fornusek, C; Ruys, A; Davis, G M

    2007-12-01

    The mechanical design of a constant velocity (isokinetic) leg stepping trainer driven by functional electrical stimulation-evoked muscle contractions was the focus of this paper. The system was conceived for training the leg muscles of neurologically-impaired patients. A commercially available slider crank mechanism for elliptical stepping exercise was adapted to a motorized isokinetic driving mechanism. The exercise system permits constant-velocity pedalling at cadences of 1-60 rev/min. The variable-velocity feature allows low pedalling forces for individuals with very weak leg muscles, yet provides resistance to higher pedalling effort in stronger patients. In the future, the system will be integrated with a computer-controlled neuromuscular stimulator and a feedback control unit to monitor training responses of spinal cord-injured, stroke and head injury patients.

  19. The weak coherence account: detail-focused cognitive style in autism spectrum disorders.

    PubMed

    Happé, Francesca; Frith, Uta

    2006-01-01

    "Weak central coherence" refers to the detail-focused processing style proposed to characterise autism spectrum disorders (ASD). The original suggestion of a core deficit in central processing, resulting in failure to extract global form/meaning, has been challenged in three ways. First, it may represent an outcome of superiority in local processing. Second, it may be a processing bias, rather than deficit. Third, weak coherence may occur alongside, rather than explain, deficits in social cognition. A review of over 50 empirical studies of coherence suggests robust findings of local bias in ASD, with mixed findings regarding weak global processing. Local bias appears not to be a mere side-effect of executive dysfunction, and may be independent of theory of mind deficits. Possible computational and neural models are discussed.

  20. Weak Galerkin finite element methods for Darcy flow: Anisotropy and heterogeneity

    NASA Astrophysics Data System (ADS)

    Lin, Guang; Liu, Jiangguo; Mu, Lin; Ye, Xiu

    2014-11-01

    This paper presents a family of weak Galerkin finite element methods (WGFEMs) for Darcy flow computation. The WGFEMs are new numerical methods that rely on the novel concept of discrete weak gradients. The WGFEMs solve for pressure unknowns both in element interiors and on the mesh skeleton. The numerical velocity is then obtained from the discrete weak gradient of the numerical pressure. The new methods are quite different from many existing numerical methods in that they are locally conservative by design, the resulting discrete linear systems are symmetric and positive-definite, and there is no need for tuning problem-dependent penalty factors. We test the WGFEMs on benchmark problems to demonstrate the strong potential of these new methods in handling strong anisotropy and heterogeneity in Darcy flow.
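    The discrete weak gradient at the core of these methods can be stated compactly (following the standard weak Galerkin definition; the notation here is ours, not quoted from the paper):

```latex
% A weak function on an element T is a pair v = \{v_0, v_b\}: an interior
% value v_0 and a value v_b on the element boundary (the mesh skeleton).
% Its discrete weak gradient \nabla_w v \in [P_r(T)]^d is the unique
% polynomial vector satisfying, for every test function q \in [P_r(T)]^d,
\[
  (\nabla_w v,\, q)_T \;=\; -\,(v_0,\, \nabla\cdot q)_T
      \;+\; \langle v_b,\, q\cdot\mathbf{n} \rangle_{\partial T},
\]
% i.e. integration by parts shifted onto the test function, with the
% skeleton unknown v_b supplying the boundary term.
```

Because velocity is recovered element by element from this relation, local conservation follows by construction, which is the property the abstract highlights.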

  1. Quantum Information Theory of Measurement

    NASA Astrophysics Data System (ADS)

    Glick, Jennifer Ranae

    Quantum measurement lies at the heart of quantum information processing and is one of the criteria for quantum computation. Despite its central role, there remains a need for a robust quantum information-theoretical description of measurement. In this work, I will quantify how information is processed in a quantum measurement by framing it in quantum information-theoretic terms. I will consider a diverse set of measurement scenarios, including weak and strong measurements, and parallel and consecutive measurements. In each case, I will perform a comprehensive analysis of the role of entanglement and entropy in the measurement process and track the flow of information through all subsystems. In particular, I will discuss how weak and strong measurements are fundamentally of the same nature and show that weak values can be computed exactly for certain measurements with an arbitrary interaction strength. In the context of the Bell-state quantum eraser, I will derive a trade-off between the coherence and "which-path" information of an entangled pair of photons and show that a quantum information-theoretic approach yields additional insights into the origins of complementarity. I will consider two types of quantum measurements: those that are made within a closed system where every part of the measurement device, the ancilla, remains under control (what I will call unamplified measurements), and those performed within an open system where some degrees of freedom are traced over (amplified measurements). For sequences of measurements of the same quantum system, I will show that information about the quantum state is encoded in the measurement chain and that some of this information is "lost" when the measurements are amplified: the ancillae become equivalent to a quantum Markov chain. 
Finally, using the coherent structure of unamplified measurements, I will outline a protocol for generating remote entanglement, an essential resource for quantum teleportation and quantum cryptographic tasks.

  2. Notification: FY 2018 CSB Management Challenges and Internal Control Weaknesses

    EPA Pesticide Factsheets

    December 26, 2017. The OIG is beginning work to update for fiscal year 2018 its list of proposed key management challenges and internal control weaknesses confronting the U.S. Chemical Safety and Hazard Investigation Board (CSB).

  3. Bivariate spline solution of time dependent nonlinear PDE for a population density over irregular domains.

    PubMed

    Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George

    2015-12-01

    We study a time dependent partial differential equation (PDE) which arises from classic models in ecology involving logistic growth with Allee effect by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE. A convergence analysis of the algorithm is presented. We present some simulations of population development over some irregular domains. Finally, we discuss applications in epidemiology and other ecological problems.
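    The reaction term behind "logistic growth with Allee effect" has a standard form that is easy to sketch (this is the textbook strong-Allee nonlinearity, not necessarily the paper's exact model): growth is negative below a threshold A and positive between A and the carrying capacity K.

```python
def allee_growth(u, r=1.0, K=1.0, A=0.2):
    """Strong Allee reaction term f(u) = r*u*(1 - u/K)*(u/A - 1).
    Illustrative standard form; r, K, A are assumed parameters."""
    return r * u * (1.0 - u / K) * (u / A - 1.0)

def evolve(u0, dt=0.01, steps=2000):
    # Forward-Euler time stepping of the space-free ODE du/dt = f(u),
    # i.e. the PDE's reaction dynamics with diffusion dropped.
    u = u0
    for _ in range(steps):
        u += dt * allee_growth(u)
    return u

print(round(evolve(0.1), 3))  # below the threshold A: decays to 0.0
print(round(evolve(0.3), 3))  # above the threshold: grows to K = 1.0
```

The full paper couples this nonlinearity to diffusion over irregular domains and discretizes with bivariate splines; the sketch only shows the bistable behavior that makes the Allee term interesting.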

  4. Reachability Analysis Applied to Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    Several existing and emerging applications of Space Situational Awareness (SSA) relate directly to spacecraft Rendezvous, Proximity Operations, and Docking (RPOD) and Formation / Cluster Flight (FCF). When multiple Resident Space Objects (RSOs) are in the vicinity of one another with appreciable periods between observations, correlating new RSO tracks to previously known objects becomes a non-trivial problem. A particularly difficult sub-problem is seen when long breaks in observations are coupled with continuous, low-thrust maneuvers. Reachability theory, directly related to optimal control theory, can compute contiguous reachability sets for known or estimated control authority and can support such RSO search and correlation efforts in both ground and on-board settings. Reachability analysis can also directly estimate the minimum control authority of a given RSO. For RPOD and FCF applications, emerging mission concepts such as fractionation drastically increase system complexity of on-board autonomous fault management systems. Reachability theory, as applied to SSA in RPOD and FCF applications, can involve correlation of nearby RSO observations, control authority estimation, and sensor track re-acquisition. Additional uses of reachability analysis are formation reconfiguration, worst-case passive safety, and propulsion failure modes such as a "stuck" thruster. Existing reachability theory is applied to RPOD and FCF regimes. An optimal control policy is developed to maximize the reachability set and optimal control law discontinuities (switching) are examined. The Clohessy-Wiltshire linearized equations of motion are normalized to accentuate relative control authority for spacecraft propulsion systems at both Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO). Several examples with traditional and low thrust propulsion systems in LEO and GEO are explored to illustrate the effects of relative control authority on the time-varying reachability set surface. 
Both monopropellant spacecraft at LEO and Hall thruster spacecraft at GEO are shown to be strongly actuated, while Hall thruster spacecraft at LEO are found to be weakly actuated. Weaknesses of the current implementation are discussed, along with future numerical improvements and analytical efforts.

  5. Swarm satellite mission scheduling & planning using Hybrid Dynamic Mutation Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Zixuan; Guo, Jian; Gill, Eberhard

    2017-08-01

    Space missions have traditionally been controlled by operators from a mission control center. Given the increasing number of satellites in some space missions, generating a command list for multiple satellites can be time-consuming and inefficient. Developing multi-satellite, onboard mission scheduling & planning techniques is, therefore, a key research field for future space mission operations. In this paper, an improved Genetic Algorithm (GA) using a new mutation strategy is proposed as a mission scheduling algorithm. This new mutation strategy, called Hybrid Dynamic Mutation (HDM), combines the advantages of both the dynamic mutation strategy and the adaptive mutation strategy, overcoming weaknesses such as early convergence and long computing time, which helps standard GA to be more efficient and accurate in dealing with complex missions. HDM-GA shows excellent performance in solving both unconstrained and constrained test functions. Experiments using HDM-GA to simulate a multi-satellite mission scheduling problem demonstrate that both the computation-time and success-rate mission requirements can be met. The results of a comparative test between HDM-GA and three other mutation strategies also show that HDM has outstanding performance in terms of speed and reliability.

  6. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.

    1990-01-01

    A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage whether significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal control problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.

  7. Scientific and personal recollections of Roberto Petronzio

    NASA Astrophysics Data System (ADS)

    Parisi, Giorgio

    2018-03-01

    This paper recalls some of the main contributions of Roberto Petronzio to physics, with particular regard to the period when we worked together. His seminal contributions cover an extremely wide range of topics: the foundation of the perturbative approach to QCD, and various aspects of weak interaction theory, from basic questions (e.g. the mass of the Higgs) to weak interactions on the lattice and lattice QCD, from its beginnings to the most recent computations.

  8. Using an innovative multiple regression procedure in a cancer population (Part 1): detecting and probing relationships of common interacting symptoms (pain, fatigue/weakness, sleep problems) as a strategy to discover influential symptom pairs and clusters.

    PubMed

    Francoeur, Richard B

    2015-01-01

    The majority of patients with advanced cancer experience symptom pairs or clusters among pain, fatigue, and insomnia. Improved methods are needed to detect and interpret interactions among symptoms or disease markers to reveal influential pairs or clusters. In prior work, I developed and validated sequential residual centering (SRC), a method that improves the sensitivity of multiple regression to detect interactions among predictors by conditioning for multicollinearity (shared variation) among interactions and component predictors. Using a hypothetical three-way interaction among pain, fatigue, and sleep to predict depressive affect, I derive and explain SRC multiple regression. Subsequently, I estimate raw and SRC multiple regressions using real data for these symptoms from 268 palliative radiation outpatients. Unlike raw regression, SRC reveals that the three-way interaction (pain × fatigue/weakness × sleep problems) is statistically significant. In follow-up analyses, the relationship between pain and depressive affect is aggravated (magnified) within two partial ranges: 1) complete-to-some control over fatigue/weakness when there is complete control over sleep problems (i.e., a subset of the pain-fatigue/weakness symptom pair), and 2) no control over fatigue/weakness when there is some-to-no control over sleep problems (i.e., a subset of the pain-fatigue/weakness-sleep problems symptom cluster). Otherwise, the relationship weakens (buffering) as control over fatigue/weakness or sleep problems diminishes. By reducing the standard error, SRC unmasks a three-way interaction comprising a symptom pair and cluster. Low-to-moderate levels of the moderator variable for fatigue/weakness magnify the relationship between pain and depressive affect. However, when the comoderator variable for sleep problems accompanies fatigue/weakness, only frequent or unrelenting levels of both symptoms magnify the relationship.
These findings suggest that a countervailing mechanism involving depressive affect could account for the effectiveness of a cognitive behavioral intervention to reduce the severity of a pain, fatigue, and sleep disturbance cluster in a previous randomized trial.
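
    The core move in residual centering is to regress an interaction product on its lower-order terms and keep only the residual, which is orthogonal to those terms. A minimal, non-sequential sketch of that single step (the data are simulated; SRC proper applies the centering sequentially across interaction orders):

```python
import numpy as np

def residual_center(product, components):
    """Regress an interaction product on its lower-order terms (plus an
    intercept) and return the residual, which is orthogonal to them."""
    X = np.column_stack([np.ones(len(product))] + list(components))
    beta, *_ = np.linalg.lstsq(X, product, rcond=None)
    return product - X @ beta

# Simulated symptom scores (0-10) for 268 patients, mirroring the sample size.
rng = np.random.default_rng(0)
pain, fatigue, sleep = rng.uniform(0, 10, (3, 268))

raw_3way = pain * fatigue * sleep
centered_3way = residual_center(raw_3way,
                                [pain, fatigue, sleep,
                                 pain * fatigue, pain * sleep, fatigue * sleep])

# Orthogonality to the component terms is what shrinks the standard error
# of the interaction coefficient in the subsequent regression.
print(abs(np.corrcoef(centered_3way, pain)[0, 1]))  # ≈ 0
```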

  9. Generation of entanglement in quantum parametric oscillators using phase control.

    PubMed

    Gonzalez-Henao, J C; Pugliese, E; Euzzor, S; Abdalah, S F; Meucci, R; Roversi, J A

    2015-08-19

    The control of quantum entanglement in systems in contact with an environment plays an important role in information processing, cryptography and quantum computing. However, interactions with the environment, even when very weak, entail decoherence in the system with consequent loss of entanglement. Here we consider a system of two coupled oscillators in contact with a common heat bath and with a time dependent oscillation frequency. The possibility to control the entanglement of the oscillators by means of an external sinusoidal perturbation applied to the oscillation frequency has been theoretically explored. We demonstrate that the oscillators become entangled exactly in the region where the classical counterpart is unstable; when the classical system is stable, entanglement is not possible. Therefore, we can control the entanglement swapping from stable to unstable regions by adjusting the amplitude and phase of our external controller. We also show that the entanglement rate is approximately proportional to the real part of the Floquet coefficient of the classical counterpart of the oscillators. Our results have the intriguing peculiarity of manipulating quantum information by operating on a classical system.
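
    The classical counterpart here is an oscillator whose frequency is sinusoidally modulated, and its stability is read off from the Floquet multipliers of the one-period monodromy matrix. A sketch under that assumption (the specific equation and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def floquet_exponent(eps, phi, w0=1.0, w=2.0, steps=4000):
    """Largest real Floquet exponent of the classical parametric oscillator
    x'' + w0^2 * (1 + eps*sin(w*t + phi)) * x = 0 (illustrative model).
    A positive value signals parametric instability, the regime where
    the quantum oscillators are predicted to become entangled."""
    T = 2.0 * np.pi / w
    h = T / steps

    def f(t, y):
        return np.array([y[1], -w0**2 * (1.0 + eps * np.sin(w * t + phi)) * y[0]])

    # Monodromy matrix: propagate the two canonical initial conditions
    # over one modulation period with classical RK4.
    M = np.empty((2, 2))
    for i, y0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
        y, t = np.array(y0), 0.0
        for _ in range(steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        M[:, i] = y
    multipliers = np.linalg.eigvals(M)
    return float(np.max(np.log(np.abs(multipliers)))) / T

print(floquet_exponent(0.3, 0.0))  # positive: w = 2*w0 is the principal resonance
print(floquet_exponent(0.0, 0.0))  # ≈ 0: no modulation, marginally stable
```

Sweeping `eps` and `phi` maps out the stability chart; in the paper's picture, entanglement appears precisely where this exponent is positive.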

  10. Computational design of treatment strategies for proactive therapy on atopic dermatitis using optimal control theory

    PubMed Central

    Christodoulides, Panayiotis; Hirata, Yoshito; Domínguez-Hüttinger, Elisa; Danby, Simon G.; Cork, Michael J.; Williams, Hywel C.; Aihara, Kazuyuki

    2017-01-01

    Atopic dermatitis (AD) is a common chronic skin disease characterized by recurrent skin inflammation and a weak skin barrier, and is known to be a precursor to other allergic diseases such as asthma. AD affects up to 25% of children worldwide and the incidence continues to rise. There is still uncertainty about the optimal treatment strategy in terms of choice of treatment, potency, duration and frequency. This study aims to develop a computational method to design optimal treatment strategies for the clinically recommended ‘proactive therapy’ for AD. Proactive therapy aims to prevent recurrent flares once the disease has been brought under initial control. Typically, this is done by using an anti-inflammatory treatment such as a potent topical corticosteroid intensively for a few weeks to ‘get control’, followed by intermittent weekly treatment to suppress subclinical inflammation to ‘keep control’. Using a hybrid mathematical model of AD pathogenesis that we recently proposed, we computationally derived the optimal treatment strategies for individual virtual patient cohorts, by recursively solving optimal control problems using a differential evolution algorithm. Our simulation results suggest that such an approach can inform the design of optimal individualized treatment schedules that include application of topical corticosteroids and emollients, based on the disease status of patients observed on their weekly hospital visits. We demonstrate the potential and the gaps of our approach to be applied to clinical settings. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’. PMID:28507230

  11. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid evolution of attack techniques, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
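
    As a concrete illustration of the ensemble scheme described above, here is a minimal discrete AdaBoost with threshold decision stumps for continuous features (the paper additionally defines stump rules for categorical features, adaptable initial weights, and an overfitting guard, none of which are reproduced in this sketch):

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Discrete AdaBoost with threshold decision stumps; labels in {-1, +1}.
    Each round fits the stump minimizing weighted error, then reweights
    the samples so the next round focuses on the mistakes."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)         # avoid log(0) for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Tiny separable demo: one feature, threshold between 1 and 2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1, 1, -1, -1])
ens = train_adaboost(X, y, rounds=3)
print(predict(ens, X))  # recovers the labels
```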

  12. Linearly resummed hydrodynamics in a weakly curved spacetime

    NASA Astrophysics Data System (ADS)

    Bu, Yanyan; Lublinsky, Michael

    2015-04-01

    We extend our study of all-order linearly resummed hydrodynamics in a flat space [1, 2] to fluids in weakly curved spaces. The underlying microscopic theory is a finite temperature super-Yang-Mills theory at strong coupling. The AdS/CFT correspondence relates black brane solutions of the Einstein gravity in asymptotically locally AdS5 geometry to relativistic conformal fluids in a weakly curved 4D background. To linear order in the amplitude of hydrodynamic variables and metric perturbations, the fluid's energy-momentum tensor is computed with derivatives of both the fluid velocity and background metric resummed to all orders. We extensively discuss the meaning of all order hydrodynamics by expressing it in terms of the memory function formalism, which is also suitable for practical simulations. In addition to two viscosity functions discussed at length in refs. [1, 2], we find four curvature induced structures coupled to the fluid via new transport coefficient functions. In ref. [3], the latter were referred to as gravitational susceptibilities of the fluid. We analytically compute these coefficients in the hydrodynamic limit, and then numerically up to large values of momenta.

  13. A Methodological Analysis of Randomized Clinical Trials of Computer-Assisted Therapies for Psychiatric Disorders: Toward Improved Standards for an Emerging Field

    PubMed Central

    Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.

    2013-01-01

    Objective Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of axis I disorders were evaluated using a 14-item methodological quality index. Results Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to those required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689

  14. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.; Leung, Martin S.; Bless, Robert R.

    1991-01-01

    The proposed investigation on a Matched Asymptotic Expansion (MAE) method was carried out. It was concluded that the method of MAE is not applicable to launch vehicle ascent trajectory optimization due to a lack of a suitable stretched variable. More work was done on the earlier regular perturbation approach using a piecewise analytic zeroth order solution to generate a more accurate approximation. In the meantime, a singular perturbation approach using manifold theory is also under current investigation. Work on a general computational environment based on the use of MACSYMA and the weak Hamiltonian finite element method continued during this period. This methodology is capable of the solution of a large class of optimal control problems.

  15. Dendritic Learning as a Paradigm Shift in Brain Learning.

    PubMed

    Sardi, Shira; Vardi, Roni; Goldental, Amir; Tugendhaft, Yael; Uzan, Herut; Kanter, Ido

    2018-06-20

    Experimental and theoretical results reveal a new underlying mechanism for a fast brain learning process, dendritic learning, as opposed to the misdirected research in neuroscience over decades, which is based solely on slow synaptic plasticity. The presented paradigm indicates that learning occurs in closer proximity to the neuron, the computational unit; that dendritic strengths are self-oscillating; and that weak synapses, which comprise the majority of our brain and were previously assumed to be insignificant, play a key role in plasticity. The new learning sites of the brain call for a reevaluation of current treatments for disordered brain functionality and for a better understanding of proper chemical drugs and biological mechanisms to maintain, control and enhance learning.

  16. The neural basis of reversible sentence comprehension: Evidence from voxel-based lesion-symptom mapping in aphasia

    PubMed Central

    Thothathiri, Malathi; Kimberg, Daniel Y.; Schwartz, Myrna F.

    2012-01-01

    We explored the neural basis of reversible sentence comprehension in a large group of aphasic patients (N=79). Voxel-based lesion-symptom mapping revealed a significant association between damage in temporoparietal cortex and impaired sentence comprehension. This association remained after we controlled for phonological working memory. We hypothesize that this region plays an important role in the thematic or what-where processing of sentences. In contrast, we detected weak or no association between reversible sentence comprehension and the ventrolateral prefrontal cortex, which includes Broca’s area, even for syntactically complex sentences. This casts doubt on theories that presuppose a critical role for this region in syntactic computations. PMID:21861679

  17. Lessons Learned in Designing and Implementing a Computer-Adaptive Test for English

    ERIC Educational Resources Information Center

    Burston, Jack; Neophytou, Maro

    2014-01-01

    This paper describes the lessons learned in designing and implementing a computer-adaptive test (CAT) for English. The early identification of students with weak L2 English proficiency is of critical importance in university settings that have compulsory English language course graduation requirements. The most efficient means of diagnosing the L2…

  18. Interactive Computer Based Assessment Tasks: How Problem-Solving Process Data Can Inform Instruction

    ERIC Educational Resources Information Center

    Zoanetti, Nathan

    2010-01-01

    This article presents key steps in the design and analysis of a computer based problem-solving assessment featuring interactive tasks. The purpose of the assessment is to support targeted instruction for students by diagnosing strengths and weaknesses at different stages of problem-solving. The first focus of this article is the task piloting…

  19. General review of the MOSTAS computer code for wind turbines

    NASA Technical Reports Server (NTRS)

    Dungundji, J.; Wendell, J. H.

    1981-01-01

    The MOSTAS computer code for wind turbine analysis is reviewed, and the techniques and methods used in its analyses are described. Impressions of its strengths and weaknesses are given, and recommendations for its application, modification, and further development are made. Basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed.

  20. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. After shape features were computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble of weak segmentors. For our purpose, optimal segmentors are those that contribute the most to the overall classification rather than those that produced high-precision segmentations. To measure the segmentors' contribution, we examined weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The result showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
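
    The contribution measure described above, the average feature weight per segmentor in the fitted logistic regression, reduces to simple bookkeeping. A sketch with hypothetical coefficient values and feature names (purely illustrative, not the paper's data):

```python
# Hypothetical coefficients of a fitted logistic-regression mass classifier,
# where each shape feature came from one of three weak segmentors A, B, C.
coefs = {"seg_A_circularity": 1.8, "seg_A_spiculation": 2.2,
         "seg_B_circularity": 0.4, "seg_B_spiculation": 0.6,
         "seg_C_circularity": 1.1, "seg_C_spiculation": 0.3}

def mean_weight_by_segmentor(coefs):
    """Average absolute feature weight per segmentor: the paper's proxy for
    how much each weak segmentor contributes to the final classification."""
    totals = {}
    for name, c in coefs.items():
        seg = name.split("_")[1]          # 'A', 'B', 'C'
        totals.setdefault(seg, []).append(abs(c))
    return {seg: sum(v) / len(v) for seg, v in totals.items()}

ranking = sorted(mean_weight_by_segmentor(coefs).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking[0][0])  # the "optimal" segmentor by classification contribution
```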

  1. Bioinformatics approaches to predict target genes from transcription factor binding data.

    PubMed

    Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael

    2017-12-01

    Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Experimental generation and computational modeling of intracellular pH gradients in cardiac myocytes.

    PubMed

    Swietach, Pawel; Leem, Chae-Hun; Spitzer, Kenneth W; Vaughan-Jones, Richard D

    2005-04-01

    It is often assumed that intracellular pH (pHᵢ) is spatially uniform within cells. A double-barreled microperfusion system was used to apply solutions of weak acid (acetic acid, CO₂) or base (ammonia) to localized regions of an isolated ventricular myocyte (guinea pig). A stable, longitudinal pHᵢ gradient (up to 1 pH unit) was observed (using confocal imaging of SNARF-1 fluorescence). Changing the fractional exposure of the cell to weak acid/base altered the gradient, as did changing the concentration and type of weak acid/base applied. A diffusion-reaction computational model accurately simulated this behavior of pHᵢ. The model assumes that intracellular H⁺ movement occurs via diffusive shuttling on mobile buffers, with little free H⁺ diffusion. The average diffusion constant for mobile buffer was estimated as 33 × 10⁻⁷ cm²/s, consistent with an apparent H⁺ diffusion coefficient, D_H(app), of 14.4 × 10⁻⁷ cm²/s (at pHᵢ 7.07), a value two orders of magnitude lower than for H⁺ ions in water but similar to that estimated recently from local acid injection via a cell-attached glass micropipette. We conclude that, because intracellular H⁺ mobility is so low, an extracellular concentration gradient of permeant weak acid readily induces pHᵢ nonuniformity. Similar concentration gradients for weak acid (e.g., CO₂) occur across border zones during regional myocardial ischemia, raising the possibility of steep pHᵢ gradients within the heart under some pathophysiological conditions.

  3. Assessing Binocular Interaction in Amblyopia and Its Clinical Feasibility

    PubMed Central

    Kwon, MiYoung; Lu, Zhong-Lin; Miller, Alexandra; Kazlas, Melanie; Hunter, David G.; Bex, Peter J.

    2014-01-01

    Purpose To measure binocular interaction in amblyopes using a rapid and patient-friendly computer-based method, and to test the feasibility of the assessment in the clinic. Methods Binocular interaction was assessed in subjects with strabismic amblyopia (n = 7), anisometropic amblyopia (n = 6), strabismus without amblyopia (n = 15) and normal vision (n = 40). Binocular interaction was measured with a dichoptic phase matching task in which subjects matched the position of a binocular probe to the cyclopean perceived phase of a dichoptic pair of gratings whose contrast ratios were systematically varied. The resulting effective contrast ratio of the weak eye was taken as an indicator of interocular imbalance. Testing was performed in an ophthalmology clinic and took under 8 minutes per subject. We examined the relationships between our binocular interaction measure and standard clinical measures indicating abnormal binocularity such as interocular acuity difference and stereoacuity. The test-retest reliability of the testing method was also evaluated. Results Compared to normally-sighted controls, amblyopes exhibited significantly reduced effective contrast (∼20%) of the weak eye, suggesting a higher contrast requirement for the amblyopic eye compared to the fellow eye. We found that the effective contrast ratio of the weak eye covaried with standard clinical measures of binocular vision. Our results showed that there was a high correlation between the 1st and 2nd measurements (r = 0.94, p<0.001) but without any significant bias between the two. Conclusions Our findings demonstrate that abnormal binocular interaction can be reliably captured by measuring the effective contrast ratio of the weak eye and that quantitative assessment of binocular interaction is a quick and simple test that can be performed in the clinic. We believe that reliable and timely assessment of deficits in binocular interaction may improve detection and treatment of amblyopia. PMID:24959842

  4. Experimental Blind Quantum Computing for a Classical Client.

    PubMed

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-04

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  5. Experimental Blind Quantum Computing for a Classical Client

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Zhao, Qi; Ma, Xiongfeng; Liu, Chang; Su, Zu-En; Wang, Xi-Lin; Li, Li; Liu, Nai-Le; Sanders, Barry C.; Lu, Chao-Yang; Pan, Jian-Wei

    2017-08-01

    To date, blind quantum computing demonstrations require clients to have weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. Via classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and thus is a key milestone towards secure cloud quantum computing.

  6. Cognitive flexibility in preschool children with and without stuttering disorders.

    PubMed

    Eichorn, Naomi; Marton, Klara; Pirutinsky, Steven

    2017-11-13

    Multifactorial explanations of developmental stuttering suggest that difficulties in self-regulation and weak attentional flexibility contribute to persisting stuttering. We tested this prediction by examining whether preschool-age children who stutter (CWS) shift their attention less flexibly than children who do not stutter (CWNS) during a modified version of the Dimensional Change Card Sort (DCCS), a reliable measure of attention switching for young children. Sixteen CWS (12 males) and 30 CWNS (11 males) participated in the study. Groups were matched on age (CWS: M=49.63, SD=10.34, range=38-80 months; CWNS: M=50.63, SD=9.82, range=37-74 months), cognitive ability, and language skills. All children completed a computer-based variation of the DCCS, in which they matched on-screen bivalent stimuli to response buttons based on rules that switched mid-task. Results showed increased slowing for CWS compared to controls during the postswitch phase, as well as contrasting patterns of speed-accuracy tradeoff for CWS and CWNS as they moved from the preswitch to the postswitch phase of the task. Group differences in performance suggest that early stuttering may be associated with difficulty shifting attention efficiently and greater concern about errors. Findings are consistent with a growing literature indicating links between weak attentional control and persisting developmental stuttering. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Weak Galerkin finite element methods for Darcy flow: Anisotropy and heterogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Liu, Jiangguo; Mu, Lin

    2014-11-01

    This paper presents a family of weak Galerkin finite element methods (WGFEMs) for Darcy flow computation. The WGFEMs are new numerical methods that rely on the novel concept of discrete weak gradients. The WGFEMs solve for pressure unknowns both in element interiors and on the mesh skeleton. The numerical velocity is then obtained from the discrete weak gradient of the numerical pressure. The new methods differ from many existing numerical methods in that they are locally conservative by design, the resulting discrete linear systems are symmetric and positive-definite, and there is no need for tuning problem-dependent penalty factors. We test the WGFEMs on benchmark problems to demonstrate the strong potential of these new methods in handling strong anisotropy and heterogeneity in Darcy flow.
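
    The discrete weak gradient at the heart of these methods has a standard definition in the weak Galerkin literature, which can be stated as follows (notation assumed, not taken from this abstract):

```latex
% A weak function v = \{v_0, v_b\} carries an interior value v_0 and a
% mesh-skeleton value v_b. On each element T, the discrete weak gradient
% \nabla_w v is the element of a local polynomial space V(T) satisfying
(\nabla_w v,\, \mathbf{q})_T
  = -\,(v_0,\, \nabla\cdot\mathbf{q})_T
  + \langle v_b,\, \mathbf{q}\cdot\mathbf{n} \rangle_{\partial T}
  \qquad \forall\, \mathbf{q} \in V(T).
% For Darcy flow with permeability K, the numerical velocity is then
% recovered elementwise as \mathbf{u}_h = -K \nabla_w p_h.
```

Integration by parts against the two-piece weak function is what lets the pressure live both inside elements and on the skeleton without penalty terms.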

  8. WiLE: A Mathematica package for weak coupling expansion of Wilson loops in ABJ(M) theory

    NASA Astrophysics Data System (ADS)

    Preti, M.

    2018-06-01

    We present WiLE, a Mathematica® package designed to perform the weak coupling expansion of any Wilson loop in ABJ(M) theory at arbitrary perturbative order. For a given set of fields on the loop and internal vertices, the package displays all the possible Feynman diagrams and their integral representations. The user can also choose to exclude non-planar diagrams, tadpoles and self-energies. Through the use of interactive input windows, the package should be easily accessible to users with little or no previous experience. The package manual provides some pedagogical examples and the computation of all three-loop ladder diagrams relevant for the cusp anomalous dimension in ABJ(M). The latter application also supports some recent results computed in different contexts.

  9. Using computer assisted learning for clinical skills education in nursing: integrative review.

    PubMed

    Bloomfield, Jacqueline G; While, Alison E; Roberts, Julia D

    2008-08-01

    This paper is a report of an integrative review of research investigating computer assisted learning for clinical skills education in nursing, the ways in which it has been studied and the general findings. Clinical skills are an essential aspect of nursing practice and there is international debate about the most effective ways in which these can be taught. Computer assisted learning has been used as an alternative to conventional teaching methods, and robust research to evaluate its effectiveness is essential. The CINAHL, Medline, BNI, PsycInfo and ERIC electronic databases were searched for the period 1997-2006 for research-based papers published in English. Electronic citation tracking and hand searching of reference lists and relevant journals was also undertaken. Twelve studies met the inclusion criteria. An integrative review was conducted and each paper was explored in relation to: design, aims, sample, outcome measures and findings. Many of the study samples were small and there were weaknesses in designs. There is limited empirical evidence addressing the use of computer assisted learning for clinical skills education in nursing. Computer assisted learning has been used to teach a limited range of clinical skills in a variety of settings. The paucity of evaluative studies indicates the need for more rigorous research to investigate the effect of computer assisted learning for this purpose. Areas that need to be addressed in future studies include: sample size, range of skills, longitudinal follow-up and control of confounding variables.

  10. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings, consisting of the periodic initial state and control inputs, are investigated for convergence of Newton iteration when the settings are computed sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods, one with a displacement formulation and one with a mixed formulation of displacements and momenta. These three methods broadly represent the two main approaches to trim analysis: adaptation of initial-value and of finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by the virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
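
    The damped Newton scheme referred to above takes only a fractional step along each Newton direction, trading speed for robustness on ill-conditioned systems. A generic sketch (the toy system and the fixed damping parameter are illustrative; the paper selects the damping optimally at each iteration):

```python
import numpy as np

def damped_newton(F, J, x0, lam=1.0, tol=1e-10, max_iter=50):
    """Damped Newton iteration x <- x - lam * J(x)^{-1} F(x) for F(x) = 0.
    lam = 1 recovers the undamped method; lam < 1 suppresses the divergence
    that full steps can cause far from the solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - lam * np.linalg.solve(J(x), fx)
    return x

# Toy periodic-trim-style residual: find x with F(x) = 0.
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
root = damped_newton(F, J, [2.0, 0.0], lam=0.7)
print(root)  # close to (1, 1)
```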

  11. Optical flip-flops and sequential logic circuits using a liquid crystal light valve

    NASA Technical Reports Server (NTRS)

    Fatehi, M. T.; Collins, S. A., Jr.; Wasmundt, K. C.

    1984-01-01

    This paper is concerned with the application of optics to digital computing. A Hughes liquid crystal light valve is used as an active optical element where a weak light beam can control a strong light beam with either a positive or negative gain characteristic. With this device as the central element the ability to produce bistable states from which different types of flip-flop can be implemented is demonstrated. In this paper, some general comments are first presented on digital computing as applied to optics. This is followed by a discussion of optical implementation of various types of flip-flop. These flip-flops are then used in the design of optical equivalents to a few simple sequential circuits such as shift registers and accumulators. As a typical sequential machine, a schematic layout for an optical binary temporal integrator is presented. Finally, a suggested experimental configuration for an optical master-slave flip-flop array is given.

  12. SPI/U3.1. Security Profile Inspector for UNIX Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, T.

    SPI/U3.1 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Inspector Tool: Evaluates the release status of system binaries by comparing a crypto-checksum against provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may also be run independently of the UI.
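
    The Binary Inspector Tool's check can be sketched as follows. SPI's actual checksum algorithm and table format are not specified in the abstract, so SHA-256 and a plain dictionary stand in here as assumptions.

```python
import hashlib
from pathlib import Path

def checksum(path):
    """Crypto-checksum of a file's contents (SHA-256 here; the
    algorithm SPI actually used is not stated in the abstract)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_binaries(table):
    """Compare current checksums of system binaries against a
    trusted table. Returns the paths whose contents no longer
    match, i.e. candidates for tampering or unexpected upgrades."""
    return [p for p, expected in table.items() if checksum(p) != expected]
```

    The same compare-against-snapshot idea underlies the Change Detection Tool, extended there to file attributes as well as contents.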

  13. SPI/U3.2. Security Profile Inspector for UNIX Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, Tony

    SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: Evaluates the release status of system binaries by comparing a crypto-checksum against provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may also be run independently of the UI.

  14. Development of a virtual speaking simulator using Image Based Rendering.

    PubMed

    Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I

    2002-01-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods share the weakness that the resulting scenes are unrealistic and cannot be controlled individually. To overcome these disadvantages, this paper presents a virtual environment produced by combining Image Based Rendering (IBR) with chroma-keying. IBR enables the creation of realistic virtual environments in which photos taken with a digital camera are stitched into a panorama. The use of chroma-keying places virtual audience members under individual control within the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
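
    The chroma-key step described above amounts to replacing key-coloured pixels with the panorama behind them. A minimal sketch, assuming RGB arrays and a simple colour-distance threshold (the paper's actual compositing pipeline and tolerances are not specified):

```python
import numpy as np

def chroma_key(foreground, background, key=(0, 255, 0), tol=60):
    """Composite two RGB images: wherever the foreground pixel is
    within `tol` of the key colour, show the background instead.
    In the simulator, `background` would be the IBR panorama and
    `foreground` the captured audience-member footage."""
    diff = np.linalg.norm(
        foreground.astype(float) - np.array(key, dtype=float), axis=-1)
    mask = diff < tol            # True where the key colour dominates
    out = foreground.copy()
    out[mask] = background[mask]
    return out
```

    Keying per audience member is what makes each virtual listener individually controllable, unlike a fixed movie backdrop.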

  15. SPI/U3.2. Security Profile Inspector for UNIX Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, A.

    1994-08-01

    SPI/U3.2 consists of five tools used to assess and report the security posture of computers running the UNIX operating system. The tools are: Access Control Test: A rule-based system which identifies sequential dependencies in UNIX access controls. Binary Authentication Tool: Evaluates the release status of system binaries by comparing a crypto-checksum against provided table entries. Change Detection Tool: Maintains and applies a snapshot of critical system files and attributes for purposes of change detection. Configuration Query Language: Accepts CQL-based scripts (provided) to evaluate queries over the status of system files, configuration of services and many other elements of UNIX system security. Password Security Inspector: Tests for weak or aged passwords. The tools are packaged with a forms-based user interface providing on-line context-sensitive help, job scheduling, parameter management and output report management utilities. Tools may also be run independently of the UI.

  16. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    NASA Technical Reports Server (NTRS)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  17. Fuzzy Sarsa with Focussed Replacing Eligibility Traces for Robust and Accurate Control

    NASA Astrophysics Data System (ADS)

    Kamdem, Sylvain; Ohki, Hidehiro; Sueda, Naomichi

    Several methods of reinforcement learning in continuous state and action spaces that utilize fuzzy logic have been proposed in recent years. This paper introduces Fuzzy Sarsa(λ), an on-policy algorithm for fuzzy learning that relies on a novel way of computing replacing eligibility traces to accelerate the policy evaluation. It is tested against several temporal difference learning algorithms: Sarsa(λ), Fuzzy Q(λ), an earlier fuzzy version of Sarsa, and an actor-critic algorithm. We perform detailed evaluations on two benchmark problems: a maze domain and the cart pole. Results of various tests highlight the strengths and weaknesses of these algorithms and show that Fuzzy Sarsa(λ) outperforms all other algorithms tested for a larger granularity of design and under noisy conditions. It is a highly competitive method of learning in realistic noisy domains where a denser fuzzy design over the state space is needed for more precise control.
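
    The replacing-trace mechanism that Fuzzy Sarsa(λ) generalises can be seen in its crisp tabular ancestor. Below is a sketch of standard Sarsa(λ) with replacing eligibility traces on a toy corridor task; the fuzzy variant instead spreads the trace over the activated fuzzy rules, and all parameter values here are illustrative.

```python
import numpy as np

def sarsa_lambda(step, n_states, n_actions, episodes=300,
                 alpha=0.2, gamma=0.95, lam=0.9, eps=0.3, seed=0):
    """Tabular Sarsa(lambda) with *replacing* eligibility traces."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        e = np.zeros_like(Q)                  # eligibility traces
        s, a = 0, eps_greedy(0)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = eps_greedy(s2)
            delta = r + (0.0 if done else gamma * Q[s2, a2]) - Q[s, a]
            e *= gamma * lam                  # decay all traces
            e[s, :] = 0.0                     # replacing trace: clear the
            e[s, a] = 1.0                     # state, then mark the action
            Q += alpha * delta * e            # credit recent (s, a) pairs
            s, a = s2, a2
    return Q

# 5-state corridor: action 1 moves right toward the goal at state 4.
def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = sarsa_lambda(step, n_states=5, n_actions=2)
```

    Replacing traces reset each revisited state's trace to one instead of accumulating it, which speeds credit assignment along the recent trajectory; the paper's contribution is how to do this over fuzzy rule activations.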

  18. Electrically driven spin qubit based on valley mixing

    NASA Astrophysics Data System (ADS)

    Huang, Wister; Veldhorst, Menno; Zimmerman, Neil M.; Dzurak, Andrew S.; Culcer, Dimitrie

    2017-02-01

    The electrical control of single spin qubits based on semiconductor quantum dots is of great interest for scalable quantum computing since electric fields provide an alternative mechanism for qubit control compared with magnetic fields and can also be easier to produce. Here we outline the mechanism for a drastic enhancement in the electrically-driven spin rotation frequency for silicon quantum dot qubits in the presence of a step at a heterointerface. The enhancement is due to the strong coupling between the ground and excited states which occurs when the electron wave function overcomes the potential barrier induced by the interface step. We theoretically calculate single qubit gate times tπ of 170 ns for a quantum dot confined at a silicon/silicon-dioxide interface. The engineering of such steps could be used to achieve fast electrical rotation and entanglement of spin qubits despite the weak spin-orbit coupling in silicon.
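
    As a back-of-envelope check, a π-rotation time t_π corresponds to a Rabi frequency of 1/(2 t_π), since a π pulse lasts half a Rabi cycle; for the quoted 170 ns this gives roughly 3 MHz. The relation, not the paper's detailed calculation:

```python
t_pi = 170e-9                  # pi-rotation (single-qubit X gate) time, s
f_rabi = 1.0 / (2.0 * t_pi)    # a pi pulse is half a Rabi period
print(f"implied EDSR Rabi frequency ~ {f_rabi / 1e6:.1f} MHz")
```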

  19. Weak Compactness and Control Measures in the Space of Unbounded Measures

    PubMed Central

    Brooks, James K.; Dinculeanu, Nicolae

    1972-01-01

    We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980

  20. Steady States, Fluctuation-Dissipation Theorems and Homogenization for Reversible Diffusions in a Random Environment

    NASA Astrophysics Data System (ADS)

    Mathieu, P.; Piatnitski, A.

    2018-04-01

    Prolongating our previous paper on the Einstein relation, we study the motion of a particle diffusing in a random reversible environment when subject to a small external forcing. In order to describe the long time behavior of the particle, we introduce the notions of steady state and weak steady state. We establish the continuity of weak steady states for an ergodic and uniformly elliptic environment. When the environment has finite range of dependence, we prove the existence of the steady state and weak steady state and compute its derivative at a vanishing force. Thus we obtain a complete `fluctuation-dissipation Theorem' in this context as well as the continuity of the effective variance.

  1. Optical study of the DAFT/FADA galaxy cluster survey

    NASA Astrophysics Data System (ADS)

    Martinet, N.; Durret, F.; Clowe, D.; Adami, C.

    2013-11-01

    DAFT/FADA (Dark energy American French Team) is a large survey of ˜90 high-redshift (0.4 < z < 0.9), massive (M > 2×10^{14} M_{⊙}) clusters with weak-lensing-oriented HST data, plus BVRIZJ follow-up on 4 m ground-based telescopes to compute photometric redshifts. The main goals of this survey are to constrain dark energy parameters using weak lensing tomography and to study a large homogeneous sample of high redshift massive clusters. We will briefly review the latest results of this optical survey, focusing on two ongoing works: the calculation of galaxy luminosity functions from photometric redshift catalogs and the weak lensing analysis of ground based data.

  2. The effects of self-control, gang membership, and parental attachment/identification on police contacts among Latino and African American youths.

    PubMed

    Flexon, Jamie L; Greenleaf, Richard G; Lurigio, Arthur J

    2012-04-01

    This study assessed the correlates of self-control and police contact in a sample of Chicago public high school students. The investigation examined the effects of parental attachment/identification, family structure, and peer association on self-control and the effects of parental attachment/identification, family structure, peer association, and self-control on police contact. Differences between African American and Latino youth on the predictors of the two dependent measures were tested in separate regression models. Weak parental attachment/identification and gang affiliation (peer association) predicted low self-control among all students. Among African American youth, only weak maternal attachment/identification predicted low self-control; both weak maternal attachment/identification and gang affiliation predicted low self-control among Latino youth. Gang affiliation predicted police stops (delinquency) among African Americans but not among Latinos. However, both African American and Latino students with lower self-control were more likely to be stopped by the police than those with higher self-control.

  3. Nested Incremental Modeling in the Development of Computational Theories: The CDP+ Model of Reading Aloud

    ERIC Educational Resources Information Center

    Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco

    2007-01-01

    At least 3 different types of computational model have been shown to account for various facets of both normal and impaired single word reading: (a) the connectionist triangle model, (b) the dual-route cascaded model, and (c) the connectionist dual process model. Major strengths and weaknesses of these models are identified. In the spirit of…

  4. Stationary Apparatus Would Apply Forces of Walking to Feet

    NASA Technical Reports Server (NTRS)

    Hauss, Jessica; Wood, John; Budinoff, Jason; Correia, Michael; Albrecht, Rudolf

    2006-01-01

    A proposed apparatus would apply controlled cyclic forces to both feet for the purpose of preventing the loss of bone density in a human subject whose bones are not subjected daily to the mechanical loads of normal activity in normal Earth gravitation. The apparatus was conceived for use by astronauts on long missions in outer space; it could also be used by bedridden patients on Earth, including patients too weak to generate the necessary forces by their own efforts. The apparatus (see figure) would be a modified version of a bicycle-like exercise machine, called the cycle ergometer with vibration isolation system (CEVIS), now aboard the International Space Station. Attached to each CEVIS pedal would be a computer-controlled stress/ vibration exciter connected to the heel portion of a special-purpose pedal. The user would wear custom shoes that would amount to standard bicycle shoes equipped with cleats for secure attachment of the balls of the feet to the special- purpose pedals. If possible, prior to use of the apparatus, the human subject would wear a portable network of recording accelerometers, while walking, jogging, and running. The information thus gathered would be fed to the computer, wherein it would be used to make the exciters apply forces and vibrations closely approximating the forces and vibrations experienced by that individual during normal exercise. It is anticipated that like the forces applied to bones during natural exercise, these artificial forces would stimulate the production of osteoblasts (bone-forming cells), as needed to prevent or retard loss of bone mass. In addition to helping to prevent deterioration of bones, the apparatus could be used in treating a person already suffering from osteoporosis. 
For this purpose, the magnitude of the applied forces could be reduced, if necessary, to a level at which weak hip and leg bones would still be stimulated to produce osteoblasts without exposing them to the full stresses of walking and thereby risking fracture.

  5. ASIS v1.0: an adaptive solver for the simulation of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Cariolle, Daniel; Moinat, Philippe; Teyssèdre, Hubert; Giraud, Luc; Josse, Béatrice; Lefèvre, Franck

    2017-04-01

    This article reports on the development and tests of the adaptive semi-implicit scheme (ASIS) solver for the simulation of atmospheric chemistry. To solve the ordinary differential equation systems associated with the time evolution of the species concentrations, ASIS adopts a one-step linearized implicit scheme with specific treatments of the Jacobian of the chemical fluxes. It conserves mass and has a time-stepping module to control the accuracy of the numerical solution. In idealized box-model simulations, ASIS gives results similar to the higher-order implicit schemes derived from the Rosenbrock's and Gear's methods and requires less computation and run time at the moderate precision required for atmospheric applications. When implemented in the MOCAGE chemical transport model and the Laboratoire de Météorologie Dynamique Mars general circulation model, the ASIS solver performs well and reveals weaknesses and limitations of the original semi-implicit solvers used by these two models. ASIS can be easily adapted to various chemical schemes and further developments are foreseen to increase its computational efficiency, and to include the computation of the concentrations of the species in aqueous-phase in addition to gas-phase chemistry.
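
    The one-step linearized implicit scheme at the core of ASIS can be sketched as a Rosenbrock-type update (I − hJ)Δy = h f(y). The two-species toy mechanism below is illustrative only, and shows the exact mass conservation the abstract mentions; ASIS's flux-specific Jacobian treatment and adaptive time-stepping control are omitted.

```python
import numpy as np

def linearized_implicit_step(f, jac, y, h):
    """One step of a one-step linearized implicit scheme:
    solve (I - h*J(y)) dy = h*f(y), then y <- y + dy."""
    J = jac(y)
    dy = np.linalg.solve(np.eye(len(y)) - h * J, h * f(y))
    return y + dy

# Stiff two-species exchange A <-> B; total mass y[0] + y[1] is conserved
# because the columns of the Jacobian sum to zero (flux form).
k1, k2 = 50.0, 1.0
f = lambda y: np.array([-k1 * y[0] + k2 * y[1], k1 * y[0] - k2 * y[1]])
jac = lambda y: np.array([[-k1, k2], [k1, -k2]])

y = np.array([1.0, 0.0])
for _ in range(100):
    y = linearized_implicit_step(f, jac, y, h=0.05)
```

    For this linear system the scheme reduces to backward Euler: it remains stable at step sizes far beyond the explicit stability limit 1/(k1 + k2), which is the point of implicit solvers for stiff atmospheric chemistry.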

  6. CALCLENS: Weak lensing simulations for large-area sky surveys and second-order effects in cosmic shear power spectra

    NASA Astrophysics Data System (ADS)

    Becker, Matthew Rand

    I present a new algorithm, CALCLENS, for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲ 1%) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.

  7. CALCLENS: weak lensing simulations for large-area sky surveys and second-order effects in cosmic shear power spectra

    NASA Astrophysics Data System (ADS)

    Becker, Matthew R.

    2013-10-01

    I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (˜10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
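
    The spherical-harmonic half of the hybrid Poisson solver rests on the fact that each Y_lm is an eigenfunction of the sphere's Laplacian with eigenvalue −ℓ(ℓ+1). A minimal sketch in coefficient space; the flat (ℓ, m) ordering used here and the omission of the multigrid refinement stage are simplifications, not CALCLENS's actual data layout.

```python
import numpy as np

def poisson_sphere_harmonic(delta_lm, lmax):
    """Solve lap(phi) = delta on the unit sphere in harmonic space:
    phi_lm = -delta_lm / (l*(l+1)) for l > 0 (the l = 0 monopole has
    no bounded solution and is conventionally set to zero).

    delta_lm: flat array of coefficients ordered l = 0..lmax,
    m = -l..l (an illustrative ordering)."""
    phi_lm = np.zeros_like(delta_lm, dtype=complex)
    idx = 0
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            if l > 0:
                phi_lm[idx] = -delta_lm[idx] / (l * (l + 1))
            idx += 1
    return phi_lm
```

    In practice one transforms the density to harmonic space, divides as above, and transforms back, with multigrid used to sharpen the solution locally at lower cost than raising lmax everywhere.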

  8. A full computation-relevant topological dynamics classification of elementary cellular automata.

    PubMed

    Schüle, Martin; Stoop, Ruedi

    2012-12-01

    Cellular automata are both computational and dynamical systems. We give a complete classification of the dynamic behaviour of elementary cellular automata (ECA) in terms of fundamental dynamic system notions such as sensitivity and chaoticity. The "complex" ECA emerge to be sensitive, but not chaotic and not eventually weakly periodic. Based on this classification, we conjecture that elementary cellular automata capable of carrying out complex computations, such as needed for Turing-universality, are at the "edge of chaos."
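
    An elementary cellular automaton update is small enough to sketch directly. Rule 110, one of the "complex" ECA discussed above, can be stepped like this; the periodic boundary condition is an assumption of this sketch.

```python
import numpy as np

def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton.
    `rule` is the Wolfram rule number (0-255); bit n of `rule` gives
    the successor for the 3-cell neighbourhood with value n."""
    table = [(rule >> n) & 1 for n in range(8)]
    left = np.roll(cells, 1)            # periodic boundary
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right  # neighbourhood as a 3-bit number
    return np.array([table[v] for v in idx], dtype=np.uint8)

# Rule 110 from a single live cell -- the classic "complex" ECA.
state = np.zeros(16, dtype=np.uint8)
state[8] = 1
history = [state.copy()]
for _ in range(5):
    state = eca_step(state, 110)
    history.append(state.copy())
```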

  9. Developing computer training programs for blood bankers.

    PubMed

    Eisenbrey, L

    1992-01-01

    Two surveys were conducted in July 1991 to gather information about computer training currently performed within American Red Cross Blood Services Regions. One survey was completed by computer trainers from software developer-vendors and regional centers. The second survey was directed to the trainees, to determine their perception of the computer training. The surveys identified the major concepts, length of training, evaluations, and methods of instruction used. Strengths and weaknesses of training programs were highlighted by trainee respondents. Using the survey information and other sources, recommendations (including those concerning which computer skills and tasks should be covered) are made that can be used as guidelines for developing comprehensive computer training programs at any blood bank or blood center.

  10. Electrical control of single hole spins in nanowire quantum dots.

    PubMed

    Pribiag, V S; Nadj-Perge, S; Frolov, S M; van den Berg, J W G; van Weperen, I; Plissard, S R; Bakkers, E P A M; Kouwenhoven, L P

    2013-03-01

    The development of viable quantum computation devices will require the ability to preserve the coherence of quantum bits (qubits). Single electron spins in semiconductor quantum dots are a versatile platform for quantum information processing, but controlling decoherence remains a considerable challenge. Hole spins in III-V semiconductors have unique properties, such as a strong spin-orbit interaction and weak coupling to nuclear spins, and therefore, have the potential for enhanced spin control and longer coherence times. A weaker hyperfine interaction has previously been reported in self-assembled quantum dots using quantum optics techniques, but the development of hole-spin-based electronic devices in conventional III-V heterostructures has been limited by fabrication challenges. Here, we show that gate-tunable hole quantum dots can be formed in InSb nanowires and used to demonstrate Pauli spin blockade and electrical control of single hole spins. The devices are fully tunable between hole and electron quantum dots, which allows the hyperfine interaction strengths, g-factors and spin blockade anisotropies to be compared directly in the two regimes.

  11. Using an innovative multiple regression procedure in a cancer population (Part 1): detecting and probing relationships of common interacting symptoms (pain, fatigue/weakness, sleep problems) as a strategy to discover influential symptom pairs and clusters

    PubMed Central

    Francoeur, Richard B

    2015-01-01

    Background The majority of patients with advanced cancer experience symptom pairs or clusters among pain, fatigue, and insomnia. Improved methods are needed to detect and interpret interactions among symptoms or disease markers to reveal influential pairs or clusters. In prior work, I developed and validated sequential residual centering (SRC), a method that improves the sensitivity of multiple regression to detect interactions among predictors, by conditioning for multicollinearity (shared variation) among interactions and component predictors. Materials and methods Using a hypothetical three-way interaction among pain, fatigue, and sleep to predict depressive affect, I derive and explain SRC multiple regression. Subsequently, I estimate raw and SRC multiple regressions using real data for these symptoms from 268 palliative radiation outpatients. Results Unlike raw regression, SRC reveals that the three-way interaction (pain × fatigue/weakness × sleep problems) is statistically significant. In follow-up analyses, the relationship between pain and depressive affect is aggravated (magnified) within two partial ranges: 1) complete-to-some control over fatigue/weakness when there is complete control over sleep problems (ie, a subset of the pain–fatigue/weakness symptom pair), and 2) no control over fatigue/weakness when there is some-to-no control over sleep problems (ie, a subset of the pain–fatigue/weakness–sleep problems symptom cluster). Otherwise, the relationship weakens (buffering) as control over fatigue/weakness or sleep problems diminishes. Conclusion By reducing the standard error, SRC unmasks a three-way interaction comprising a symptom pair and cluster. Low-to-moderate levels of the moderator variable for fatigue/weakness magnify the relationship between pain and depressive affect. However, when the comoderator variable for sleep problems accompanies fatigue/weakness, only frequent or unrelenting levels of both symptoms magnify the relationship. 
These findings suggest that a countervailing mechanism involving depressive affect could account for the effectiveness of a cognitive behavioral intervention to reduce the severity of a pain, fatigue, and sleep disturbance cluster in a previous randomized trial. PMID:25565865
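
    The core residual-centering move behind SRC can be sketched in one stage: regress an interaction term on its lower-order components and keep the residual, which is orthogonal to them. SRC proper applies this sequentially across all interactions; the variable names and simulated data below are purely illustrative.

```python
import numpy as np

def residual_center(z, X):
    """Replace term z with the residual from regressing z on the
    lower-order terms X (plus an intercept). The residual carries
    z's unique variation while being orthogonal to X, which is the
    multicollinearity-conditioning idea behind SRC."""
    A = np.column_stack([np.ones(len(z)), X])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - A @ beta

# Illustrative data standing in for two symptom ratings.
rng = np.random.default_rng(1)
pain, fatigue = rng.normal(size=(2, 200))
inter = pain * fatigue
inter_rc = residual_center(inter, np.column_stack([pain, fatigue]))
```

    Entering `inter_rc` instead of the raw product into the regression leaves the lower-order coefficients untouched while shrinking the standard error of the interaction term, which is how the three-way effect becomes detectable.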

  12. Asymmetric Weakness and West Nile Virus Infection.

    PubMed

    Kuo, Dick C; Bilal, Saadiyah; Koller, Paul

    2015-09-01

    Weakness is a common presentation in the emergency department (ED). Asymmetric weakness, or weakness that appears not to follow an anatomical pattern, is a less common occurrence. Acute flaccid paralysis with no signs of meningoencephalitis is one of the more uncommon presentations of West Nile virus (WNV). Patients may complain of an acute onset of severe weakness, or even paralysis, in one or multiple limbs with no sensory deficits. This weakness is caused by injury to the anterior horn cells of the spinal cord. We present a case of acute asymmetric flaccid paralysis with preserved sensory responses that was eventually diagnosed as neuroinvasive WNV infection. A 31-year-old male with no medical history presented with complaints of left lower and right upper extremity weakness. Computed tomography scan was negative and multiple other studies were performed in the ED. Eventually, he was admitted to the hospital and was found to have decreased motor amplitudes, severely reduced motor neuron recruitment, and denervation on electrodiagnostic study. Cerebrospinal fluid specimen tested positive for WNV immunoglobulin (Ig) G and IgM antibodies. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Acute asymmetric flaccid paralysis with no signs of viremia or meningoencephalitis is an unusual presentation of WNV infection. WNV should be included in the differential for patients with asymmetric weakness, especially in the summer months in areas with large mosquito populations.

  13. Distinguishing Motor Weakness From Impaired Spatial Awareness: A Helping Hand!

    PubMed

    Raju, Suneil A; Swift, Charles R; Bardhan, Karna Dev

    2017-01-01

    Our patient, aged 73 years, had background peripheral neuropathy of unknown cause, stable for several years, which caused some difficulty in walking on uneven ground. He attended for a teaching session but now staggered in, a new development. He had apparent weakness of his right arm, but there was difficulty in distinguishing motor weakness from impaired spatial awareness suggestive of parietal lobe dysfunction. With the patient seated, eyes closed, and left arm outstretched, S.A.R. lifted the patient's right arm and asked him to indicate when both were level. This confirmed motor weakness. Urgent computed tomographic scan confirmed left subdural haematoma and its urgent evacuation rapidly resolved the patient's symptoms. Intrigued by our patient's case, we explored further and learnt that in rehabilitation medicine, the awareness of limb position is commonly viewed in terms of joint position sense. We present recent literature evidence indicating that the underlying mechanisms are more subtle.

  14. Distinguishing Motor Weakness From Impaired Spatial Awareness: A Helping Hand!

    PubMed Central

    Raju, Suneil A; Swift, Charles R; Bardhan, Karna Dev

    2017-01-01

    Our patient, aged 73 years, had background peripheral neuropathy of unknown cause, stable for several years, which caused some difficulty in walking on uneven ground. He attended for a teaching session but now staggered in, a new development. He had apparent weakness of his right arm, but there was difficulty in distinguishing motor weakness from impaired spatial awareness suggestive of parietal lobe dysfunction. With the patient seated, eyes closed, and left arm outstretched, S.A.R. lifted the patient’s right arm and asked him to indicate when both were level. This confirmed motor weakness. Urgent computed tomographic scan confirmed left subdural haematoma and its urgent evacuation rapidly resolved the patient’s symptoms. Intrigued by our patient’s case, we explored further and learnt that in rehabilitation medicine, the awareness of limb position is commonly viewed in terms of joint position sense. We present recent literature evidence indicating that the underlying mechanisms are more subtle. PMID:28579860

  15. Decoding subtle forearm flexions using fractal features of surface electromyogram from single and multiple sensors.

    PubMed

    Arjunan, Sridhar Poosapadi; Kumar, Dinesh Kant

    2010-10-21

    Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexion, overcoming the earlier shortcomings. The sEMG signal was recorded when the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL), and the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters with the wrist and finger flexions. Classification accuracy was also computed using the trained artificial neural network (ANN) classifier to decode the desired subtle movements. The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001 while that of various combinations of the five established features ranged between 0.009 - 0.0172. 
From the accuracy of classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using a combination of other features ranged between 58% and 73%. The results show that the MFL and FD of a single channel sEMG recorded from the forearm can be used to accurately identify a set of finger and wrist flexions even when the muscle activity is very weak. A comparison with other features demonstrates that this feature set offers a dramatic improvement in the accuracy of identification of the wrist and finger movements. It is proposed that such a system could be used to control a prosthetic hand or for a human computer interface.
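
    The fractal features can be sketched with the Higuchi method, a common choice for sEMG fractal analysis. Treating FD as the log-log slope of curve length versus scale, and MFL as the log length at the finest scale, is one plausible reading of the paper's features, not a confirmed definition.

```python
import numpy as np

def higuchi(x, kmax=8):
    """Higuchi curve-length analysis of a 1-D signal.

    Returns (fd, mfl): fd is the slope of log L(k) vs log (1/k),
    and mfl is taken here as log10 of the curve length at the
    finest scale k = 1 (an assumed reading of 'maximum fractal
    length')."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # k interleaved subseries
            idxs = np.arange(m, N, k)
            if len(idxs) < 2:
                continue
            dist = np.abs(np.diff(x[idxs])).sum()
            norm = (N - 1) / (k * (len(idxs) - 1))  # length normalisation
            Lk.append(dist * norm / k)
        L.append(np.mean(Lk))
    k = np.arange(1, kmax + 1)
    fd = np.polyfit(np.log(1.0 / k), np.log(L), 1)[0]
    mfl = np.log10(L[0])
    return fd, mfl
```

    A smooth ramp gives FD near 1 and broadband noise gives FD near 2, which is why FD separates weak-but-structured muscle activity from background interference better than amplitude features such as RMS.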

  16. Decoding subtle forearm flexions using fractal features of surface electromyogram from single and multiple sensors

    PubMed Central

    2010-01-01

    Background Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for the elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active, such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexions, overcoming the earlier shortcomings. Methods The sEMG signal was recorded while participants maintained pre-specified wrist and finger flexions for a period of time. Established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL), along with the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationship between each of these parameters and the wrist and finger flexions. Classification accuracy was also computed using a trained artificial neural network (ANN) classifier to decode the desired subtle movements. Results The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001, while that of various combinations of the five established features ranged between 0.009 and 0.0172.
In the ANN classification, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using combinations of the other features ranged between 58% and 73%. Conclusions The results show that the MFL and FD of a single channel sEMG recorded from the forearm can be used to accurately identify a set of finger and wrist flexions even when the muscle activity is very weak. A comparison with other features demonstrates that this feature set offers a dramatic improvement in the accuracy of identification of the wrist and finger movements. It is proposed that such a system could be used to control a prosthetic hand or for a human computer interface. PMID:20964863
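The established time-domain features named in the record above (RMS, MAV, VAR, WL) have standard definitions. A minimal sketch in Python (an illustration of the textbook formulas, not the authors' implementation):

```python
import numpy as np

def semg_features(x):
    """Established time-domain sEMG features for a 1-D sample window x."""
    rms = np.sqrt(np.mean(x ** 2))       # root mean square (RMS)
    mav = np.mean(np.abs(x))             # mean absolute value (MAV)
    var = np.var(x, ddof=1)              # sample variance (VAR)
    wl = np.sum(np.abs(np.diff(x)))      # waveform length (WL)
    return {"RMS": rms, "MAV": mav, "VAR": var, "WL": wl}
```

The fractal features (FD, MFL) require a curve-length analysis of the signal across scales and are not shown here.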

  17. Concepts, Control, and Context: A Connectionist Account of Normal and Disordered Semantic Cognition

    PubMed Central

    2018-01-01

    Semantic cognition requires conceptual representations shaped by verbal and nonverbal experience and executive control processes that regulate activation of knowledge to meet current situational demands. A complete model must also account for the representation of concrete and abstract words, of taxonomic and associative relationships, and for the role of context in shaping meaning. We present the first major attempt to assimilate all of these elements within a unified, implemented computational framework. Our model combines a hub-and-spoke architecture with a buffer that allows its state to be influenced by prior context. This hybrid structure integrates the view, from cognitive neuroscience, that concepts are grounded in sensory-motor representation with the view, from computational linguistics, that knowledge is shaped by patterns of lexical co-occurrence. The model successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space. Knowledge of abstract words is acquired through (a) their patterns of co-occurrence with other words and (b) acquired embodiment, whereby they become indirectly associated with the perceptual features of co-occurring concrete words. The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships. The representational and control elements of the model can be damaged independently, and the consequences of such damage closely replicate effects seen in neuropsychological patients with loss of semantic representation versus control processes. Thus, the model provides a wide-ranging and neurally plausible account of normal and impaired semantic cognition. PMID:29733663

  18. Measuring radio-signal power accurately

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Newton, J. W.; Winkelstein, R. A.

    1979-01-01

    The absolute value of signal power in weak radio signals is determined by computer-aided measurements. The equipment averages the received signal over a several-minute period and compares the average value with the previously calibrated noise level of the receiver.
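The averaging procedure described above can be sketched as follows (a hypothetical illustration with assumed names and units, not the NASA implementation): average the received power over the window, then subtract the calibrated receiver noise level.

```python
import numpy as np

def average_signal_power(samples, noise_power):
    """Estimate signal power by long-term averaging (illustrative sketch).

    samples: complex baseband samples collected over a several-minute window
    noise_power: previously calibrated receiver noise power (same units)
    """
    total = np.mean(np.abs(samples) ** 2)   # average received power
    return max(total - noise_power, 0.0)    # remove the calibrated noise floor
```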

  19. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)

  20. Computer-aided drug discovery.

    PubMed

    Bajorath, Jürgen

    2015-01-01

    Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.

  1. User's Guide to Computing High School Graduation Rates. Volume 2. Technical Report: Technical Evaluation of Proxy Graduation Indicators. NCES 2006-605

    ERIC Educational Resources Information Center

    Seastrom, Marilyn M.; Chapman, Chris; Stillwell, Robert; McGrath, Daniel; Peltola, Pia; Dinkes, Rachel; Xu, Zeyu

    2006-01-01

    This report consists of two volumes, the first takes an in-depth look at the various graduation indicators, with a description of the computational formulas, the data required for each indicator, the assumptions underlying each formula, the strengths and weaknesses of each indicator, and a consideration of the conditions under which each indicator…

  2. User's Guide to Computing High School Graduation Rates. Volume 1. Technical Report: Review of Current and Proposed Graduation Indicators. NCES 2006-604

    ERIC Educational Resources Information Center

    Seastrom, Marilyn M.; Chapman, Chris; Stillwell, Robert; McGrath, Daniel; Peltola, Pia; Dinkes, Rachel; Xu, Zeyu

    2006-01-01

    The first volume of this report examines the existing measures of high school completion and the newly proposed proxy measures. This includes a description of the computational formulas, the data required for each indicator, the assumptions underlying each formula, the strengths and weaknesses of each indicator relative to a true cohort on-time…

  3. In the Mind's Eye: Visual Thinkers, Gifted People with Dyslexia and Other Learning Difficulties, Computer Images and the Ironies of Creativity. Updated Edition.

    ERIC Educational Resources Information Center

    West, Thomas G.

    This book presents research on how some innovations in computer visualization are making work and education more favorable to visual thinking. The book exposes many popular myths about conventional intelligence through an examination of the role of visual-spatial strengths and verbal weaknesses in the lives of 11 gifted individuals, including…

  4. Validation of two-phase CFD models for propellant tank self-pressurization: Crossing fluid types, scales, and gravity levels

    NASA Astrophysics Data System (ADS)

    Kassemi, Mohammad; Kartuzova, Olga; Hylton, Sonya

    2018-01-01

    This paper examines our computational ability to capture the transport and phase change phenomena that govern cryogenic storage tank pressurization and underscores our strengths and weaknesses in this area in terms of three computational-experimental validation case studies. In the first study, 1g pressurization of a simulant low-boiling point fluid in a small scale transparent tank is considered in the context of the Zero-Boil-Off Tank (ZBOT) Experiment to showcase the relatively strong capability that we have developed in modelling the coupling between the convective transport and stratification in the bulk phases with the interfacial evaporative and condensing heat and mass transfer that ultimately control self-pressurization in the storage tank. Here, we show that computational predictions exhibit excellent temporal and spatial fidelity under the moderate Ra number - high Bo number convective-phase distribution regimes. In the second example, we focus on 1g pressurization and pressure control of the large-scale K-site liquid hydrogen tank experiment where we show that by crossing fluid types and physical scales, we enter into high Bo number - high Ra number flow regimes that challenge our ability to predict turbulent heat and mass transfer and their impact on the tank pressurization correctly, especially, in the vapor domain. In the final example, we examine pressurization results from the small scale simulant fluid Tank Pressure Control Experiment (TCPE) performed in microgravity to underscore the fact that in crossing into a low Ra number - low Bo number regime in microgravity, the temporal evolution of the phase front as affected by the time-dependent residual gravity and impulse accelerations becomes an important consideration. In this case detailed acceleration data are needed to predict the correct rate of tank self-pressurization.
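The flow regimes discussed above are organized by the Rayleigh (Ra) and Bond (Bo) numbers. Their textbook definitions, shown here as a sketch (the paper's actual nondimensionalization may differ), make clear why crossing fluid types, scales, and gravity levels moves a tank between regimes:

```python
def rayleigh_number(g, beta, dT, L, nu, alpha):
    """Ra = g*beta*dT*L^3 / (nu*alpha): buoyancy-driven convection
    versus thermal/viscous diffusion."""
    return g * beta * dT * L ** 3 / (nu * alpha)

def bond_number(rho, g, L, sigma):
    """Bo = rho*g*L^2 / sigma: gravity versus surface tension,
    which sets the phase distribution in microgravity."""
    return rho * g * L ** 2 / sigma
```

Both numbers scale with gravity g and with powers of the tank size L, so a large 1g hydrogen tank (high Ra, high Bo) and a small microgravity tank (low Ra, low Bo) sit in very different regimes.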

  5. Modes of self-organization of diluted bubbly liquids in acoustic fields: One-dimensional theory.

    PubMed

    Gumerov, Nail A; Akhatov, Iskander S

    2017-02-01

    The paper is dedicated to mathematical modeling of self-organization of bubbly liquids in acoustic fields. A continuum model describing the two-way interaction of diluted polydisperse bubbly liquids and acoustic fields in the weakly-nonlinear approximation is studied analytically and numerically in the one-dimensional case. It is shown that the regimes of self-organization of monodisperse bubbly liquids can be controlled by only a few dimensionless parameters. Two basic modes, clustering and propagating shock waves of void fraction (acoustically induced transparency), are identified and criteria for their realization in the space of parameters are proposed. A numerical method for solving one-dimensional self-organization problems is developed. Computational results for mono- and polydisperse systems are discussed.

  6. Noise-Resilient Quantum Computing with a Nitrogen-Vacancy Center and Nuclear Spins.

    PubMed

    Casanova, J; Wang, Z-Y; Plenio, M B

    2016-09-23

    Selective control of qubits in a quantum register for the purposes of quantum information processing represents a critical challenge for dense spin ensembles in solid-state systems. Here we present a protocol that achieves a complete set of selective electron-nuclear gates and single nuclear rotations in such an ensemble in diamond facilitated by a nearby nitrogen-vacancy (NV) center. The protocol suppresses internuclear interactions as well as unwanted coupling between the NV center and other spins of the ensemble to achieve quantum gate fidelities well exceeding 99%. Notably, our method can be applied to weakly coupled, distant spins representing a scalable procedure that exploits the exceptional properties of nuclear spins in diamond as robust quantum memories.

  7. Specific and Non-Specific Protein Association in Solution: Computation of Solvent Effects and Prediction of First-Encounter Modes for Efficient Configurational Bias Monte Carlo Simulations

    PubMed Central

    Cardone, Antonio; Pant, Harish; Hassan, Sergio A.

    2013-01-01

    Weak and ultra-weak protein-protein association play a role in molecular recognition, and can drive spontaneous self-assembly and aggregation. Such interactions are difficult to detect experimentally, and are a challenge to the force field and sampling technique. A method is proposed to identify low-population protein-protein binding modes in aqueous solution. The method is designed to identify preferential first-encounter complexes from which the final complex(es) at equilibrium evolves. A continuum model is used to represent the effects of the solvent, which accounts for short- and long-range effects of water exclusion and for liquid-structure forces at protein/liquid interfaces. These effects control the behavior of proteins in close proximity and are optimized based on binding enthalpy data and simulations. An algorithm is described to construct a biasing function for self-adaptive configurational-bias Monte Carlo of a set of interacting proteins. The function allows mixing large and local changes in the spatial distribution of proteins, thereby enhancing sampling of relevant microstates. The method is applied to three binary systems. Generalization to multiprotein complexes is discussed. PMID:24044772
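In configurational-bias Monte Carlo of the kind described above, trial moves are generated with a bias toward favorable configurations, and that bias is removed in the acceptance step using Rosenbluth weights. A sketch of the standard acceptance rule (the generic CBMC form, not this paper's self-adaptive biasing function):

```python
import random

def cbmc_accept(W_new, W_old):
    """Standard configurational-bias MC acceptance: the generation bias
    is removed by accepting the trial configuration with probability
    min(1, W_new / W_old), where W_new and W_old are the Rosenbluth
    weights of the trial and current configurations."""
    return random.random() < min(1.0, W_new / W_old)
```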

  8. Notification: FY 2017 Update of Proposed Key Management Challenges and Internal Control Weaknesses Confronting the U.S. Chemical Safety and Hazard Investigation Board

    EPA Pesticide Factsheets

    Jan 5, 2017. The EPA OIG is beginning work to update for fiscal year 2017 its list of proposed key management challenges and internal control weaknesses confronting the U.S. Chemical Safety and Hazard Investigation Board (CSB).

  9. A Taylor weak-statement algorithm for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.
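For a scalar conservation law, the classical Galerkin weak statement that underlies the Taylor family described above has the generic form (a sketch of the standard construction, not the paper's exact parameterization):

```latex
% Conservation law: \partial_t q + \partial_x f(q) = 0 on \Omega.
% Galerkin weak statement: for all test functions w in the trial space,
\int_\Omega w \left( \frac{\partial q}{\partial t}
  + \frac{\partial f(q)}{\partial x} \right) \mathrm{d}x = 0 .
% The Taylor weak statement augments q_t with Taylor-series terms in \Delta t,
% introducing the free parameters that the paper constrains via suitable norms.
```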

  10. Middle ear osteoma causing progressive facial nerve weakness: a case report.

    PubMed

    Curtis, Kate; Bance, Manohar; Carter, Michael; Hong, Paul

    2014-09-18

    Facial nerve weakness is most commonly due to Bell's palsy or cerebrovascular accidents. Rarely, a middle ear tumor presents with facial nerve dysfunction. We report a very unusual case of middle ear osteoma in a 49-year-old Caucasian woman causing progressive facial nerve deficit. A subtle middle ear lesion was observed on otoscopy, and computed tomographic images demonstrated an osseous middle ear tumor. Complete surgical excision resulted in the partial recovery of facial nerve function. Facial nerve dysfunction is rarely caused by middle ear tumors. The weakness is typically due to a compressive effect on the middle ear portion of the facial nerve. Early recognition is crucial since removal of these lesions may lead to the recuperation of facial nerve function.

  11. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines, nor do they provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure the performance actually attained by Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).
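Among the platform-independent measures proposed by CPMIP are, to my understanding, simulated years per day (SYPD, throughput) and core-hours per simulated year (CHSY, cost); a minimal sketch of how such metrics are computed from quantities any modeling center can record:

```python
def sypd(simulated_years, wallclock_days):
    """Simulated years per day (SYPD): throughput of a model run."""
    return simulated_years / wallclock_days

def chsy(core_count, wallclock_hours, simulated_years):
    """Core-hours per simulated year (CHSY): computational cost of a run."""
    return core_count * wallclock_hours / simulated_years
```

Neither metric requires hardware counters or profiling software, which is the point of the proposal.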

  12. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    NASA Astrophysics Data System (ADS)

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
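Weakly coupled oscillator networks of the kind discussed above are often modeled in Kuramoto form, where each phase oscillator is nudged by a weak global coupling term. A sketch of one integration step (the generic Kuramoto model, not necessarily the architecture proposed in the paper):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step for N weakly coupled phase oscillators:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    # pairwise phase differences theta_j - theta_i, summed over j
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)
```

With small K the oscillators retain their individual frequencies while their relative phases encode the collective state used for pattern recognition.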

  13. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network.

    PubMed

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-03-21

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices' non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.

  14. A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network

    PubMed Central

    Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien

    2017-01-01

    With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing. PMID:28322262

  15. Identification of interfaces involved in weak interactions with application to F-actin-aldolase rafts.

    PubMed

    Hu, Guiqing; Taylor, Dianne W; Liu, Jun; Taylor, Kenneth A

    2018-03-01

    Macromolecular interactions occur with widely varying affinities. Strong interactions form well defined interfaces but weak interactions are more dynamic and variable. Weak interactions can collectively lead to large structures such as microvilli via cooperativity and are often the precursors of much stronger interactions, e.g. the initial actin-myosin interaction during muscle contraction. Electron tomography combined with subvolume alignment and classification is an ideal method for the study of weak interactions because a 3-D image is obtained for the individual interactions, which subsequently are characterized collectively. Here we describe a method to characterize heterogeneous F-actin-aldolase interactions in 2-D rafts using electron tomography. By forming separate averages of the two constituents and fitting an atomic structure to each average, together with the alignment information which relates the raw motif to the average, an atomic model of each crosslink is determined and a frequency map of contact residues is computed. The approach should be applicable to any large structure composed of constituents that interact weakly and heterogeneously. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Conformational stability as a design target to control protein aggregation.

    PubMed

    Costanzo, Joseph A; O'Brien, Christopher J; Tiller, Kathryn; Tamargo, Erin; Robinson, Anne Skaja; Roberts, Christopher J; Fernandez, Erik J

    2014-05-01

    Non-native protein aggregation is a prevalent problem occurring in many biotechnological manufacturing processes and can compromise the biological activity of the target molecule or induce an undesired immune response. Additionally, some non-native aggregation mechanisms lead to amyloid fibril formation, which can be associated with debilitating diseases. For natively folded proteins, partial or complete unfolding is often required to populate aggregation-prone conformational states, and therefore one proposed strategy to mitigate aggregation is to increase the free energy for unfolding (ΔGunf) prior to aggregation. A computational design approach was tested using human γD crystallin (γD-crys) as a model multi-domain protein. Two mutational strategies were tested for their ability to reduce/increase aggregation rates by increasing/decreasing ΔGunf: stabilizing the less stable domain and stabilizing the domain-domain interface. The computational protein design algorithm, RosettaDesign, was implemented to identify point variants. The results showed that although the predicted free energies were only weakly correlated with the experimental ΔGunf values, increased/decreased aggregation rates for γD-crys correlated reasonably well with decreases/increases in experimental ΔGunf, illustrating improved conformational stability as a possible design target to mitigate aggregation. However, the results also illustrate that conformational stability is not the sole design factor controlling aggregation rates of natively folded proteins.
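The design rationale above rests on a two-state picture: increasing the unfolding free energy ΔGunf reduces the equilibrium population of the aggregation-prone unfolded state. A sketch of that relationship (the standard two-state model, not the paper's analysis):

```python
import math

def fraction_unfolded(dG_unf_kcal, T=298.15):
    """Two-state model: equilibrium fraction of protein in the unfolded
    state for a given unfolding free energy (kcal/mol). Raising dG_unf
    lowers the population of the aggregation-prone unfolded state."""
    R = 1.987e-3  # gas constant, kcal/(mol*K)
    K_unf = math.exp(-dG_unf_kcal / (R * T))  # equilibrium constant [U]/[N]
    return K_unf / (1.0 + K_unf)
```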

  17. Electronic and Optical Properties of Borophene, a Two-dimensional Transparent Metal.

    NASA Astrophysics Data System (ADS)

    Adamska, Lyudmyla; Sadasivam, Sridhar; Darancet, Pierre; Sharifzadeh, Sahar

    Borophene is a recently synthesized metallic sheet that displays many similarities to graphene and has been predicted to be complementary to graphene as a high-density-of-states, optically transparent 2D conductor. The atomic arrangement of boron in the monolayer strongly depends on the growth substrate and significantly alters the optoelectronic properties. Here, we report a first-principles density functional theory and many-body perturbation theory study aimed at understanding the optoelectronic properties of two likely allotropes of monolayer boron that are consistent with experimental scanning tunneling microscopy images. We predict that although both allotropes are metallic, they have substantially different band structures and optical properties, with one structure being transparent up to 3 eV and the second weakly absorbing in the UV/Vis region. We demonstrate that this strong structure dependence of the optoelectronic properties persists under applied strain. Lastly, we discuss the strength of electron-phonon and electron-hole interactions within these materials. Overall, we determine that precise control of the growth conditions is necessary for controlled optical properties. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357, and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.

  18. Negotiation from weakness: Concept, model, and application to strategic negotiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tangredi, S.J.

    Analysis of the dynamics of asymmetrical negotiations requires the development of the novel concept of negotiation from weakness. A nation is assumed to be negotiating from weakness when the elements of national power place it at a relative disadvantage in achieving the desired objectives of a particular set of negotiations. Successful negotiation from weakness is the adoption and application of negotiating strategies and tactics (subjective elements) that nullify the possible effects of an asymmetry in objective power potential. Once developed, the model is applied to arms control negotiations between the United States and the Soviet Union in 1962-1972, a period in which the United States was assumed to be strategically superior. The outcomes of the arms control negotiations examined suggest that the Soviet Union attempted to utilize strategies and tactics appropriate to the negotiating-from-weakness situation. The success of the Soviet Union in reversing the perceived strategic balance by 1972 implies that the concept of successful negotiating from weakness is a viable approach to the examination of asymmetrical negotiations involving security issues.

  19. Constraining particle size-dependent plume sedimentation from the 17 June 1996 eruption of Ruapehu Volcano, New Zealand, using geophysical inversions

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; Frazer, L. N.; Wolfe, C. J.; Houghton, B. F.; Rosenberg, M. D.

    2014-03-01

    Weak subplinian-plinian plumes pose frequent hazards to populations and aviation, yet many key parameters of these particle-laden plumes are, to date, poorly constrained. This study recovers the particle size-dependent mass distribution along the trajectory of a well-constrained weak plume by inverting the dispersion process of tephra fallout. We use the example of the 17 June 1996 Ruapehu eruption in New Zealand and base our computations on mass per unit area tephra measurements and grain size distributions at 118 sample locations. Comparisons of particle fall times and times of sample collection, as well as observations during the eruption, reveal that particles smaller than 250 μm likely settled as aggregates. For simplicity we assume that all of these fine particles fell as aggregates of constant size and density, whereas we assume that larger particles fell individually at their terminal velocity. Mass fallout along the plume trajectory follows distinct trends between the larger particles (d≥250 μm) and the fine population (d<250 μm) that are likely due to the two different settling behaviors (aggregate settling versus single-particle settling). In addition, we computed the resulting particle size distribution within the weak plume along its axis and find that the particle mode shifts from an initial 1φ mode to a 2.5φ mode 10 km from the vent and is dominated by a 2.5 to 3φ mode 10-180 km from the vent, where the plume reaches the coastline and we have no further field constraints. The computed particle distributions inside the plume provide new constraints on the mass transport processes within weak plumes and improve previous models. The distinct decay trends between single-particle settling and aggregate settling may serve as a new tool to identify particle sizes that fell as aggregates for other eruptions.
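The single-particle settling assumed above for the coarser population is commonly approximated, for sufficiently small particles, by Stokes' law. A sketch with assumed air properties (illustrative values, not the paper's settling model, which may use a drag law valid at higher Reynolds number):

```python
def stokes_terminal_velocity(d, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81):
    """Stokes terminal fall velocity (m/s) for a small sphere of diameter
    d (m) and density rho_p (kg/m^3) in a fluid of density rho_f and
    dynamic viscosity mu; valid only at low particle Reynolds number:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)
```

The quadratic dependence on diameter is what makes fine ash settle so slowly as individual particles, and hence why aggregation dominates its fallout.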

  20. Baseline scheme for polarization preservation and control in the MEIC ion complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derbenev, Yaroslav S.; Lin, Fanglei; Morozov, Vasiliy

    2015-09-01

    The scheme for preservation and control of the ion polarization in the Medium-energy Electron-Ion Collider (MEIC) has been under active development in recent years. The figure-8 configuration of the ion rings provides a unique capability to control the polarization of any ion species including deuterons by means of "weak" solenoids rotating the particle spins by small angles. Insertion of "weak" solenoids into the magnetic lattices of the booster and collider rings solves the problem of polarization preservation during acceleration of the ion beam. Universal 3D spin rotators designed on the basis of "weak" solenoids allow one to obtain any polarization orientation at an interaction point of MEIC. This paper presents the baseline scheme for polarization preservation and control in the MEIC ion complex.

  1. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  2. Collisions of ideal gas molecules with a rough/fractal surface. A computational study.

    PubMed

    Panczyk, Tomasz

    2007-02-01

    The frequency of collisions of ideal gas molecules (argon) with a rough surface has been studied. The rough/fractal surface was created using a random deposition technique. By applying various depositions, the roughness of the surface was controlled and, as a measure of the irregularity, the fractal dimensions of the surfaces were determined. The surfaces were next immersed in argon (under pressures from 2 × 10³ to 2 × 10⁵ Pa) and the numbers of collisions with these surfaces were counted. The calculations were carried out using a simplified molecular dynamics simulation technique (only hard-core repulsions were assumed). As a result, it was found that the frequency of collisions is a linear function of pressure for all fractal dimensions studied (D = 2, ..., 2.5). The frequency per unit pressure is a rather complex function of the fractal dimension; however, the changes of that frequency with the fractal dimension are not strong. It was found that the frequency of collisions is controlled by the number of weakly folded sites on the surfaces, and there is some mapping between the shape of the adsorption energy distribution functions and this number of weakly folded sites. The results for the rough/fractal surfaces were compared with the prediction given by the Langmuir-Hertz equation (valid for a smooth surface); generally the departure from the Langmuir-Hertz equation is not higher than 48% for the studied systems (i.e., for the surfaces created using the random deposition technique).
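
    The smooth-surface baseline used for comparison here, the Langmuir-Hertz (Hertz-Knudsen) collision flux, is simple to evaluate; the sketch below uses argon over the quoted pressure range and a room temperature chosen for illustration:

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
AMU = 1.66053906660e-27    # atomic mass unit, kg

def hertz_knudsen_flux(p, T, molar_mass_amu):
    """Collision frequency per unit area (m^-2 s^-1) on a smooth wall:
    Z = p / sqrt(2 * pi * m * kB * T), the Langmuir-Hertz result the
    abstract uses as its smooth-surface baseline."""
    m = molar_mass_amu * AMU
    return p / math.sqrt(2.0 * math.pi * m * KB * T)

# Argon (39.948 amu) at the two ends of the studied pressure range;
# the flux is exactly linear in p, as the simulations also found.
z_lo = hertz_knudsen_flux(2e3, 298.15, 39.948)
z_hi = hertz_knudsen_flux(2e5, 298.15, 39.948)
```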

  3. Controllable nonlinearity in a dual-coupling optomechanical system under a weak-coupling regime

    NASA Astrophysics Data System (ADS)

    Zhu, Gui-Lei; Lü, Xin-You; Wan, Liang-Liang; Yin, Tai-Shuang; Bin, Qian; Wu, Ying

    2018-03-01

    Strong quantum nonlinearity gives rise to many interesting quantum effects and has wide applications in quantum physics. Here we investigate the quantum nonlinear effect of an optomechanical system (OMS) consisting of both linear and quadratic coupling. Interestingly, a controllable optomechanical nonlinearity is obtained by applying a driving laser into the cavity. This controllable optomechanical nonlinearity can be enhanced into a strong coupling regime, even if the system is initially in the weak-coupling regime. Moreover, the system dissipation can be suppressed effectively, which allows the appearance of phonon sideband and photon blockade effects in the weak-coupling regime. This work may inspire the exploration of a dual-coupling optomechanical system as well as its applications in modern quantum science.

  4. Guidelines and Options for Computer Access from a Reclined Position.

    PubMed

    Grott, Ray

    2015-01-01

    Many people can benefit from working in a reclined position when accessing a computer. This can be due to disabilities involving musculoskeletal weakness, or the need to offload pressure on the spine or elevate the legs. Although there are "reclining workstations" on the market that work for some people, potentially better solutions tailored to individual needs can be configured at modest cost by following some basic principles.

  5. Methods of Optimal Control of Laser-Plasma Instabilities Using Spike Trains of Uneven Duration and Delay (STUD Pulses)

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros

    2013-10-01

    We have recently introduced and extensively studied a new adaptive method of LPI control. It promises to extend the effectiveness of lasers as inertial fusion drivers by allowing active control of stimulated Raman and Brillouin scattering and crossed beam energy transfer. It breaks multi-nanosecond pulses into a series of picosecond (ps) time scale spikes with comparable gaps in between. The height and width of each spike as well as their separations are optimization parameters. In addition, the spatial speckle patterns are changed after a number of successive spikes as needed (from every spike to never). The combination of these parameters allows the taming of parametric instabilities to conform to any desired reduced reflectivity profile, within the bounds of the performance limitations of the lasers. Instead of pulse shaping on hydrodynamical time scales, far faster (from 1 ps to 10 ps) modulations of the laser profile will be needed to implement the STUD pulse program for full LPI control. We will show theoretical and computational evidence for the effectiveness of the STUD pulse program to control LPI. The physics of why STUD pulses work and how optimization can be implemented efficiently using statistical nonlinear optical models and techniques will be explained. We will also discuss a novel diagnostic system employing STUD pulses that will allow the boosted measurement of velocity distribution function slopes on a ps time scale in the small crossing volume of a pump and a probe beam. Various regimes from weak to strong coupling and weak to strong damping will be treated. Novel pulse modulation schemes and diagnostic tools based on time-lenses used in both microscope and telescope modes will be suggested for the execution of the STUD pulse program. Work Supported by the DOE NNSA-OFES Joint Program on HEDLP and DOE OFES SBIR Phase I Grants.

  6. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    PubMed

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to N_e = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
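
    The exact computation described here reduces, for the expectation, to a single linear solve over the transient states. A toy dense version for a neutral haploid Wright-Fisher chain (illustrative only; the paper handles selection, mutation, and far larger N with sparse solvers) might look like:

```python
import math
import numpy as np

def wf_expected_absorption_time(N):
    """Exact expected time to fixation or loss for a neutral haploid
    Wright-Fisher chain with N individuals: solve (I - Q) t = 1,
    where Q is the transition matrix restricted to the transient
    allele counts 1..N-1. Dense algebra is fine at this toy size;
    sparse solvers are what make N ~ 1e5 tractable."""
    states = range(1, N)
    Q = np.array([[math.comb(N, j) * (i / N) ** j * (1 - i / N) ** (N - j)
                   for j in states] for i in states])
    return np.linalg.solve(np.eye(N - 1) - Q, np.ones(N - 1))

t = wf_expected_absorption_time(20)
# For comparison, diffusion theory gives ~ -2N[p ln p + (1-p) ln(1-p)]
# generations; at p = 1/2 that is 2N ln 2 ~ 27.7 for N = 20.
```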

  7. A Comprehensive Review of Existing Risk Assessment Models in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Amini, Ahmad; Jamil, Norziana

    2018-05-01

    Cloud computing is a popular paradigm in information technology and computing as it offers numerous advantages in terms of economical saving and minimal management effort. Although elasticity and flexibility bring tremendous benefits, they still raise many information security issues due to the unique characteristics that allow ubiquitous computing. Therefore, the vulnerabilities and threats in cloud computing have to be identified, and a proper risk assessment mechanism has to be in place for better cloud computing management. Various quantitative and qualitative risk assessment models have been proposed but, to our knowledge, none of them is suitable for the cloud computing environment. In this paper, we compare and analyse the strengths and weaknesses of existing risk assessment models. We then propose a new risk assessment model that sufficiently addresses all the characteristics of cloud computing, which the existing models do not cover.

  8. Superconducting Microelectronics.

    ERIC Educational Resources Information Center

    Henry, Richard W.

    1984-01-01

    Discusses superconducting microelectronics based on the Josephson effect and its advantages over conventional integrated circuits in speed and sensitivity. Considers present uses in standards laboratories (voltage) and in measuring weak magnetic fields. Also considers future applications in superfast computer circuitry using Superconducting…

  9. IOS and ECS line coupling calculation for the CO-He system - Influence on the vibration-rotation band shapes

    NASA Technical Reports Server (NTRS)

    Boissoles, J.; Boulet, C.; Robert, D.; Green, S.

    1987-01-01

    Line coupling coefficients resulting from rotational excitation of CO perturbed by He are computed within the infinite order sudden approximation (IOSA) and within the energy corrected sudden approximation (ECSA). The influence of this line coupling on the 1-0 CO-He vibration-rotation band shape is then computed for the case of weakly overlapping lines in the 292-78 K temperature range. The IOS and ECS results differ only at 78 K, and then only slightly, at high frequencies. Comparison with an additive superposition of Lorentzian lines shows strong modifications in the troughs between the lines. These calculated modifications are in excellent quantitative agreement with recent experimental data for all the temperatures considered. The applicability of previous approaches to the CO-He system, based on either the strong collision model or an exponential energy gap law, is also discussed.

  10. Sobolev metrics on diffeomorphism groups and the derived geometry of spaces of submanifolds

    NASA Astrophysics Data System (ADS)

    Micheli, Mario; Michor, Peter W.; Mumford, David

    2013-06-01

    Given a finite-dimensional manifold N, the group \operatorname{Diff}_S(N) of diffeomorphisms of N which decrease suitably rapidly to the identity acts on the manifold B(M,N) of submanifolds of N of diffeomorphism-type M, where M is a compact manifold with \dim M < \dim N. Given the right-invariant weak Riemannian metric on \operatorname{Diff}_S(N) induced by a quite general operator L\colon \mathfrak{X}_S(N) \to \Gamma(T^*N \otimes \operatorname{vol}(N)), we consider the induced weak Riemannian metric on B(M,N) and compute its geodesics and sectional curvature. To do this, we derive a covariant formula for the curvature in finite and infinite dimensions, we show how it makes O'Neill's formula very transparent, and we finally use it to compute the sectional curvature on B(M,N).
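
    For readers unfamiliar with O'Neill's formula mentioned here: its standard finite-dimensional form for a Riemannian submersion π : E → B with orthonormal horizontal fields (a textbook statement, not the paper's infinite-dimensional derivation) is

```latex
\operatorname{sec}_B(X,Y) \;=\; \operatorname{sec}_E(\tilde X,\tilde Y)
\;+\; \frac{3}{4}\,\bigl\lVert [\tilde X,\tilde Y]^{\mathrm{v}} \bigr\rVert^2 ,
```

    where \tilde X, \tilde Y are the horizontal lifts of X, Y and [\cdot,\cdot]^{\mathrm{v}} denotes the vertical part of the bracket; in particular, sectional curvature can only increase when descending to the base.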

  11. On the competition between weak O-H···F and C-H···F hydrogen bonds, in cooperation with C-H···O contacts, in the difluoromethane – tert-butyl alcohol cluster

    PubMed Central

    Spada, Lorenzo; Tasinato, Nicola; Bosi, Giulio; Vazart, Fanny; Barone, Vincenzo; Puzzarini, Cristina

    2017-01-01

    The 1:1 complex of tert-butyl alcohol with difluoromethane has been characterized by means of a joint experimental-computational investigation. Its rotational spectrum has been recorded by using a pulsed-jet Fourier-transform microwave spectrometer. The experimental work has been guided and supported by accurate quantum-chemical calculations. In particular, the computed potential energy landscape pointed out the formation of three stable isomers. However, the very low interconversion barriers explain why only one isomer, showing one O-H···F and two C-H···O weak hydrogen bonds, has been experimentally characterized. The effect of the H → tert-butyl group substitution has been analyzed by comparison with the difluoromethane-water adduct. PMID:28919646

  12. Cubature on Wiener Space: Pathwise Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayer, Christian, E-mail: christian.bayer@wias-berlin.de; Friz, Peter K., E-mail: friz@math.tu-berlin.de

    2013-04-15

    Cubature on Wiener space (Lyons and Victoir in Proc. R. Soc. Lond. A 460(2041):169-198, 2004) provides a powerful alternative to Monte Carlo simulation for the integration of certain functionals on Wiener space. More specifically, and in the language of mathematical finance, cubature allows for fast computation of European option prices in generic diffusion models. We give a random walk interpretation of cubature and similar (e.g. the Ninomiya-Victoir) weak approximation schemes. By using rough path analysis, we are able to establish weak convergence for general path-dependent option prices.
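
    The "random walk interpretation" can be illustrated with the simplest cubature-style scheme: replace each Brownian increment over a step h by a two-point ±√h variable (which matches Gaussian moments up to degree 3) and sum over all paths exactly via binomial weights. This is a sketch of the idea, not the paper's construction:

```python
import math

def random_walk_expectation(f, n, T=1.0):
    """Weak (random-walk) approximation of E[f(W_T)]: each Brownian
    increment over h = T/n is replaced by +/- sqrt(h) with equal
    probability, and the expectation over the 2^n paths is computed
    exactly through the binomial distribution of the endpoint."""
    h = T / n
    total = 0.0
    for k in range(n + 1):
        endpoint = (2 * k - n) * math.sqrt(h)
        weight = math.comb(n, k) / 2.0 ** n
        total += weight * f(endpoint)
    return total

# The second moment of W_1 is matched exactly; the fourth moment
# converges to the Gaussian value 3 at weak order 1/n (it is 3 - 2/n).
m2 = random_walk_expectation(lambda x: x * x, 100)
m4 = random_walk_expectation(lambda x: x ** 4, 100)
```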

  13. The Influence of Forward and Backward Associative Strength on False Memories for Encoding Context

    PubMed Central

    Arndt, Jason

    2016-01-01

    Two experiments examined the effects of Forward Associative Strength (FAS) and Backward Associative Strength (BAS) on false recollection of unstudied lure items. Themes were constructed such that four associates were strongly related to a lure item in terms of FAS or BAS and four associates were weakly related to a lure item in terms of FAS or BAS. Further, when FAS was manipulated, BAS was controlled across strong and weak associates, while FAS was controlled across strong and weak associates when BAS was manipulated. Strong associates were presented in one font while weak associates were presented in a second font. At test, lure items were disproportionately attributed to the source used to present lures’ strong associates compared to lures’ weak associates, both when BAS was manipulated and when FAS was manipulated. This outcome demonstrates that both BAS and FAS influence lure item false recollection, which favors global-matching models’ explanation of false recollection over the explanation offered by spreading-activation theories. PMID:25312499

  14. Formation flying design and applications in weak stability boundary regions.

    PubMed

    Folta, David

    2004-05-01

    Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be developed further to incorporate a better understanding of the weak stability boundary solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numeric methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.

  15. Computer simulation of flagellar movement. VI. Simple curvature-controlled models are incompletely specified.

    PubMed

    Brokaw, C J

    1985-10-01

    Computer simulation is used to examine a simple flagellar model that will initiate and propagate bending waves in the absence of viscous resistances. The model contains only an elastic bending resistance and an active sliding mechanism that generates reduced active shear moment with increasing sliding velocity. Oscillation results from a distributed control mechanism that reverses the direction of operation of the active sliding mechanism when the curvature reaches critical magnitudes in either direction. Bend propagation by curvature-controlled flagellar models therefore does not require interaction with the viscous resistance of an external fluid. An analytical examination of moment balance during bend propagation by this model yields a solution curve giving values of frequency and wavelength that satisfy the moment balance equation and give uniform bend propagation, suggesting that the model is underdetermined. At 0 viscosity, the boundary condition of 0 shear rate at the basal end of the flagellum during the development of new bends selects the particular solution that is obtained by computer simulations. Therefore, the details of the pattern of bend initiation at the basal end of a flagellum can be of major significance in determining the properties of propagated bending waves in the distal portion of a flagellum. At high values of external viscosity, the model oscillates at frequencies and wavelengths that give approximately integral numbers of waves on the flagellum. These operating points are selected because they facilitate the balance of bending moments at the ends of the model, where the external viscous moment approaches 0. These mode preferences can be overridden by forcing the model to operate at a predetermined frequency. The strong mode preferences shown by curvature-controlled flagellar models, in contrast to the weak or absent mode preferences shown by real flagella, therefore do not demonstrate the inapplicability of the moment-balance approach to real flagella. Instead, they indicate a need to specify additional properties of real flagella that are responsible for selecting particular operating points.

  16. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  17. Melnikov method approach to control of homoclinic/heteroclinic chaos by weak harmonic excitations.

    PubMed

    Chacón, Ricardo

    2006-09-15

    A review on the application of Melnikov's method to control homoclinic and heteroclinic chaos in low-dimensional, non-autonomous and dissipative oscillator systems by weak harmonic excitations is presented, including diverse applications, such as chaotic escape from a potential well, chaotic solitons in Frenkel-Kontorova chains and chaotic-charged particles in the field of an electrostatic wave packet.

  18. Adaptive Response and Tolerance to Weak Acids in Saccharomyces cerevisiae: A Genome-Wide View

    PubMed Central

    Mira, Nuno P.; Teixeira, Miguel Cacho

    2010-01-01

    Abstract Weak acids are widely used as food preservatives (e.g., acetic, propionic, benzoic, and sorbic acids), herbicides (e.g., 2,4-dichlorophenoxyacetic acid), and as antimalarial (e.g., artesunic and artemisinic acids), anticancer (e.g., artesunic acid), and immunosuppressive (e.g., mycophenolic acid) drugs, among other possible applications. The understanding of the mechanisms underlying the adaptive response and resistance to these weak acids is a prerequisite to develop more effective strategies to control spoilage yeasts, and the emergence of resistant weeds, drug resistant parasites or cancer cells. Furthermore, the identification of toxicity mechanisms and resistance determinants to weak acid-based pharmaceuticals increases current knowledge on their cytotoxic effects and may lead to the identification of new drug targets. This review integrates current knowledge on the mechanisms of toxicity and tolerance to weak acid stress obtained in the model eukaryote Saccharomyces cerevisiae using genome-wide approaches and more detailed gene-by-gene analysis. The major features of the yeast response to weak acids in general, and the more specific responses and resistance mechanisms towards a specific weak acid or a group of weak acids, depending on the chemical nature of the side chain R group (R-COOH), are highlighted. The involvement of several transcriptional regulatory networks in the genomic response to different weak acids is discussed, focusing on the regulatory pathways controlled by the transcription factors Msn2p/Msn4p, War1p, Haa1p, Rim101p, and Pdr1p/Pdr3p, which are known to orchestrate weak acid stress response in yeast. The extrapolation of the knowledge gathered in yeast to other eukaryotes is also attempted. PMID:20955006

  19. Information Security: Serious Weakness Put State Department and FAA Operations at Risk

    DOT National Transportation Integrated Search

    1998-05-19

    Testimony focuses on the results of recent reviews of computer security at the Department of State and the Federal Aviation Administration (FAA). Makes specific recommendations for improving State and FAA's information security posture. Highlights be...

  20. Childhood Forearm Breaks Resulting from Mild Trauma May Indicate Bone Deficits

    MedlinePlus

    ... a powerful new technology called high-resolution peripheral quantitative computed tomography (HRpQCT), which, unlike DXA, can assess ... persist throughout life. The investigators concluded that additional research is needed to determine if childhood bone weakness ...

  1. Information Systems: The Status of Computer Security at the Department of Veterans Affairs

    DTIC Science & Technology

    1999-10-01

    security weaknesses identified. The results of our underlying reviews were shared with VA’s Office of Inspector General (OIG) for its use in auditing VA’s consolidated financial statements for fiscal year 1998.

  2. Report: Fiscal Year 2006 Federal Information Security Management Act Report Status of EPA’s Computer Security Program

    EPA Pesticide Factsheets

    Report #2006-S-00008, September 25, 2006. Although the Agency has made substantial progress to improve its security program, the OIG identified weaknesses in the Agency’s incident reporting practices.

  3. Comptonization and radiation spectra of X-ray sources. Calculation by the Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Pozdnyakov, L. A.; Sobol, I. M.; Sonyayev, R. A.

    1980-01-01

    The results of computations of the Comptonization of low-frequency radiation in weakly relativistic plasma are presented. The influence of photoabsorption by iron ions on a hard X-ray spectrum is considered.

  4. Self-compassion increases self-improvement motivation.

    PubMed

    Breines, Juliana G; Chen, Serena

    2012-09-01

    Can treating oneself with compassion after making a mistake increase self-improvement motivation? In four experiments, the authors examined the hypothesis that self-compassion motivates people to improve personal weaknesses, moral transgressions, and test performance. Participants in a self-compassion condition, compared to a self-esteem control condition and either no intervention or a positive distraction control condition, expressed greater incremental beliefs about a personal weakness (Experiment 1); reported greater motivation to make amends and avoid repeating a recent moral transgression (Experiment 2); spent more time studying for a difficult test following an initial failure (Experiment 3); exhibited a preference for upward social comparison after reflecting on a personal weakness (Experiment 4); and reported greater motivation to change the weakness (Experiment 4). These findings suggest that, somewhat paradoxically, taking an accepting approach to personal failure may make people more motivated to improve themselves.

  5. Crystallization of a salt of a weak organic acid and base: solubility relations, supersaturation control and polymorphic behavior.

    PubMed

    Jones, H P; Davey, R J; Cox, B G

    2005-03-24

    Control of crystallization processes for organic salts is of importance to the pharmaceutical industry as many active pharmaceutical materials are marketed as salts. In this study, a method for estimating the solubility product of a salt of a weak acid and weak base from measured pH-solubility data is described for the first time. This allows calculation of the supersaturation of solutions at known pH. Ethylenediammonium 3,5-dinitrobenzoate is a polymorphic organic salt. A detailed study of the effects of pH, supersaturation, and temperature of crystallization on the physical properties of this salt shows that the desired polymorph may be produced by appropriate selection of the pH and supersaturation of crystallization. Crystal morphology is also controlled by these crystallization conditions.

  6. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  7. Detecting and correcting the bias of unmeasured factors using perturbation analysis: a data-mining approach.

    PubMed

    Lee, Wen-Chung

    2014-02-05

    The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. Computer simulations show that, as the number of perturbation variables increases from data mining, the power of the perturbation test increased progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreased progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.
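
    A minimal simulation in the spirit of this proposal (an illustrative regression adjustment, not the author's exact perturbation test; all variable names and effect sizes below are invented for the demonstration) shows how many weak proxies of an unmeasured confounder can shrink its bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20000, 50              # subjects, proxy ("perturbation") variables

# Unmeasured confounder U drives both exposure X and outcome Y.
U = rng.normal(size=n)
X = U + rng.normal(size=n)
Y = 1.0 * X + 2.0 * U + rng.normal(size=n)   # true exposure effect = 1

# Perturbation variables: each only weakly tracks U (noise sd = 3).
P = U[:, None] + 3.0 * rng.normal(size=(n, m))

def ols_slope(design, y):
    """Least-squares fit; return the coefficient on X (column 1)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

ones = np.ones((n, 1))
naive = ols_slope(np.hstack([ones, X[:, None]]), Y)
adjusted = ols_slope(np.hstack([ones, X[:, None], P]), Y)
# naive is biased (around 2 here); adding the weak proxies pulls the
# estimate back toward the true value of 1.
```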

  8. Atomic detail brownian dynamics simulations of concentrated protein solutions with a mean field treatment of hydrodynamic interactions.

    PubMed

    Mereghetti, Paolo; Wade, Rebecca C

    2012-07-26

    High macromolecular concentrations are a distinguishing feature of living organisms. Understanding how the high concentration of solutes affects the dynamic properties of biological macromolecules is fundamental for the comprehension of biological processes in living systems. In this paper, we describe the implementation of mean field models of translational and rotational hydrodynamic interactions into an atomically detailed many-protein brownian dynamics simulation method. Concentrated solutions (30-40% volume fraction) of myoglobin, hemoglobin A, and sickle cell hemoglobin S were simulated, and static structure factors, oligomer formation, and translational and rotational self-diffusion coefficients were computed. Good agreement of computed properties with available experimental data was obtained. The results show the importance of both solvent mediated interactions and weak protein-protein interactions for accurately describing the dynamics and the association properties of concentrated protein solutions. Specifically, they show a qualitative difference in the translational and rotational dynamics of the systems studied. Although the translational diffusion coefficient is controlled by macromolecular shape and hydrodynamic interactions, the rotational diffusion coefficient is affected by macromolecular shape, direct intermolecular interactions, and both translational and rotational hydrodynamic interactions.
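
    As a hedged sketch of one quantity computed here: the simplest possible extraction of a translational self-diffusion coefficient from Brownian dynamics trajectories via the mean-squared displacement, with no hydrodynamic or direct interactions at all (the paper's mean-field treatment of hydrodynamics goes far beyond this free-diffusion toy):

```python
import numpy as np

rng = np.random.default_rng(1)
D_true = 1.0e-9          # m^2/s, a typical protein-scale value (assumed)
dt, n_steps, n_part = 1.0e-6, 1000, 500

# Free-draining Brownian dynamics: each coordinate of each particle
# is advanced by sqrt(2 D dt) * xi with xi ~ N(0, 1).
steps = np.sqrt(2.0 * D_true * dt) * rng.normal(size=(n_steps, n_part, 3))
traj = np.cumsum(steps, axis=0)

# Recover D from the mean-squared displacement: MSD(T) = 6 D T in 3D.
T = n_steps * dt
msd = np.mean(np.sum(traj[-1] ** 2, axis=1))
D_est = msd / (6.0 * T)
```

In a real crowded-solution simulation, interactions make the long-time slope of the MSD smaller than the input short-time coefficient, which is precisely the effect the paper quantifies.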

  9. Spiking computation and stochastic amplification in a neuron-like semiconductor microstructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samardak, A. S. (Laboratory of Thin Film Technologies, Far Eastern Federal University, Vladivostok 690950); Nogaret, A.

    2011-05-15

    We have demonstrated the proof of principle of a semiconductor neuron, which has dendrites, an axon, and a soma, and computes information encoded in electrical pulses in the same way as biological neurons. Electrical impulses applied to dendrites diffuse along microwires to the soma. The soma is the active part of the neuron, which regenerates input pulses above a voltage threshold and transmits them into the axon. Our neuron concept is a major step forward because its spatial structure controls the timing of the pulses that arrive at the soma. The dendrites and axon act as transmission delay lines, which modify the information coded in the timing of pulses. Finally, we have shown that noise enhances the detection sensitivity of the neuron by helping the transmission of weak periodic signals. A maximum enhancement of signal transmission was observed at an optimum noise level known as stochastic resonance. The experimental results are in excellent agreement with simulations of the FitzHugh-Nagumo model. Our neuron is therefore extremely well suited to providing feedback on the various mathematical approximations of neurons and building functional networks.

  10. Computer Simulation Elucidates Yeast Flocculation and Sedimentation for Efficient Industrial Fermentation.

    PubMed

    Liu, Chen-Guang; Li, Zhi-Yang; Hao, Yue; Xia, Juan; Bai, Feng-Wu; Mehmood, Muhammad Aamer

    2018-05-01

    Flocculation plays an important role in the immobilized fermentation of biofuels and biochemicals. It is essential to understand the flocculation phenomenon at the physical and molecular scale; however, flocs cannot be studied directly due to their fragile nature. Hence, the present study focuses on the morphological specificities of yeast floc formation and sedimentation via computer simulation, using a single-floc growth model based on the Diffusion-Limited Aggregation (DLA) model. The impact of shear force, adsorption, and cell propagation on porosity and floc size is systematically illustrated. Strong shear force and weak adsorption reduced floc size but had little impact on porosity. Besides, cell propagation increased the compactness of flocs, enabling them to gain a larger size. Later, a multiple-floc growth model is developed to explain sedimentation at various initial floc sizes. Both models exhibited qualitative agreement with available experimental data. By regulating the operation constraints during fermentation, the present study will lead to finding optimal conditions to control the floc size distribution for efficient fermentation and harvesting. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
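
    Since the study builds on the classic Diffusion-Limited Aggregation model, a minimal on-lattice DLA sketch may help make the floc-growth idea concrete. The lattice size, walker count, and launch/kill radii below are arbitrary illustrative choices, not the study's parameters, and shear, adsorption, and cell propagation are omitted:

```python
import numpy as np

def grow_dla_floc(n_particles=300, size=121, seed=0):
    """Grow one floc by diffusion-limited aggregation (DLA) on a 2D lattice.

    Walkers are launched on a circle just outside the current floc and
    perform 4-connected random walks until they touch an occupied site,
    where they stick; walkers that stray too far are relaunched.
    """
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                               # seed cell of the floc
    max_r = 1.0
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        ang = rng.uniform(0.0, 2.0 * np.pi)
        x = int(round(c + (max_r + 4) * np.cos(ang)))
        y = int(round(c + (max_r + 4) * np.sin(ang)))
        while True:
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
            d = np.hypot(x - c, y - c)
            if d > max_r + 8:                       # strayed too far: relaunch
                ang = rng.uniform(0.0, 2.0 * np.pi)
                x = int(round(c + (max_r + 4) * np.cos(ang)))
                y = int(round(c + (max_r + 4) * np.sin(ang)))
                continue
            if grid[x + 1, y] or grid[x - 1, y] or grid[x, y + 1] or grid[x, y - 1]:
                grid[x, y] = True                   # stick on first contact
                max_r = max(max_r, d)
                break
    return grid

grid = grow_dla_floc()
c = grid.shape[0] // 2
occ = np.argwhere(grid)
r_max = np.hypot(occ[:, 0] - c, occ[:, 1] - c).max()
# occupied fraction inside the bounding circle: DLA clusters are porous
density = grid.sum() / (np.pi * r_max ** 2)
```

    The resulting cluster is ramified: its occupied fraction inside the bounding radius is far below that of a compact disc of equal mass, which is the kind of porosity the abstract describes shear and adsorption as modulating.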

  11. Global analysis of protein folding using massively parallel design, synthesis and testing

    PubMed Central

    Rocklin, Gabriel J.; Chidyausiku, Tamuka M.; Goreshnik, Inna; Ford, Alex; Houliston, Scott; Lemak, Alexander; Carter, Lauren; Ravichandran, Rashmi; Mulligan, Vikram K.; Chevalier, Aaron; Arrowsmith, Cheryl H.; Baker, David

    2017-01-01

    Proteins fold into unique native structures stabilized by thousands of weak interactions that collectively overcome the entropic cost of folding. Though these forces are “encoded” in the thousands of known protein structures, “decoding” them is challenging due to the complexity of natural proteins that have evolved for function, not stability. Here we combine computational protein design, next-generation gene synthesis, and a high-throughput protease susceptibility assay to measure folding and stability for over 15,000 de novo designed miniproteins, 1,000 natural proteins, 10,000 point-mutants, and 30,000 negative control sequences, identifying over 2,500 new stable designed proteins in four basic folds. This scale—three orders of magnitude greater than that of previous studies of design or folding—enabled us to systematically examine how sequence determines folding and stability in uncharted protein space. Iteration between design and experiment increased the design success rate from 6% to 47%, produced stable proteins unlike those found in nature for topologies where design was initially unsuccessful, and revealed subtle contributions to stability as designs became increasingly optimized. Our approach achieves the long-standing goal of a tight feedback cycle between computation and experiment, and promises to transform computational protein design into a data-driven science. PMID:28706065

  12. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGES

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm for creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) it can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies; (2) it exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication; (3) for applications that need neighbors of neighbors, the algorithm can create an arbitrary number of ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  13. Behavioural and computational varieties of response inhibition in eye movements.

    PubMed

    Cutsuridis, Vassilis

    2017-04-19

    Response inhibition is the ability to override a planned or an already initiated response. It is the hallmark of executive control, as its deficits favour impulsive behaviours, which may be detrimental to an individual's life. This article reviews behavioural and computational guises of response inhibition, focusing only on inhibition of oculomotor responses. It first reviews behavioural paradigms of response inhibition in eye movement research, namely the countermanding and antisaccade paradigms, both proven to be useful tools for the study of response inhibition in cognitive neuroscience and psychopathology. It then briefly reviews the neural mechanisms of response inhibition in these two behavioural paradigms. Computational models that embody hypotheses and/or theories of the mechanisms underlying performance in both behavioural paradigms are discussed, along with a critical analysis of their strengths and weaknesses. All models assume a race between decision processes; which decision process wins the race depends on different mechanisms in each paradigm. Response latency is a stochastic process and has proven to be an important measure of the cognitive control processes involved in response stopping in healthy and patient groups. The inhibitory deficits in different brain diseases are then reviewed, including schizophrenia and obsessive-compulsive disorder. Finally, new directions are suggested to improve the performance of models of response inhibition by drawing inspiration from the successes of models in other domains. This article is part of the themed issue 'Movement suppression: brain mechanisms for stopping and stillness'. © 2017 The Author(s).
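
    The "race of decision processes" assumption shared by these models can be illustrated with the standard independent race model of the countermanding (stop-signal) task. The go-RT distribution, the constant stop latency, and the single stop-signal delay below are simplifying illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stop_trials = 100_000
go_rt = rng.normal(500.0, 100.0, n_stop_trials)   # go-process finishing times (ms)
ssrt_true = 250.0                                 # stop-process latency (ms), assumed constant
ssd = 200.0                                       # stop-signal delay (ms)

# Independent race: a response escapes inhibition when the go process
# finishes before the stop process, which starts at the stop-signal delay.
responded = go_rt < ssd + ssrt_true
p_respond = responded.mean()

# Integration method: SSRT is recovered as the go-RT quantile at
# P(respond | stop signal), minus the stop-signal delay.
ssrt_est = np.quantile(go_rt, p_respond) - ssd
```

    Here the latent stop latency (SSRT), which is never observed directly, is recovered from the observable response rate on stop trials and the go-RT distribution; this is the logic behind SSRT estimates reported for healthy and patient groups.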

  14. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters, such as signal amplitude, carrier phase, and Doppler frequency, is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and a conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.
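
    As a rough sketch of the loop's core idea — fitting amplitude, carrier phase, and Doppler to a short block of coherent integration outputs by iterating Levenberg-Marquardt steps — consider the following. The signal model, noise level, damping schedule, and all parameter values are simplified assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1e-3, 20                          # coherent integration time (s), number of outputs
k = np.arange(N)
a0, phi0, f0 = 1.0, 0.7, 10.0            # true amplitude, carrier phase (rad), Doppler (Hz)
y = a0 * np.exp(1j * (phi0 + 2 * np.pi * f0 * k * T))
y = y + rng.normal(0.0, 0.05, N) + 1j * rng.normal(0.0, 0.05, N)

def residual_and_jacobian(p):
    """Stacked (real, imag) residual r = y - model and its Jacobian."""
    a, th, fd = p
    e = np.exp(1j * (th + 2 * np.pi * fd * k * T))
    dm = np.column_stack([e, 1j * a * e, 2j * np.pi * k * T * a * e])
    J = -np.vstack([dm.real, dm.imag])
    r = np.concatenate([(y - a * e).real, (y - a * e).imag])
    return r, J

p = np.array([0.5, 0.0, 0.0])            # initial (amplitude, phase, Doppler) guess
lam = 1e-3                               # LM damping factor
r, J = residual_and_jacobian(p)
cost = r @ r
for _ in range(50):                      # Levenberg-Marquardt iterations
    step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
    r_new, J_new = residual_and_jacobian(p + step)
    if r_new @ r_new < cost:             # accept the step, soften damping
        p, r, J, cost, lam = p + step, r_new, J_new, r_new @ r_new, lam / 3.0
    else:                                # reject the step, harden damping
        lam *= 3.0
a_hat, phi_hat, f_hat = p
```

    The damped normal equations interpolate between gradient descent (large damping) and Gauss-Newton (small damping), which is what makes LM robust from a coarse initial guess; in a full receiver the resulting phase/frequency estimates would then be smoothed by the Kalman filter stage.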

  15. Watch what you say, your computer might be listening: A review of automated speech recognition

    NASA Technical Reports Server (NTRS)

    Degennaro, Stephen V.

    1991-01-01

    Spoken language is the most convenient and natural means by which people interact with each other and is, therefore, a promising candidate for human-machine interactions. Speech also offers an additional channel for hands-busy applications, complementing the use of motor output channels for control. Current speech recognition systems vary considerably across a number of important characteristics, including vocabulary size, speaking mode, training requirements for new speakers, robustness to acoustic environments, and accuracy. Algorithmically, these systems range from rule-based techniques through more probabilistic or self-learning approaches such as hidden Markov modeling and neural networks. This tutorial begins with a brief summary of the relevant features of current speech recognition systems and the strengths and weaknesses of the various algorithmic approaches.

  16. Quantum controlled-Z gate for weakly interacting qubits

    NASA Astrophysics Data System (ADS)

    Mičuda, Michal; Stárek, Robert; Straka, Ivo; Miková, Martina; Dušek, Miloslav; Ježek, Miroslav; Filip, Radim; Fiurášek, Jaromír

    2015-08-01

    We propose and experimentally demonstrate a scheme for the implementation of a maximally entangling quantum controlled-Z gate between two weakly interacting systems. We conditionally enhance the interqubit coupling by quantum interference. Both before and after the interqubit interaction, one of the qubits is coherently coupled to an auxiliary quantum system, and finally it is projected back onto qubit subspace. We experimentally verify the practical feasibility of this technique by using a linear optical setup with weak interferometric coupling between single-photon qubits. Our procedure is universally applicable to a wide range of physical platforms including hybrid systems such as atomic clouds or optomechanical oscillators coupled to light.

  17. The method of A-harmonic approximation and optimal interior partial regularity for nonlinear elliptic systems under the controllable growth condition

    NASA Astrophysics Data System (ADS)

    Chen, Shuhong; Tan, Zhong

    2007-11-01

    In this paper, we consider nonlinear elliptic systems under the controllable growth condition. We use a new method, introduced by Duzaar and Grotowski, for proving partial regularity of weak solutions, based on a generalization of the technique of harmonic approximation. We extend previous partial regularity results under the natural growth condition to the case of the controllable growth condition, and directly establish the optimal Hölder exponent for the derivative of a weak solution.

  18. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. NextGen Operational Improvements: Will they Improve Human Performance

    NASA Technical Reports Server (NTRS)

    Beard, Bettina L.; Johnston, James C.; Holbrook, Jon

    2013-01-01

    Modernization of the National Airspace System depends critically on the development of advanced technology, including cutting-edge automation, controller decision-support tools and integrated on-demand information. The Next Generation Air Transportation System national plan envisions air traffic control tower automation that addresses seven problems: 1) departure metering, 2) taxi routing, 3) taxi and runway scheduling, 4) departure runway assignments, 5) departure flow management, 6) integrated arrival and departure scheduling and 7) runway configuration management. Government, academia and industry are simultaneously pursuing the development of these tools. For each tool, the development process typically begins by assessing its potential benefits, and then progresses to designing preliminary versions of the tool, followed by testing the tool's strengths and weaknesses using computational modeling, human-in-the-loop simulation and/or field tests. We compiled the literature, evaluated the methodological rigor of the studies and served as referee for partisan conclusions that were sometimes overly optimistic. Here we provide the results of this review.

  20. Defence electronics industry profile, 1990-1991

    NASA Astrophysics Data System (ADS)

    The defense electronics industry profiled in this review comprises an estimated 150 Canadian companies that develop, manufacture, and repair radio and communications equipment, radars for surveillance and navigation, air traffic control systems, acoustic and infrared sensors, computers for navigation and fire control, signal processors and display units, special-purpose electronic components, and systems engineering and associated software. Canadian defense electronics companies generally serve market niches and end users of their products are limited to the military, government agencies, or commercial airlines. Geographically, the industry is concentrated in Ontario and Quebec, where about 91 percent of the industry's production and employment is found. In 1989, the estimated revenue of the industry was $2.36 billion, and exports totalled an estimated $1.4 billion. Strengths and weaknesses of the industry are discussed in terms of such factors as the relatively small size of Canadian companies, the ability of Canadian firms to access research and development opportunities and export markets in the United States, the dependence on foreign-made components, and international competition.

  1. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition

    PubMed Central

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A.

    2016-01-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. PMID:26209846
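
    The noise-induced broadening of a single neuron's dynamic range described here can be reproduced with a toy leaky integrate-and-fire model. The dimensionless units and all parameter values are illustrative assumptions; the study itself used dynamic clamp experiments and detailed simulations:

```python
import numpy as np

def lif_rate(i_mean, noise_sd, t_sim=5.0, dt=1e-4, tau=0.02, seed=0):
    """Mean firing rate of a leaky integrate-and-fire neuron (Euler-Maruyama).

    dV = (i_mean - V) * dt / tau + noise_sd * sqrt(dt) * xi; the neuron
    spikes and resets to 0 when V crosses the threshold at 1 (dimensionless).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(t_sim / dt)
    xi = rng.normal(0.0, 1.0, n_steps)
    v, spikes = 0.0, 0
    for t in range(n_steps):
        v += (i_mean - v) * dt / tau + noise_sd * np.sqrt(dt) * xi[t]
        if v >= 1.0:
            spikes += 1
            v = 0.0
    return spikes / t_sim

# Subthreshold drive (steady-state 0.9 < threshold 1.0): silent without
# noise, but background noise lets the neuron fire at a graded rate.
weak_quiet = lif_rate(0.9, noise_sd=0.0)
weak_noisy = lif_rate(0.9, noise_sd=1.0)
# Suprathreshold drive produces robust firing in both conditions.
strong_quiet = lif_rate(1.5, noise_sd=0.0)
strong_noisy = lif_rate(1.5, noise_sd=1.0)
```

    Without noise the neuron is all-or-none around threshold; with background "synaptic" noise the subthreshold response becomes graded while the strong-input response is largely preserved, which is the smoothing of the input-output curve that the abstract attributes to background synaptic input.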

  2. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    PubMed

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to replication of 18 experimental effects, the largest data set explained to date by 1 Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are yielded by the extant Stroop literature. (c) 2016 APA, all rights reserved.

  3. Lightweight robotic mobility: template-based modeling for dynamics and controls using ADAMS/car and MATLAB

    NASA Astrophysics Data System (ADS)

    Adamczyk, Peter G.; Gorsich, David J.; Hudas, Greg R.; Overholt, James

    2003-09-01

    The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.

  4. Euclidean mirrors: enhanced vacuum decay from reflected instantons

    NASA Astrophysics Data System (ADS)

    Akal, Ibrahim; Moortgat-Pick, Gudrid

    2018-05-01

    We study the tunnelling of virtual matter–antimatter pairs from the quantum vacuum in the presence of a spatially uniform, time-dependent electric background composed of a strong, slow field superimposed with a weak, rapid field. After analytic continuation to Euclidean spacetime, we obtain from the instanton equations two critical points. While one of them is the closing point of the instanton path, the other serves as a Euclidean mirror which reflects and squeezes the instanton. It is this reflection and shrinking which is responsible for an enormous enhancement of the vacuum pair production rate. We discuss how important features of two different mechanisms can be analysed and understood via such a rotation in the complex plane. (a) Consistent with previous studies, we first discuss the standard assisted mechanism with a static strong field and certain weak fields with a distinct pole structure in order to show that the reflection takes place exactly at the poles. We also discuss the effect of possible sub-cycle structures. We extend this reflection picture then to weak fields which have no poles present and illustrate the effective reflections with explicit examples. An additional field strength dependence for the rate occurs in such cases. We analytically compute the characteristic threshold for the assisted mechanism given by the critical combined Keldysh parameter. We discuss significant differences between these two types of fields. For various backgrounds, we present the contributing instantons and perform analytical computations for the corresponding rates treating both fields nonperturbatively. (b) In addition, we also study the case with a nonstatic strong field which gives rise to the assisted dynamical mechanism. For different strong field profiles we investigate the impact on the critical combined Keldysh parameter. As an explicit example, we analytically compute the rate by employing the exact reflection points. The validity of the predictions for both mechanisms is confirmed by numerical computations.

  5. Nonperturbative stochastic method for driven spin-boson model

    NASA Astrophysics Data System (ADS)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.

  6. Department of Defense Office of the Inspector General FY 2013 Audit Plan

    DTIC Science & Technology

    2012-11-01

    oversight procedures to review KPMG LLP's work; and, if applicable, disclose instances where KPMG LLP does not comply, in all material respects, with U.S. ... decisions. Pervasive material internal control weaknesses impact the accuracy, reliability, and timeliness of budgetary and accounting data and ... reported the same 13 material internal control weaknesses as in the previous year. These pervasive and longstanding financial management challenges

  7. Maintaining the competitive edge; Use of computers for undergraduate instruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, F.; Miller, M.; Podlo, A.L.

    1991-11-01

    There is a revolution in U.S. undergraduate engineering curricula, one marked by a renaissance of interest in liberal arts education, re-emphasis on basic education, and a new emphasis on computer training. The Dept. of Petroleum Engineering at the U. of Texas recognized its weaknesses and in Sept. 1987 designed and implemented new curricula incorporating computer and technical communications skills for undergraduate students. This paper provides details of the curricula changes. The results of this 4-year program demonstrate that problem-solving skills of petroleum engineering students are sharpened through computerized education and proficient communication.

  8. Control on frontal thrust progression by the mechanically weak Gondwana horizon in the Darjeeling-Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Ghosh, Subhajit; Bose, Santanu; Mandal, Nibir; Das, Animesh

    2018-03-01

    This study integrates field evidence with laboratory experiments to show the mechanical effects of a lithologically contrasting stratigraphic sequence on the development of frontal thrusts: Main Boundary Thrust (MBT) and Daling Thrust (DT) in the Darjeeling-Sikkim Himalaya (DSH). We carried out field investigations mainly along two river sections in the DSH: Tista-Kalijhora and Mahanadi, covering an orogen-parallel stretch of 20 km. Our field observations suggest that the coal-shale dominated Gondwana sequence (sandwiched between the Daling Group in the north and Siwaliks in the south) has acted as a mechanically weak horizon to localize the MBT and DT. We simulated a similar mechanical setting in scaled model experiments to validate our field interpretation. In experiments, such a weak horizon at a shallow depth perturbs the sequential thrust progression, and causes a thrust to localize in the vicinity of the weak zone, splaying from the basal detachment. We correlate this weak-zone-controlled thrust with the DT, which accommodates a large shortening prior to activation of the weak zone as a new detachment with ongoing horizontal shortening. The entire shortening in the model is then transferred to this shallow detachment to produce a new sequence of thrust splays. Extrapolating this model result to the natural prototype, we show that the mechanically weak Gondwana Sequence has caused localization of the DT and MBT in the mountain front of DSH.

  9. Integrable subsectors from holography

    NASA Astrophysics Data System (ADS)

    de Mello Koch, Robert; Kim, Minkyoo; Van Zyl, Hendrik J. R.

    2018-05-01

    We consider operators in N=4 super Yang-Mills theory dual to closed string states propagating on a class of LLM geometries. The LLM geometries we consider are specified by a boundary condition that is a set of black rings on the LLM plane. When projected to the LLM plane, the closed strings are polygons with all corners lying on the outer edge of a single ring. The large N limit of correlators of these operators receives contributions from non-planar diagrams even for the leading large N dynamics. We are interested in these operators because a previous weak coupling analysis argues that the net effect of summing the huge set of non-planar diagrams is a simple rescaling of the 't Hooft coupling. We carry out some nontrivial checks of this proposal. Using the su(2|2)² symmetry we determine the two magnon S-matrix and demonstrate that it agrees, up to two loops, with a weak coupling computation performed in the CFT. We also compute the first finite size corrections to both the magnon and the dyonic magnon by constructing solutions to the Nambu-Goto action that carry finite angular momentum. These finite size computations constitute a strong coupling confirmation of the proposal.

  10. COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov (Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637; Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637)

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.

  11. The scaling of weak field phase-only control in Markovian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Am-Shallem, Morag; Kosloff, Ronnie

    We consider population transfer in open quantum systems, which are described by quantum dynamical semigroups (QDS). Using second order perturbation theory of the Lindblad equation, we show that it depends on a weak external field only through the field's autocorrelation function, which is phase independent. Therefore, to leading order in perturbation theory, QDS cannot support dependence of the population transfer on the phase properties of weak fields. We examine an example of weak-field phase-dependent population transfer, and show that the phase dependence comes from the next order in the perturbation.

  12. Definition and classification of negative motor signs in childhood.

    PubMed

    Sanger, Terence D; Chen, Daofen; Delgado, Mauricio R; Gaebler-Spira, Deborah; Hallett, Mark; Mink, Jonathan W

    2006-11-01

    In this report we describe the outcome of a consensus meeting that occurred at the National Institutes of Health in Bethesda, Maryland, March 12 through 14, 2005. The meeting brought together 39 specialists from multiple clinical and research disciplines including developmental pediatrics, neurology, neurosurgery, orthopedic surgery, physical therapy, occupational therapy, physical medicine and rehabilitation, neurophysiology, muscle physiology, motor control, and biomechanics. The purpose of the meeting was to establish terminology and definitions for 4 aspects of motor disorders that occur in children: weakness, reduced selective motor control, ataxia, and deficits of praxis. The purpose of the definitions is to assist communication between clinicians, select homogeneous groups of children for clinical research trials, facilitate the development of rating scales to assess improvement or deterioration with time, and eventually to better match individual children with specific therapies. "Weakness" is defined as the inability to generate normal voluntary force in a muscle or normal voluntary torque about a joint. "Reduced selective motor control" is defined as the impaired ability to isolate the activation of muscles in a selected pattern in response to demands of a voluntary posture or movement. "Ataxia" is defined as an inability to generate a normal or expected voluntary movement trajectory that cannot be attributed to weakness or involuntary muscle activity about the affected joints. "Apraxia" is defined as an impairment in the ability to accomplish previously learned and performed complex motor actions that is not explained by ataxia, reduced selective motor control, weakness, or involuntary motor activity. 
"Developmental dyspraxia" is defined as a failure to have ever acquired the ability to perform age-appropriate complex motor actions that is not explained by the presence of inadequate demonstration or practice, ataxia, reduced selective motor control, weakness, or involuntary motor activity.

  13. Coupled-cluster computations of atomic nuclei

    NASA Astrophysics Data System (ADS)

    Hagen, G.; Papenbrock, T.; Hjorth-Jensen, M.; Dean, D. J.

    2014-09-01

    In the past decade, coupled-cluster theory has seen a renaissance in nuclear physics, with computations of neutron-rich and medium-mass nuclei. The method is efficient for nuclei with product-state references, and it describes many aspects of weakly bound and unbound nuclei. This report reviews the technical and conceptual developments of this method in nuclear physics, and the results of coupled-cluster calculations for nucleonic matter, and for exotic isotopes of helium, oxygen, calcium, and some of their neighbors.

  14. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also use a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  15. Finite element formulation with embedded weak discontinuities for strain localization under dynamic conditions

    DOE PAGES

    Jin, Tao; Mourad, Hashem M.; Bronkhorst, Curt A.; ...

    2017-09-13

    Here, we present an explicit finite element formulation designed for the treatment of strain localization under highly dynamic conditions. We also use a material stability analysis to detect the onset of localization behavior. Finite elements with embedded weak discontinuities are employed with the aim of representing subsequent localized deformation accurately. The formulation and its algorithmic implementation are described in detail. Numerical results are presented to illustrate the usefulness of this computational framework in the treatment of strain localization under highly dynamic conditions, and to examine its performance characteristics in the context of two-dimensional plane-strain problems.

  16. Coherent quantum control of internal conversion: S2 ↔ S1 in pyrazine via S0 → S2/S1 weak field excitation

    NASA Astrophysics Data System (ADS)

    Grinev, Timur; Shapiro, Moshe; Brumer, Paul

    2015-09-01

    Coherent control of internal conversion (IC) between the first (S1) and second (S2) singlet excited electronic states in pyrazine, where the S2 state is populated from the ground singlet electronic state S0 by weak field excitation, is examined. Control is implemented by shaping the laser which excites S2. Excitation and IC are considered simultaneously, using the recently introduced resonance-based control approach. Highly successful control is achieved by optimizing both the amplitude and phase profiles of the laser spectrum. The dependence of control on the properties of resonances in S2 is demonstrated.

  17. Development of a software interface for optical disk archival storage for a new life sciences flight experiments computer

    NASA Technical Reports Server (NTRS)

    Bartram, Peter N.

    1989-01-01

    The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.

  18. Prediction of forces and moments for flight vehicle control effectors: Workplan

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1989-01-01

    Two research activities directed at hypersonic vehicle configurations are currently underway. The first involves the validation of a number of classical local surface inclination methods commonly employed in preliminary design studies of hypersonic flight vehicles. Unlike studies aimed at validating such methods for predicting overall vehicle aerodynamics, this effort emphasizes validating the prediction of forces and moments for flight control studies. Specifically, several vehicle configurations for which experimental or flight-test data are available are being examined. By comparing the theoretical predictions with these data, the strengths and weaknesses of the local surface inclination methods can be ascertained and possible improvements suggested. The second research effort, of significance to control during take-off and landing of most proposed hypersonic vehicle configurations, is aimed at determining the change due to ground effect in control effectiveness of highly swept delta planforms. Central to this research is the development of a vortex-lattice computer program which incorporates an unforced trailing vortex sheet and an image ground plane. With this program, the change in pitching moment of the basic vehicle due to ground proximity, and whether or not there is sufficient control power available to trim, can be determined. In addition to the current work, two different research directions are suggested for future study. The first is aimed at developing an interactive computer program to assist the flight controls engineer in determining the forces and moments generated by different types of control effectors that might be used on hypersonic vehicles. The first phase of this work would deal in the subsonic portion of the flight envelope, while later efforts would explore the supersonic/hypersonic flight regimes. 
The second proposed research direction would explore methods for determining the aerodynamic trim drag of a generic hypersonic flight vehicle and ways in which it can be minimized through vehicle design and trajectory optimization.

  19. The sn stars - Magnetically controlled stellar winds among the helium-weak stars

    NASA Technical Reports Server (NTRS)

    Shore, Steven N.; Brown, Douglas N.; Sonneborn, George

    1987-01-01

    The paper reports observations of magnetically controlled stellar mass outflows in three helium-weak sn stars: HD 21699 = HR 1063; HD 5737 = Alpha Scl; and HD 79158 = 36 Lyn. IUE observations show that the C IV resonance doublet is variable on the rotational timescale but that there are no other strong-spectrum variations in the UV. Magnetic fields, which reverse sign on the rotational timescale, are present in all three stars. This phenomenology is interpreted in terms of jetlike mass loss above the magnetic poles, and these objects are discussed in the context of a general survey of the C IV and Si IV profiles of other more typical helium-weak stars.

  20. Architectural Analysis of a LLNL LWIR Sensor System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, Essex J.; Curry, Jim R.; LaFortune, Kai N.

    The architecture of an LLNL airborne imaging and detection system is considered in this report. The purpose of the system is to find the location of substances of interest by detecting their chemical signatures using a long-wave infrared (LWIR) imager with geo-registration capability. The detection system consists of an LWIR imaging spectrometer as well as a network of computer hardware and analysis software for analyzing the images for the features of interest. The system has been in the operations phase now for well over a year, and as such, there is enough use data and feedback from the primary beneficiary to assess the current successes and shortcomings of the LWIR system architecture. The LWIR system has been successful in providing reliable data collection and the delivery of a report with results. The weakness of the architecture has been identified in two areas: with the network of computer hardware and software, and with the feedback of the state of the system health. Regarding the former, the system computers and software that carry out the data acquisition are too complicated for routine operations and maintenance. With respect to the latter, the primary beneficiary of the instrument’s data does not have enough metrics to use to filter the large quantity of data to determine its utility. In addition to the needs in these two areas, a latent need of one of the stakeholders is identified. This report documents the strengths and weaknesses, as well as proposes a solution for enhancing the architecture that simultaneously addresses the two areas of weakness and leverages them to meet the newly identified latent need.

  1. Estimation of the interference coupling into cables within electrically large multiroom structures

    NASA Astrophysics Data System (ADS)

    Keghie, J.; Kanyou Nana, R.; Schetelig, B.; Potthast, S.; Dickmann, S.

    2010-10-01

    Communication cables are used to transfer data between components of a system. As a part of the EMC analysis of complex systems, it is necessary to determine which level of interference can be expected at the input of connected devices due to the coupling into the irradiated cable. For electrically large systems consisting of several rooms with cables connecting components located in different rooms, an estimation of the coupled disturbances inside cables using commercial field computation software is often not feasible without several restrictions. In many cases, this is related to the non-availability of computing memory and processing power needed for the computation. In this paper, we are going to show that, starting from a topological analysis of the entire system, weak coupling paths within the system can be identified. By neglecting these coupling paths and using the transmission line approach, the original system will be simplified so that a simpler estimation is possible. Using the example of a system which is composed of two rooms, multiple apertures, and a network cable located in both chambers, it is shown that an estimation of the coupled disturbances due to external electromagnetic sources is feasible with this approach. Starting from an incident electromagnetic field, we determine transfer functions describing the coupling means (apertures, cables). Using these transfer functions and the knowledge of the weak coupling paths above, a decision is taken regarding the paths that can be neglected during the estimation. The estimation of the coupling into the cable is then made while taking only paths with strong coupling into account. The remaining part of the wiring harness in areas with weak coupling is represented by its input impedance. A comparison with the original network shows a good agreement.
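
    The weak-path neglect step can be sketched numerically: per-path transfer functions cascade (their dB values add), paths far below the strongest one are dropped, and the surviving contributions are power-summed. All path names and attenuation values below are hypothetical, chosen only to illustrate the bookkeeping:

```python
import math

def path_gain_db(stages):
    """A cascaded coupling path (aperture, cable section, ...): per-stage
    transfer functions multiply, so their dB values add."""
    return sum(stages)

def estimate_coupling_db(paths, neglect_margin_db=40.0):
    """Topological simplification: drop every path more than
    `neglect_margin_db` below the strongest one, then power-sum the rest."""
    gains = {name: path_gain_db(st) for name, st in paths.items()}
    strongest = max(gains.values())
    kept = {n: g for n, g in gains.items() if g >= strongest - neglect_margin_db}
    total = 10.0 * math.log10(sum(10 ** (g / 10.0) for g in kept.values()))
    return total, sorted(kept)

# Hypothetical two-room scenario: a direct aperture-to-cable path versus a
# much weaker path through two apertures (attenuations in dB, illustrative).
paths = {
    "aperture->cable":           [-30.0, -10.0],
    "aperture->aperture->cable": [-30.0, -45.0, -10.0],
}
total, kept = estimate_coupling_db(paths)
print(kept)  # ['aperture->cable'] -- the weak path is neglected
```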

  2. Quantum interference control of an isolated resonance lifetime in the weak-field limit.

    PubMed

    García-Vela, A

    2015-11-21

    Resonance states play an important role in a large variety of physical and chemical processes. Thus, controlling the resonance behavior, and particularly a key property like the resonance lifetime, opens up the possibility of controlling those resonance mediated processes. While such a resonance control is possible by applying strong-field approaches, the development of flexible weak-field control schemes that do not alter significantly the system dynamics still remains a challenge. In this work, one such control scheme within the weak-field regime is proposed for the first time in order to modify the lifetime of an isolated resonance state. The basis of the scheme suggested is quantum interference between two pathways induced by laser fields, that pump wave packet amplitude to the target resonance under control. The simulations reported here show that the scheme allows for both enhancement and quenching of the resonance survival lifetime, being particularly flexible to achieve large lifetime enhancements. Control effects on the resonance lifetime take place only while the pulse is operating. In addition, the conditions required to generate the two interfering quantum pathways are found to be rather easy to meet for general systems, which makes the experimental implementation straightforward and implies the wide applicability of the control scheme.

  3. Fiscal Year 2010 U.S. Government Financial Statements: Federal Government Continues To Face Financial Management And Long-Term Fiscal Challenges

    DTIC Science & Technology

    2011-03-09

    effective oversight of federal government programs and policies. Over the years, certain material weaknesses in internal control over... ineffective process for preparing the consolidated financial statements. In addition to the material weaknesses underlying these major impediments, GAO... noted material weaknesses involving billions of dollars in improper payments, information security, and tax collection activities. With regard to the

  4. Labels, cognomes, and cyclic computation: an ethological perspective.

    PubMed

    Murphy, Elliot

    2015-01-01

    For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. The intention of this paper is to progress the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested.

  5. Generic, Type-Safe and Object Oriented Computer Algebra Software

    NASA Astrophysics Data System (ADS)

    Kredel, Heinz; Jolly, Raphael

    Advances in computer science, in particular object oriented programming, and software engineering have had little practical impact on computer algebra systems in the last 30 years. The software design of existing systems is still dominated by ad-hoc memory management, weakly typed algorithm libraries and proprietary domain specific interactive expression interpreters. We discuss a modular approach to computer algebra software: usage of state-of-the-art memory management and run-time systems (e.g. JVM); usage of strongly typed, generic, object oriented programming languages (e.g. Java); and usage of general purpose, dynamic interactive expression interpreters (e.g. Python). To illustrate the workability of this approach, we have implemented and studied computer algebra systems in Java and Scala. In this paper we report on the current state of this work by presenting new examples.
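
    The strongly typed, generic design can be sketched in miniature. The following is an illustrative Python sketch (not the authors' Java/Scala code): a `Ring` interface, one concrete coefficient ring, and a polynomial type parameterized over any ring, so the same multiplication algorithm works for every coefficient domain:

```python
from dataclasses import dataclass
from typing import Generic, Protocol, TypeVar

class Ring(Protocol):
    """Minimal ring interface: what a generic algorithm may assume."""
    def add(self, other): ...
    def mul(self, other): ...
    def zero(self): ...

R = TypeVar("R", bound=Ring)

@dataclass(frozen=True)
class ModInt:
    """Integers modulo m -- one concrete coefficient ring."""
    value: int
    m: int
    def add(self, other: "ModInt") -> "ModInt":
        return ModInt((self.value + other.value) % self.m, self.m)
    def mul(self, other: "ModInt") -> "ModInt":
        return ModInt((self.value * other.value) % self.m, self.m)
    def zero(self) -> "ModInt":
        return ModInt(0, self.m)

@dataclass(frozen=True)
class Polynomial(Generic[R]):
    """Dense polynomial over an arbitrary coefficient ring R."""
    coeffs: tuple
    def mul(self, other: "Polynomial[R]") -> "Polynomial[R]":
        zero = self.coeffs[0].zero()
        out = [zero] * (len(self.coeffs) + len(other.coeffs) - 1)
        for i, a in enumerate(self.coeffs):
            for j, b in enumerate(other.coeffs):
                out[i + j] = out[i + j].add(a.mul(b))
        return Polynomial(tuple(out))

# (1 + x)^2 over Z/5Z  ->  1 + 2x + x^2
one = ModInt(1, 5)
p = Polynomial((one, one))
sq = p.mul(p)
print([c.value for c in sq.coeffs])  # [1, 2, 1]
```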

  6. Labels, cognomes, and cyclic computation: an ethological perspective

    PubMed Central

    Murphy, Elliot

    2015-01-01

    For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. The intention of this paper is to progress the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested. PMID:26089809

  7. Computation of porosity redistribution resulting from thermal convection in slanted porous layers

    NASA Astrophysics Data System (ADS)

    Gouze, Phillippe; Coudrain-Ribstein, Anne; Bernard, Dominique

    1994-01-01

    Unlike fluid displacement due to regional hydraulic head, thermoconvective motions are generally slow. The thermal impacts of such movements are very weak, whereas their chemical impacts may be significant because of their cumulated effects over geologic time. For nonhorizontal thick sedimentary reservoirs, the fluid velocity due to thermal convection can be accurately approximated by an explicit function of the dip of the reservoir, the permeability, and the difference in thermal conductivity between the aquifer and the confining beds. The latter parameter controls the rotation direction of the flow and, for clastic reservoirs bounded by impervious clayey media, fluid moves up the slope along the caprock layer. As the fluid velocity is small, the major rock-forming minerals control the fluid composition by thermodynamic equilibrium. Thus, whereas the volume of redistributed mineral depends on the volume of water circulated, the localization of porosity enhancement is strongly controlled by the reservoir mineralogy. With realistic values of permeability and layer thickness, several per cent of secondary porosity per million years can be created or lost at shallow depth (less than 2 km), depending on the chlorinity, the set of representative minerals and the temperature. In sandstone reservoirs and high-chlorinity calcarenite reservoirs, the porosity decreases under the caprock where hydrocarbons can accumulate. In chloride-depleted carbonate aquifers, the simultaneous control by carbonates, silica and aluminosilicates can produce a decrease of porosity above the bedrock and an enhancement of porosity under the caprock. However, computations show that the quality of the upper part of the reservoir is mainly reduced by the precipitation of silica and clays.

  8. A new perspective on the perceptual selectivity of attention under load.

    PubMed

    Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren

    2014-05-01

    The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
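
    One computational ingredient of the neural theory of visual attention is an exponential race among possible categorizations, in which an item's probability of being selected first is proportional to its processing rate. A minimal sketch of that race property, with purely illustrative rates (not parameters from the paper):

```python
def selection_probabilities(rates):
    """Exponential race: with independent exponential finishing times,
    the item with rate v_x finishes first with probability v_x / sum(v)."""
    total = sum(rates.values())
    return {name: v / total for name, v in rates.items()}

# Illustrative rates only: top-down task relevance has tripled the
# target's processing rate relative to each distractor.
rates = {"target": 3.0, "distractor_1": 1.0, "distractor_2": 1.0}
probs = selection_probabilities(rates)
print(probs["target"])  # 0.6
```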

  9. Using Laboratory Homework to Facilitate Skill Integration and Assess Understanding in Intermediate Physics Courses

    NASA Astrophysics Data System (ADS)

    Johnston, Marty; Jalkio, Jeffrey

    2013-04-01

    By the time students have reached the intermediate level physics courses they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing the traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.

  10. A secured authentication protocol for wireless sensor networks using elliptic curves cryptography.

    PubMed

    Yeh, Hsiu-Lien; Chen, Tien-Ho; Liu, Pin-Chuan; Kim, Tai-Hoo; Wei, Hsin-Wen

    2011-01-01

    User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M. L. Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher security WSNs.
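
    The algebraic property such ECC protocols build on, commutativity of scalar multiplication, can be sketched on a toy curve. This is illustrative only and is not the protocol proposed in the paper: the curve, base point, and keys below are made up and far too small for real security:

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 (illustrative parameters only).
P, A = 97, 2
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(x, P - 2, P)

def add(p, q):
    """Elliptic-curve point addition; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # p + (-p) = infinity
    if p == q:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P  # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1) % P  # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, p)
        p = add(p, p)
        k >>= 1
    return acc

# Commutativity k1*(k2*G) == k2*(k1*G) is what lets two parties derive
# a shared value from exchanged public points in ECC-based schemes.
k_user, k_node = 13, 29
shared1 = mul(k_user, mul(k_node, G))
shared2 = mul(k_node, mul(k_user, G))
print(shared1 == shared2)  # True
```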

  11. Spin polarized photons from an axially charged plasma at weak coupling: Complete leading order

    DOE PAGES

    Mamo, Kiminad A.; Yee, Ho-Ung

    2016-03-24

    In the presence of (approximately conserved) axial charge in the QCD plasma at finite temperature, the emitted photons are spin aligned, which is a unique P- and CP-odd signature of axial charge in the photon emission observables. We compute this “P-odd photon emission rate” in a weak coupling regime at a high temperature limit to complete leading order in the QCD coupling constant: the leading log as well as the constant under the log. As in the P-even total emission rate in the literature, the computation of the P-odd emission rate at leading order consists of three parts: (1) Compton and pair annihilation processes with hard momentum exchange, (2) soft t- and u-channel contributions with hard thermal loop resummation, (3) Landau-Pomeranchuk-Migdal resummation of collinear bremsstrahlung and pair annihilation. In conclusion, we present analytical and numerical evaluations of these contributions to our P-odd photon emission rate observable.

  12. Libstatmech and applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Yu, Tianhong

    In this work an introduction to Libstatmech is presented and applications especially to astrophysics are discussed. Libstatmech is a C toolkit for computing the statistical mechanics of fermions and bosons, written on top of libxml and gsl (GNU Scientific Library). Calculations of the Thomas-Fermi screening model and of Bose-Einstein condensates based on libstatmech demonstrate the expected results. For astrophysical applications, a simple Type Ia supernova model is established to run the network calculation with weak reactions, in which libstatmech contributes to compute the electron chemical potential and allows the weak reverse rates to be calculated from detailed balance. Starting with pure 12C and T9 = 1.8, we find that at high initial density (rho ~ 9×10^9 g/cm^3) there are relatively large abundances of neutron-rich iron-group isotopes (e.g. 66Ni, 50Ti, 48Ca) produced during the explosion, and Ye can drop to ~0.4, which indicates that the rare, high-density Type Ia supernovae may help to explain the 48Ca and 50Ti effect in FUN CAIs.
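
    As a flavor of the statistical-mechanics quantities such a toolkit computes, here is a textbook sketch (not the libstatmech API) of the ideal-gas Bose-Einstein condensate fraction below the critical temperature:

```python
def condensate_fraction(T, Tc):
    """Ideal Bose gas: N0/N = 1 - (T/Tc)^(3/2) for T <= Tc, else 0."""
    return max(0.0, 1.0 - (T / Tc) ** 1.5)

# Condensate fraction at a few temperatures relative to Tc
for T in (0.0, 0.5, 1.0, 1.2):
    print(f"T/Tc = {T:.1f}: condensate fraction = {condensate_fraction(T, 1.0):.3f}")
```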

  13. A Secured Authentication Protocol for Wireless Sensor Networks Using Elliptic Curves Cryptography

    PubMed Central

    Yeh, Hsiu-Lien; Chen, Tien-Ho; Liu, Pin-Chuan; Kim, Tai-Hoo; Wei, Hsin-Wen

    2011-01-01

    User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M.L Das protocol and a cryptanalysis of Das’ protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher security WSNs. PMID:22163874

  14. Influence of rotator cuff tears on glenohumeral stability during abduction tasks.

    PubMed

    Hölscher, Thomas; Weber, Tim; Lazarev, Igor; Englert, Carsten; Dendorfer, Sebastian

    2016-09-01

    One of the main goals in reconstructing rotator cuff tears is the restoration of glenohumeral joint stability, which is subsequently of utmost importance in order to prevent degenerative damage such as superior labral anterior posterior (SLAP) lesion, arthrosis, and malfunction. The goal of the current study was to facilitate musculoskeletal models in order to estimate glenohumeral instability introduced by muscle weakness due to cuff lesions. Inverse dynamics simulations were used to compute joint reaction forces for several static abduction tasks with different muscle weakness. Results were compared with the existing literature in order to ensure the model validity. Further arm positions taken from activities of daily living, requiring the rotator cuff muscles were modeled and their contribution to joint kinetics computed. Weakness of the superior rotator cuff muscles (supraspinatus; infraspinatus) leads to a deviation of the joint reaction force to the cranial dorsal rim of the glenoid. Massive rotator cuff defects showed higher potential for glenohumeral instability in contrast to single muscle ruptures. The teres minor muscle seems to substitute lost joint torque during several simulated muscle tears to maintain joint stability. Joint instability increases with cuff tear size. Weakness of the upper part of the rotator cuff leads to a joint reaction force closer to the upper glenoid rim. This indicates the comorbidity of cuff tears with SLAP lesions. The teres minor is crucial for maintaining joint stability in case of massive cuff defects and should be uprated in clinical decision-making. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:1628-1635, 2016.

  15. Prospects and expectations for unstructured methods

    NASA Technical Reports Server (NTRS)

    Baker, Timothy J.

    1995-01-01

    The last decade has witnessed a vigorous and sustained research effort on unstructured methods for computational fluid dynamics. Unstructured mesh generators and flow solvers have evolved to the point where they are now in use for design purposes throughout the aerospace industry. In this paper we survey the various mesh types, structured as well as unstructured, and examine their relative strengths and weaknesses. We argue that unstructured methodology does offer the best prospect for the next generation of computational fluid dynamics algorithms.

  16. Bounds for the Z-spectral radius of nonnegative tensors.

    PubMed

    He, Jun; Liu, Yan-Min; Ke, Hua; Tian, Jun-Kang; Li, Xiang

    2016-01-01

    In this paper, we have proposed some new upper bounds for the largest Z-eigenvalue of an irreducible weakly symmetric and nonnegative tensor, which improve the known upper bounds obtained in Chang et al. (Linear Algebra Appl 438:4166-4182, 2013), Song and Qi (SIAM J Matrix Anal Appl 34:1581-1595, 2013), He and Huang (Appl Math Lett 38:110-114, 2014), Li et al. (J Comput Anal Appl 483:182-199, 2015), He (J Comput Anal Appl 20:1290-1301, 2016).
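    As an illustrative aside (not part of the record above), the largest Z-eigenvalue that these bounds constrain can be approximated numerically by a plain higher-order power iteration, x ← Tx²/‖Tx²‖, for a symmetric nonnegative tensor. A minimal NumPy sketch, assuming an order-3 tensor and omitting the stabilizing shift used in the SS-HOPM literature (the all-ones example below converges without it):

```python
import numpy as np

def z_eig_power(T, x0=None, tol=1e-10, max_iter=1000):
    """Approximate the largest Z-eigenvalue of a symmetric order-3 tensor
    T (shape n x n x n) by plain power iteration: x <- T x^2 / ||T x^2||."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n) if x0 is None else x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = np.einsum('ijk,j,k->i', T, x, x)   # (T x^2)_i = sum_jk T_ijk x_j x_k
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ np.einsum('ijk,j,k->i', T, x_new, x_new)
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

# All-ones 2x2x2 tensor: the maximizer is x = (1,1)/sqrt(2), lambda = 2*sqrt(2)
T = np.ones((2, 2, 2))
lam, x = z_eig_power(T)
```

Any upper bound of the kind derived in the paper must dominate the value this iteration converges to, which makes such a sketch a handy sanity check on small examples.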

  17. 3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.

    2016-12-01

    We have developed a highly computationally efficient algorithm for applying 3D Kirchhoff depth migration to teleseismic receiver function data. By combining the primary PS arrival with later multiple arrivals, we are able to recover better knowledge of the Earth's discontinuity structure (transmission and reflection). This method is particularly useful compared with the traditional CCP method when dipping structures, such as a subducting slab, are encountered during imaging. We apply our method to receiver function data from the regional Cascadia subduction zone and obtain a high-resolution 3D migration image for both primaries and multiples. The image shows a clear slab weak zone (slab hole) in the upper plate boundary beneath Northern California and the whole of Oregon. Compared with previous 2D receiver function images from 2D arrays (CAFE and CASC93), the position of the weak zone shows interesting coherency. This weak zone also coincides with missing local seismicity and rising heat, which leads us to consider the oceanic plate structure and the hydraulic fluid processes active during the formation and migration of the subducting slab.

  18. Microstructure and micromechanical elastic properties of weak layers

    NASA Astrophysics Data System (ADS)

    Köchle, Berna; Matzl, Margret; Proksch, Martin; Schneebeli, Martin

    2014-05-01

    Weak layers are the mechanically most important stratigraphic layers for avalanches. Yet little is known about their exact geometry and their micromechanical properties. Distinguishing weak layers or interfaces is essential for assessing stability. However, except by destructive mechanical tests, they cannot easily be identified and characterized in the field. We cast natural weak layers and their adjacent layers in the field during two winter seasons and scanned them non-destructively with X-ray computed tomography at a resolution of 10-20 µm. Reconstructed three-dimensional models of centimeter-sized layered samples allow the change of structural properties to be calculated. We found that structural transitions cannot always be expressed by geometric measures such as density or grain size. In addition, we calculated the Young's modulus and Poisson's ratio of the individual layers with voxel-based finite element simulations. As any material has its characteristic elastic parameters, these may potentially differentiate individual layers, and therefore different microstructures. Our results show that Young's modulus correlates well with density but does not indicate snow's microstructure, in contrast to Poisson's ratio, which tends to be lower for strongly anisotropic forms like cup crystals and facets.

  19. Exploration of the validity of weak magnets as a suitable placebo in trials of magnetic therapy.

    PubMed

    Greaves, C J; Harlow, T N

    2008-06-01

    To investigate whether 50 mT magnetic bracelets would be suitable as a placebo control condition for studying the pain relieving effects of higher strength magnetic bracelets in arthritis. Randomised controlled comparison between groups given either a weak 50 mT or a higher strength 180 mT magnetic bracelet to test. Four arthritis support groups in Devon, UK. One hundred sixteen people with osteoarthritis and rheumatoid arthritis. Beliefs about group allocation and expectation of benefit. There was no significant difference between groups in beliefs about allocation to the 'active magnet' group. Participants were, however, more likely to have an expectation of benefit (pain relief) with the higher strength magnetic bracelets. Asking about perceived group allocation is not sufficient to rule out placebo effects in trials of magnetic bracelets which use weak magnets as a control condition. There are differences in expectation of benefit between different magnet strengths.

  20. Self-regulation and social pressure reduce prejudiced responding and increase the motivation to be non-prejudiced.

    PubMed

    Buzinski, Steven G; Kitchens, Michael B

    2017-01-01

    Self-regulation constrains the expression of prejudice, but when self-regulation falters, the immediate environment can act as an external source of prejudice regulation. This hypothesis derives from work demonstrating that external controls and internal self-regulation can prompt goal pursuit in the absence of self-imposed controls. Across four studies, we found support for this complementary model of prejudice regulation. In Study 1, self-regulatory fatigue resulted in less motivation to be non-prejudiced, compared to a non-fatigued control. In Study 2, strong (vs. weak) perceived social pressure was related to greater motivation to be non-prejudiced. In Study 3, dispositional self-regulation predicted non-prejudice motivation when perceived social pressure was weak or moderate, but not when it was strong. Finally, in Study 4 self-regulatory fatigue increased prejudice when social pressure was weak but not when it was strong.

  1. Enhanced weak-signal sensitivity in two-photon microscopy by adaptive illumination.

    PubMed

    Chu, Kengyeh K; Lim, Daryl; Mertz, Jerome

    2007-10-01

    We describe a technique to enhance both the weak-signal relative sensitivity and the dynamic range of a laser scanning optical microscope. The technique is based on maintaining a fixed detection power by fast feedback control of the illumination power, thereby transferring high measurement resolution to weak signals while virtually eliminating the possibility of image saturation. We analyze and demonstrate the benefits of adaptive illumination in two-photon fluorescence microscopy.

  2. Application of Bogolyubov's theory of weakly nonideal Bose gases to the A+A, A+B, B+B reaction-diffusion system

    NASA Astrophysics Data System (ADS)

    Konkoli, Zoran

    2004-01-01

    Theoretical methods for dealing with diffusion-controlled reactions inevitably rely on some kind of approximation, and to find the one that works on a particular problem is not always easy. Here the approximation used by Bogolyubov to study a weakly nonideal Bose gas, referred to as the weakly nonideal Bose gas approximation (WBGA), is applied in the analysis of three reaction-diffusion models: (i) A+A→Ø, (ii) A+B→Ø, and (iii) A+A,B+B,A+B→Ø (the ABBA model). Two types of WBGA are considered, the simpler WBGA-I and the more complicated WBGA-II. All models are defined on the lattice to facilitate comparison with computer experiment (simulation). It is found that the WBGA describes the A+B reaction well, it reproduces the correct d/4 density decay exponent. However, it fails in the case of the A+A reaction and the ABBA model. (To cure the deficiency of WBGA in dealing with the A+A model, a hybrid of the WBGA and Kirkwood superposition approximations is suggested.) It is shown that the WBGA-I is identical to the dressed-tree calculation suggested by Lee [J. Phys. A 27, 2633 (1994)], and that the dressed-tree calculation does not lead to the d/2 density decay exponent when applied to the A+A reaction, as normally believed, but it predicts the d/4 decay exponent. Last, the usage of the small n0 approximation suggested by Mattis and Glasser [Rev. Mod. Phys. 70, 979 (1998)] is questioned if used beyond the A+B reaction-diffusion model.

  3. Molecules Designed to Contain Two Weakly Coupled Spins with a Photoswitchable Spacer.

    PubMed

    Uber, Jorge Salinas; Estrader, Marta; Garcia, Jordi; Lloyd-Williams, Paul; Sadurní, Anna; Dengler, Dominik; van Slageren, Joris; Chilton, Nicholas F; Roubeau, Olivier; Teat, Simon J; Ribas-Ariño, Jordi; Aromí, Guillem

    2017-10-04

    Controlling the charges and spins of molecules lies at the heart of spintronics. A photoswitchable molecule consisting of two independent spins separated by a photoswitchable moiety was designed in the form of the new ligand H4L, which features a dithienylethene photochromic unit and two lateral coordinating moieties, and yields molecules with [MM⋅⋅⋅MM] topology. Compounds [M4L2(py)6] (M=Cu, 1; Co, 2; Ni, 3; Zn, 4) were prepared and studied by single-crystal X-ray diffraction (SCXRD). Different metal centers can be selectively distributed among the two chemically distinct sites of the ligand, and this enables the preparation of many double-spin systems. Heterometallic [MM'⋅⋅⋅M'M] analogues with formulas [Cu2Ni2L2(py)6] (5), [Co2Ni2L2(py)6] (6), [Co2Cu2L2(py)6] (7), [Cu2Zn2L2(py)6] (8), and [Ni2Zn2L2(py)6] (9) were prepared and analyzed by SCXRD. Their composition was established unambiguously. All complexes exhibit two weakly interacting [MM'] moieties, some of which embody two-level quantum systems. Compounds 5 and 8 each exhibit a pair of weakly coupled S=1/2 spins that show quantum coherence in pulsed Q-band EPR spectroscopy, as required for quantum computing, with good phase memory times (T_M = 3.59 and 6.03 μs at 7 K). Reversible photoswitching of all the molecules was confirmed in solution. DFT calculations on 5 indicate that the interaction between the two spins of the molecule can be switched on and off on photocyclization. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √(i/p (1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.

  5. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let πi denote the two-sided P-value associated with the ith feature Z-score and π(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π(i))/√(i/p(1−i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365
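    The HC objective above is simple to evaluate directly from a list of Z-scores. A minimal sketch (illustrative only, not the authors' code; the z-scores below are invented) that sorts the two-sided normal P-values, scans the smallest α₀·p of them, and returns the |Z| cutoff at the maximizing order statistic:

```python
import math

def hc_threshold(z_scores, alpha0=0.5):
    """Higher-criticism threshold selection (after Donoho & Jin).
    Maximizes (i/p - pi_(i)) / sqrt(i/p * (1 - i/p)) over the smallest
    alpha0*p ordered P-values and returns the corresponding |z| cutoff."""
    p = len(z_scores)
    # two-sided normal P-value: P(|Z| > z) = erfc(|z| / sqrt(2))
    pvals = [math.erfc(abs(z) / math.sqrt(2)) for z in z_scores]
    order = sorted(range(p), key=lambda i: pvals[i])
    best_hc, best_idx = -math.inf, order[0]
    for rank, i in enumerate(order[: max(1, int(alpha0 * p))], start=1):
        frac = rank / p
        hc = (frac - pvals[i]) / math.sqrt(frac * (1 - frac))
        if hc > best_hc:
            best_hc, best_idx = hc, i
    return abs(z_scores[best_idx])   # keep features with |z| >= this cutoff

z = [3.2, 0.1, -2.8, 0.4, 2.9, -0.2, 0.05, 1.0]
t = hc_threshold(z)
selected = [zz for zz in z if abs(zz) >= t]
```

Restricting the scan to the smallest half of the P-values (α₀ = 0.5) mirrors the common convention in the HC literature and avoids the degenerate denominator at i = p.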

  6. On Convergence of Extended Dynamic Mode Decomposition to the Koopman Operator

    NASA Astrophysics Data System (ADS)

    Korda, Milan; Mezić, Igor

    2018-04-01

    Extended dynamic mode decomposition (EDMD) (Williams et al. in J Nonlinear Sci 25(6):1307-1346, 2015) is an algorithm that approximates the action of the Koopman operator on an N-dimensional subspace of the space of observables by sampling at M points in the state space. Assuming that the samples are drawn either independently or ergodically from some measure μ , it was shown in Klus et al. (J Comput Dyn 3(1):51-79, 2016) that, in the limit as M→ ∞, the EDMD operator K_{N,M} converges to K_N, where K_N is the L_2(μ )-orthogonal projection of the action of the Koopman operator on the finite-dimensional subspace of observables. We show that, as N → ∞, the operator K_N converges in the strong operator topology to the Koopman operator. This in particular implies convergence of the predictions of future values of a given observable over any finite time horizon, a fact important for practical applications such as forecasting, estimation and control. In addition, we show that accumulation points of the spectra of K_N correspond to the eigenvalues of the Koopman operator with the associated eigenfunctions converging weakly to an eigenfunction of the Koopman operator, provided that the weak limit of the eigenfunctions is nonzero. As a by-product, we propose an analytic version of the EDMD algorithm which, under some assumptions, allows one to construct K_N directly, without the use of sampling. Finally, under additional assumptions, we analyze convergence of K_{N,N} (i.e., M=N), proving convergence, along a subsequence, to weak eigenfunctions (or eigendistributions) related to the eigenmeasures of the Perron-Frobenius operator. No assumptions on the observables belonging to a finite-dimensional invariant subspace of the Koopman operator are required throughout.
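    The finite-sample operator K_{N,M} described above is, in practice, the solution of a least-squares problem on dictionary evaluations: stack Ψ(X) and Ψ(Y) over the M samples and solve Ψ(X)K ≈ Ψ(Y). A minimal NumPy sketch (illustrative only; the scalar map F(x) = 0.9x and the polynomial dictionary are invented for the example, and the Koopman eigenvalues on span{1, x, x²} are then exactly 1, 0.9, 0.81):

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Extended DMD: approximate the Koopman operator on span(dictionary).
    X, Y: (M,) arrays of sample pairs y_m = F(x_m); dictionary: callables."""
    PsiX = np.column_stack([g(X) for g in dictionary])  # M x N
    PsiY = np.column_stack([g(Y) for g in dictionary])  # M x N
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)     # K_{N,M}, N x N
    return K

# Linear map F(x) = 0.9 x with dictionary {1, x, x^2}
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
Y = 0.9 * X
K = edmd(X, Y, [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2])
eigs = np.sort(np.linalg.eigvals(K).real)
```

Because the chosen dictionary spans a Koopman-invariant subspace here, K_{N,M} recovers the exact eigenvalues for any sufficiently rich sample, which is the finite-N analogue of the convergence result the paper proves.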

  7. Organ and tissue donation in clinical settings: a systematic review of the impact of interventions aimed at health professionals

    PubMed Central

    2014-01-01

    In countries where presumed consent for organ donation does not apply, health professionals (HP) are key players for identifying donors and obtaining their consent. This systematic review was designed to verify the efficacy of interventions aimed at HPs to promote organ and tissue donation in clinical settings. CINAHL (1982 to 2012), COCHRANE LIBRARY, EMBASE (1974 to 2012), MEDLINE (1966 to 2012), PsycINFO (1960 to 2012), and ProQuest Dissertations and Theses were searched for papers published in French or English until September 2012. Studies were considered if they met the following criteria: aimed at improving HPs’ practices regarding the donation process or at increasing donation rates; HPs working in clinical settings; and interventions with a control group or pre-post assessments. Intervention behavioral change techniques were analyzed using a validated taxonomy. A risk ratio was computed for each study having a control group. A total of 15 studies were identified, of which only 5 had a control group. Interventions were either educational, organizational or a combination of both, and had a weak theoretical basis. The most common behavior change technique was providing instruction. Two sets of interventions showed a significant risk ratio. However, most studies did not report the information needed to compute their efficacy. Therefore, interventions aimed at improving the donation process or at increasing donation rates should be based on sound theoretical frameworks. They would benefit from more rigorous evaluation methods to ensure good knowledge translation and appropriate organizational decisions to improve professional practices. PMID:24628967

  8. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egid, Adin Ezra

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data studied in this research paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user, or a small set of users, on the network. Additionally, we study indicators related to the speed of movement of a user based on the physical locations of their current and previous logins. These data can be ascertained from the users' IP addresses, and are likely very similar to the fraud detection schemes regularly utilized by credit card networks to detect anomalous activity. In future work we would look to find a way to combine these indicators for use as an internal fraud detection system.
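    The HHI is indeed computationally trivial: it is the sum of squared percentage shares, ranging from near 0 (activity spread over many users) to 10000 (one user accounts for everything). A minimal sketch (illustrative; the session counts below are invented, not LANL data):

```python
def hhi(counts):
    """Herfindahl-Hirschman Index of a distribution of counts.
    Shares are expressed in percent, so the index ranges from
    ~0 (highly dispersed) up to 10000 (a single 'monopolist')."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return sum((100.0 * c / total) ** 2 for c in counts)

# Monthly VPN session counts per user (hypothetical):
assert hhi([100]) == 10000.0          # one user owns all sessions
balanced = hhi([25, 25, 25, 25])      # four equal users
skewed = hhi([97, 1, 1, 1])           # one user dominating
```

A sudden month-over-month jump in the index for a subnet or project group would be the kind of lagging signal the record describes.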

  9. Commercial absorption chiller models for evaluation of control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeppel, E.A.; Klein, S.A.; Mitchell, J.W.

    1995-08-01

    A steady-state computer simulation model of a direct-fired double-effect water-lithium bromide absorption chiller in the parallel-flow configuration was developed from first principles. Unknown model parameters such as heat transfer coefficients were determined by matching the model's calculated state points and coefficient of performance (COP) against nominal full-load operating data and COPs obtained from a manufacturer's catalog. The model compares favorably with the manufacturer's performance ratings for varying water circuit (chilled and cooling) temperatures at full-load conditions and for chiller part-load performance. The model was used (1) to investigate the effect of varying the water circuit flow rates with the chiller load and (2) to optimize chiller part-load performance with respect to the distribution and flow of the weak solution.

  10. Bridging, brokerage and betweenness

    PubMed Central

    Everett, Martin G.; Valente, Thomas W.

    2017-01-01

    Valente and Fujimoto (2010) proposed a measure of brokerage in networks based on Granovetter’s classic work on the strength of weak ties. Their paper identified the need for finding node-based measures of brokerage that consider the entire network structure, not just a node’s local environment. The measures they propose, aggregating the average change in cohesion for a node’s links, has several limitations. In this paper we review their method and show how the idea can be modified by using betweenness centrality as an underpinning concept. We explore the properties of the new method and provide point, normalized, and network level variations. This new approach has two advantages, first it provides a more robust means to normalize the measure to control for network size, and second, the modified measure is computationally less demanding making it applicable to larger networks. PMID:28239229
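    Betweenness centrality, the underpinning concept of the modified brokerage measure, can be computed for all nodes in O(nm) time with Brandes' algorithm, which is what makes the measure "computationally less demanding" for larger networks. A self-contained sketch for small undirected, unweighted graphs (illustrative; the path-graph example is invented):

```python
from collections import deque

def betweenness(adj):
    """Unnormalized shortest-path betweenness centrality (Brandes' algorithm)
    for an undirected graph given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # single-source BFS with shortest-path counting
        dist = {s: 0}
        sigma = {v: 0.0 for v in adj}; sigma[s] = 1.0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2.0   # each undirected pair was counted from both endpoints
    return bc

# Path graph 0-1-2-3-4: the middle node brokers the most node pairs
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
bc = betweenness(path)
```

On the path graph, node 2 lies on the shortest paths of four ordered-free pairs ((0,3), (0,4), (1,3), (1,4)), so its score is 4, while the end nodes broker nothing.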

  11. A multimethod approach to examining usability of Web privacy polices and user agents for specifying privacy preferences.

    PubMed

    Proctor, Robert W; Vu, Kim-Phuong L

    2007-05-01

    Because all research methods have strengths and weaknesses, a multimethod approach often provides the best way to understand human behavior in applied settings. We describe how a multimethod approach was employed in a series of studies designed to examine usability issues associated with two aspects of online privacy: comprehension of privacy policies and configuration of privacy preferences for an online user agent. Archival research, user surveys, data mining, quantitative observations, and controlled experiments each yielded unique findings that, together, contributed to increased understanding of online-privacy issues for users. These findings were used to evaluate the accessibility of Web privacy policies to computer-literate users, determine whether people can configure user agents to achieve specific privacy goals, and discover ways in which the usability of those agents can be improved.

  12. Freezing point depression in model Lennard-Jones solutions

    NASA Astrophysics Data System (ADS)

    Koschke, Konstantin; Jörg Limbach, Hans; Kremer, Kurt; Donadio, Davide

    2015-09-01

    Crystallisation of liquid solutions is of the utmost importance in a wide variety of processes in materials, atmospheric and food science. Depending on the type and concentration of solutes the freezing point shifts, thus allowing control of the thermodynamics of complex fluids. Here we investigate the basic principles of solute-induced freezing point depression by computing the melting temperature of a Lennard-Jones fluid with low concentrations of solutes, by means of equilibrium molecular dynamics simulations. The effect of solvophilic and weakly solvophobic solutes at low concentrations is analysed, systematically scanning solute size and concentration. We identify the range of parameters that produce deviations from the linear dependence of the freezing point on the molal concentration of solutes, expected for ideal solutions. Our simulations also allow us to link the shifts in coexistence temperature to the microscopic structure of the solutions.
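    The ideal-solution baseline against which such deviations are measured is the linear dilute-limit law ΔT_f = i·K_f·b, with K_f the cryoscopic constant and b the molality. A trivial sketch (illustrative; the water constant is a textbook value and the van 't Hoff factor i = 2 assumes full dissociation of NaCl):

```python
def freezing_point_depression(kf, molality, i=1):
    """Ideal (dilute-limit) freezing-point depression: dT = i * Kf * b,
    where Kf is the cryoscopic constant (K kg/mol) and b the molality."""
    return i * kf * molality

# Water: Kf = 1.86 K kg/mol; 0.5 mol/kg NaCl, assumed fully dissociated (i = 2)
dT = freezing_point_depression(1.86, 0.5, i=2)
```

The simulations in the record quantify where real (and model Lennard-Jones) solutions depart from this straight line as solute size and concentration grow.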

  13. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    NASA Astrophysics Data System (ADS)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Similar to other common asymmetric encryption schemes, RSA can be cracked by using a series of mathematical calculations. The private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
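    To make the work unit concrete: for a weak (toy-sized) modulus, recovering the private exponent reduces to factoring n and inverting e modulo φ(n). A minimal single-device sketch (illustrative only, not the paper's implementation; a real distributed search would partition the trial-division range across volunteer devices, which is not shown):

```python
import math

def crack_weak_rsa(n, e):
    """Recover the RSA private exponent d by trial-division factoring of a
    small modulus n. Feasible only for toy/weak keys; each contiguous range
    of trial divisors is the kind of work unit a volunteer device could take."""
    p = next(f for f in range(2, math.isqrt(n) + 1) if n % f == 0)
    q = n // p
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse of e (Python 3.8+)
    return d

n, e = 3233, 17                  # textbook toy key: p = 61, q = 53
d = crack_weak_rsa(n, e)
msg = 65
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg  # decryption with the recovered key round-trips
```

Real key sizes are far beyond trial division, which is precisely why distributing the search, or using sieving methods, becomes the interesting question.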

  14. Stopping computer crimes

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Two new books about intrusions and computer viruses remind us that attacks against our computers on networks are the actions of human beings. Cliff Stoll's book about the hacker who spent a year, beginning in Aug. 1986, attempting to use the Lawrence Berkeley Computer as a stepping-stone for access to military secrets is a spy thriller that illustrates the weaknesses of our password systems and the difficulties in compiling evidence against a hacker engaged in espionage. Pamela Kane's book about viruses that attack IBM PC's shows that viruses are the modern version of the old problem of a Trojan horse attack. It discusses the most famous viruses and their countermeasures, and it comes with a floppy disk of utility programs that will disinfect your PC and thwart future attack.

  15. Weakly Nonergodic Dynamics in the Gross-Pitaevskii Lattice

    NASA Astrophysics Data System (ADS)

    Mithun, Thudiyangal; Kati, Yagmur; Danieli, Carlo; Flach, Sergej

    2018-05-01

    The microcanonical Gross-Pitaevskii (also known as the semiclassical Bose-Hubbard) lattice model dynamics is characterized by a pair of energy and norm densities. The grand canonical Gibbs distribution fails to describe a part of the density space, due to the boundedness of its kinetic energy spectrum. We define Poincaré equilibrium manifolds and compute the statistics of microcanonical excursion times off them. The tails of the distribution functions quantify the proximity of the many-body dynamics to a weakly nonergodic phase, which occurs when the average excursion time is infinite. We find that a crossover to weakly nonergodic dynamics takes place inside the non-Gibbs phase, being unnoticed by the largest Lyapunov exponent. In the ergodic part of the non-Gibbs phase, the Gibbs distribution should be replaced by an unknown modified one. We relate our findings to the corresponding integrable limit, close to which the actions are interacting through a short range coupling network.

  16. The Kadomtsev-Petviashvili equation under rapid forcing

    NASA Astrophysics Data System (ADS)

    Moroz, Irene M.

    1997-06-01

    We consider the initial value problem for the forced Kadomtsev-Petviashvili equation (KP) when the forcing is assumed to be fast compared to the evolution of the unforced equation. This suggests the introduction of two time scales. Solutions to the forced KP are sought by expanding the dependent variable in powers of a small parameter, which is inversely related to the forcing time scale. The unforced system describes weakly nonlinear, weakly dispersive, weakly two-dimensional wave propagation and is studied in two forms, depending upon whether gravity dominates surface tension or vice versa. We focus on the effect that the forcing has on the one-lump solution to the KPI equation (where surface tension dominates) and on the one- and two-line soliton solutions to the KPII equation (when gravity dominates). Solutions to second order in the expansion are computed analytically for some specific choices of the forcing function, which are related to the choice of initial data.

  17. Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry E.

    2017-07-01

    We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
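    For reference, the classical (neutral) Ewens sampling formula that the paper generalizes assigns probability n!/θ^(n) · Π_j (θ/j)^{a_j}/a_j! to an allelic configuration (a_1, …, a_n) with n = Σ j·a_j, where θ^(n) is the rising factorial. A minimal sketch of that baseline only (illustrative; not the mixed mutation/selection ESF derived in the paper):

```python
import math

def ewens_prob(theta, a):
    """Classical Ewens sampling formula: probability that a sample of size
    n = sum(j * a_j) contains a_j allele types represented exactly j times."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = math.prod(theta + k for k in range(n))   # theta^(n)
    prob = math.factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        prob *= (theta / j) ** aj / math.factorial(aj)
    return prob

# For n = 3 the three possible configurations (a_1, a_2, a_3) exhaust the
# sample space, so their probabilities must sum to one:
theta = 1.5
configs = [(3, 0, 0), (1, 1, 0), (0, 0, 1)]
total = sum(ewens_prob(theta, a) for a in configs)
```

Summing over all configurations of a given n is a quick consistency check that also works for the generalized formulae once the modified weights are plugged in.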

  18. A hybridized formulation for the weak Galerkin mixed finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM), which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.

  19. A hybridized formulation for the weak Galerkin mixed finite element method

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu

    2016-01-14

    This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM), which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.

  20. Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach

    NASA Astrophysics Data System (ADS)

    Lo, Min-Tzu; Lee, Wen-Chung

    2014-05-01

    Many risk factors/interventions in epidemiologic/biomedical studies are of minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.

  1. COMOC 2: Two-dimensional aerodynamics sequence, computer program user's guide

    NASA Technical Reports Server (NTRS)

    Manhardt, P. D.; Orzechowski, J. A.; Baker, A. J.

    1977-01-01

    The COMOC finite element fluid mechanics computer program system is applicable to diverse problem classes. The two-dimensional aerodynamics sequence was established for solution of the potential and/or viscous and turbulent flowfields associated with subsonic flight of elementary two-dimensional isolated airfoils. The sequence consists of three specific flowfield options in COMOC for two-dimensional flows: the potential flow option, the boundary layer option, and the parabolic Navier-Stokes option. By sequencing through these options, it is possible to computationally construct a weak-interaction model of the aerodynamic flowfield. This report is the user's guide to operation of COMOC for the aerodynamics sequence.

  2. The changing landscape of astrostatistics and astroinformatics

    NASA Astrophysics Data System (ADS)

    Feigelson, Eric D.

    2017-06-01

    The history and current status of the cross-disciplinary fields of astrostatistics and astroinformatics are reviewed. Astronomers need a wide range of statistical methods for both data reduction and science analysis. With the proliferation of high-throughput telescopes, efficient large scale computational methods are also becoming essential. However, astronomers receive only weak training in these fields during their formal education. Interest in the fields is rapidly growing with conferences organized by scholarly societies, textbooks and tutorial workshops, and research studies pushing the frontiers of methodology. R, the premier language of statistical computing, can provide an important software environment for the incorporation of advanced statistical and computational methodology into the astronomical community.

  3. Integrated Giant Magnetoresistance Technology for Approachable Weak Biomagnetic Signal Detections

    PubMed Central

    Shen, Hui-Min; Hu, Liang; Fu, Xin

    2018-01-01

    With the extensive application of biomagnetic signals derived from active biological tissue in both clinical diagnoses and human-computer interaction, there is an increasing need for approachable weak biomagnetic sensing technology. The inherent merits of giant magnetoresistance (GMR) and its high integration with multiple technologies make it possible to detect weak biomagnetic signals with micron-sized, non-cooled and low-cost sensors, considering that the magnetic field intensity attenuates rapidly with distance. This paper focuses on the state of the art in integrated GMR technology for approachable biomagnetic sensing from the perspective of discipline fusion between them. The progress in integrated GMR to overcome the challenges in weak biomagnetic signal detection towards high-resolution portable applications is addressed. The various strategies for 1/f noise reduction and sensitivity enhancement in integrated GMR technology for sub-pT biomagnetic signal recording are discussed. In this paper, we review the developments of integrated GMR technology for in vivo/in vitro biomagnetic source imaging and demonstrate how integrated GMR can be utilized for biomagnetic field detection. Since the field sensitivity of integrated GMR technology is being pushed to fT/Hz^0.5 with focused effort, it is believed that integrated GMR technology will become the preferred choice for weak biomagnetic signal detection in the future. PMID:29316670

  4. Cross-correlation of weak lensing and gamma rays: implications for the nature of dark matter

    NASA Astrophysics Data System (ADS)

    Tröster, Tilman; Camera, Stefano; Fornasa, Mattia; Regis, Marco; van Waerbeke, Ludovic; Harnois-Déraps, Joachim; Ando, Shin'ichiro; Bilicki, Maciej; Erben, Thomas; Fornengo, Nicolao; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Kuijken, Konrad; Viola, Massimo

    2017-05-01

    We measure the cross-correlation between Fermi gamma-ray photons and over 1000 deg2 of weak lensing data from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), the Red Cluster Sequence Lensing Survey (RCSLenS), and the Kilo Degree Survey (KiDS). We present the first measurement of tomographic weak lensing cross-correlations and the first application of spectral binning to cross-correlations between gamma rays and weak lensing. The measurements are performed using an angular power spectrum estimator while the covariance is estimated using an analytical prescription. We verify the accuracy of our covariance estimate by comparing it to two internal covariance estimators. Based on the non-detection of a cross-correlation signal, we derive constraints on weakly interacting massive particle (WIMP) dark matter. We compute exclusion limits on the dark matter annihilation cross-section ⟨σ_ann v⟩, decay rate Γ_dec and particle mass m_DM. We find that in the absence of a cross-correlation signal, tomography does not significantly improve the constraining power of the analysis. Assuming a strong contribution to the gamma-ray flux due to small-scale clustering of dark matter and accounting for known astrophysical sources of gamma rays, we exclude the thermal relic cross-section for particle masses of m_DM ≲ 20 GeV.

  5. Integrated Giant Magnetoresistance Technology for Approachable Weak Biomagnetic Signal Detections.

    PubMed

    Shen, Hui-Min; Hu, Liang; Fu, Xin

    2018-01-07

    With the extensive application of biomagnetic signals derived from active biological tissue in both clinical diagnoses and human-computer interaction, there is an increasing need for approachable weak biomagnetic sensing technology. The inherent merits of giant magnetoresistance (GMR) and its high integration with multiple technologies make it possible to detect weak biomagnetic signals with micron-sized, non-cooled and low-cost sensors, considering that the magnetic field intensity attenuates rapidly with distance. This paper focuses on the state of the art in integrated GMR technology for approachable biomagnetic sensing from the perspective of discipline fusion between them. The progress in integrated GMR to overcome the challenges in weak biomagnetic signal detection towards high-resolution portable applications is addressed. The various strategies for 1/f noise reduction and sensitivity enhancement in integrated GMR technology for sub-pT biomagnetic signal recording are discussed. In this paper, we review the developments of integrated GMR technology for in vivo/in vitro biomagnetic source imaging and demonstrate how integrated GMR can be utilized for biomagnetic field detection. Since the field sensitivity of integrated GMR technology is being pushed to fT/Hz^0.5 with focused effort, it is believed that integrated GMR technology will become the preferred choice for weak biomagnetic signal detection in the future.

  6. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution does not increase in time.
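    A minimal sketch of such a scheme (my own illustrative code, for the linear model problem u_t + u_x = 0 with periodic boundaries, rather than a general nonlinear flux): second-order slopes limited by minmod keep the total variation, and hence the number of extrema, from increasing.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_advection(u, c):
    """One step of a second-order TVD scheme for u_t + u_x = 0
    on a periodic grid, with CFL number 0 <= c <= 1."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
    flux = u + 0.5 * (1.0 - c) * s   # upwind face value, second order
    return u - c * (flux - np.roll(flux, 1))
```

In smooth regions the limited slope recovers a second-order (Lax-Wendroff/Beam-Warming-type) flux, while at extrema the limiter returns zero and the scheme falls back to first-order upwinding, which is how it stays nonoscillatory.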

  7. Computational Neuroscience.

    ERIC Educational Resources Information Center

    Sejnowski, Terrence J.; And Others

    1988-01-01

    Describes the use of brain models to connect the microscopic level accessible by molecular and cellular techniques with the systems level accessible by the study of behavior. Discusses classes of brain models, and specific examples of such models. Evaluates the strengths and weaknesses of using brain modelling to understand human brain function.…

  8. The Gulf War on Computer: A Review of "Iraq Stack."

    ERIC Educational Resources Information Center

    Rattan, Dick

    1993-01-01

    Reviews a HyperCard stack designed for use in schools and at home. Describes the program as primarily a database of information on Iraq, Kuwait, and the Gulf War. Contends that the program is pedagogically weak and of marginal use in the classroom. (CFR)

  9. Courseware Review.

    ERIC Educational Resources Information Center

    Risley, John S.

    1983-01-01

    Describes computer program (available on diskette for Apple IIe/II-plus, Commodore PET/CBM, or Commodore 64) providing drill/practice on concepts of electric charge, electric current, and electric potential difference. A second diskette provides a test of fifteen multiple-choice questions, with option to print score and areas of weakness. (JM)

  10. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment, stationed in various attitudes with respect to the Sun and the Earth, will remain within a predetermined acceptable temperature range are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods which form the basis for various digital computer methods, as well as various numerical methods, are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
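    As a concrete instance of the numerical schemes such reports survey, the defining double-area integral F12 = (1/A1) ∫∫ cosθ1 cosθ2 / (π S²) dA2 dA1 can be evaluated with a midpoint rule; a minimal sketch (my own code, not taken from the report) for two directly opposed parallel square plates:

```python
import numpy as np

def view_factor_parallel_squares(a, c, n=20):
    """Midpoint-rule evaluation of the view factor between two
    directly opposed parallel squares of side a at separation c.
    Both surface normals are parallel to the z-axis, so
    cos(theta1) = cos(theta2) = c / S for patch distance S."""
    xs = (np.arange(n) + 0.5) * a / n        # patch centres
    dA = (a / n) ** 2                        # patch area
    X, Y = np.meshgrid(xs, xs)
    total = 0.0
    for x1, y1 in zip(X.ravel(), Y.ravel()):  # loop over plate-1 patches
        S2 = (X - x1) ** 2 + (Y - y1) ** 2 + c ** 2
        total += np.sum(c ** 2 / (np.pi * S2 ** 2)) * dA * dA
    return total / a ** 2                     # divide by A1
```

For unit squares one plate-width apart this converges toward the tabulated value of about 0.20, and it illustrates the accuracy-versus-cost trade-off such reports evaluate: halving the patch size quadruples the work per plate.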

  11. Photogrammetry on glaciers: Old and new knowledge

    NASA Astrophysics Data System (ADS)

    Pfeffer, W. T.; Welty, E.; O'Neel, S.

    2014-12-01

    In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery, but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimensional object-space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated, but no automated means is in hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers.
Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.

  12. Biological effects due to weak magnetic field on plants

    NASA Astrophysics Data System (ADS)

    Belyavskaya, N. A.

    2004-01-01

    Throughout the evolution process, Earth's magnetic field (MF, about 50 μT) was a natural component of the environment for living organisms. Biological objects flying on planned long-term interplanetary missions would experience much weaker magnetic fields, since the galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on the functioning of biological organisms are still insufficiently understood and are actively studied. Numerous experiments with seedlings of different plant species placed in a weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with controls. The proliferative activity and cell reproduction in the meristem of plant roots are reduced in a weak magnetic field. The cell reproductive cycle slows down due to the expansion of the G1 phase in many plant species (and of the G2 phase in flax and lentil roots), while other phases of the cell cycle remain relatively stable. In plant cells exposed to a weak magnetic field, the functional activity of the genome at the early pre-replicate period is shown to decrease. A weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At the ultrastructural level, changes in the distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids were observed in meristem cells of pea roots exposed to a weak magnetic field. Mitochondria were found to be very sensitive to a weak magnetic field: their size and relative volume in cells increase, the matrix becomes electron-transparent, and the cristae are reduced. Cytochemical studies indicate that cells of plant roots exposed to a weak magnetic field show Ca2+ over-saturation in all organelles and in the cytoplasm, unlike the control ones.
The data presented suggest that prolonged exposure of plants to a weak magnetic field may cause different biological effects at the cellular, tissue and organ levels. These may be functionally related to systems that regulate plant metabolism, including intracellular Ca2+ homeostasis. However, our understanding of the very complex fundamental mechanisms and sites of interaction between weak magnetic fields and biological systems is still incomplete and deserves sustained research effort.

  13. Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.

  14. Impaired control of weight bearing ankle inversion in subjects with chronic ankle instability.

    PubMed

    Terrier, R; Rose-Dulcina, K; Toschi, B; Forestier, N

    2014-04-01

    Previous studies have proposed that evertor muscle weakness represents an important factor affecting chronic ankle instability. For research purposes, ankle evertor strength is assessed by means of isokinetic evaluations. However, this methodology is constraining for daily clinical use. The present study proposes to assess ankle evertor muscle weakness using a new procedure, one that is easily accessible to rehabilitation specialists. To do so, we compared weight-bearing ankle inversion control between patients suffering from chronic ankle instability and healthy subjects. 12 healthy subjects and 11 patients suffering from chronic ankle instability performed repetitions of one-leg weight-bearing ankle inversion on a specific ankle destabilization device equipped with a gyroscope. Ankle inversion control was achieved by means of an eccentric recruitment of the evertor muscles. The instructions were to perform the ankle inversion as slowly as possible while resisting against the full body weight applied to the tested ankle. The data clearly showed higher angular inversion velocity peaks in patients suffering from chronic ankle instability. This illustrates an impaired control of weight-bearing ankle inversion and, by extension, an eccentric weakness of the evertor muscles. The present study supports the hypothesis of a link between the decrease of ankle joint stability and evertor muscle weakness. Moreover, it appears that the new parameter is of use in a clinical setting. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Weak-field multiphoton femtosecond coherent control in the single-cycle regime.

    PubMed

    Chuntonov, Lev; Fleischer, Avner; Amitay, Zohar

    2011-03-28

    Weak-field coherent phase control of atomic non-resonant multiphoton excitation induced by shaped femtosecond pulses is studied theoretically in the single-cycle regime. The carrier-envelope phase (CEP) of the pulse, which in the multi-cycle regime does not play any control role, is shown here to be a new effective control parameter whose effect is highly sensitive to the spectral position of the ultrabroadband spectrum. A rationally chosen position of the ultrabroadband spectrum coherently induces several groups of multiphoton transitions from the ground state to the excited state of the system: transitions involving only absorbed photons as well as Raman transitions involving both absorbed and emitted photons. The intra-group interference is controlled by the relative spectral phase of the different frequency components of the pulse, while the inter-group interference is controlled jointly by the CEP and the relative spectral phase. Specifically, non-resonant two- and three-photon excitation is studied in a simple model system within the perturbative frequency-domain framework. The developed intuition is then applied to weak-field multiphoton excitation of atomic cesium (Cs), where the simplified model is verified by a non-perturbative numerical solution of the time-dependent Schrödinger equation. We expect this work to serve as a basis for a new line of femtosecond coherent control experiments.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang

    Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.

  17. Geometric flow control of shear bands by suppression of viscous sliding

    NASA Astrophysics Data System (ADS)

    Sagapuram, Dinakar; Viswanathan, Koushik; Mahato, Anirban; Sundaram, Narayan K.; M'Saoubi, Rachid; Trumble, Kevin P.; Chandrasekar, Srinivasan

    2016-08-01

    Shear banding is a plastic flow instability with highly undesirable consequences for metals processing. While band characteristics have been well studied, general methods to control shear bands are presently lacking. Here, we use high-speed imaging and micro-marker analysis of flow in cutting to reveal the common fundamental mechanism underlying shear banding in metals. The flow unfolds in two distinct phases: an initiation phase followed by a viscous sliding phase in which most of the straining occurs. We show that the second sliding phase is well described by a simple model of two identical fluids being sheared across their interface. The equivalent shear band viscosity computed by fitting the model to experimental displacement profiles is very close in value to typical liquid metal viscosities. The observation of similar displacement profiles across different metals shows that specific microstructure details do not affect the second phase. This also suggests that the principal role of the initiation phase is to generate a weak interface that is susceptible to localized deformation. Importantly, by constraining the sliding phase, we demonstrate a material-agnostic method, passive geometric flow control, that achieves complete band suppression in systems which otherwise fail via shear banding.

  18. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.

  19. Functional connectivity changes detected with magnetoencephalography after mild traumatic brain injury

    PubMed Central

    Dimitriadis, Stavros I.; Zouridakis, George; Rezaie, Roozbeh; Babajani-Feremi, Abbas; Papanicolaou, Andrew C.

    2015-01-01

    Mild traumatic brain injury (mTBI) may affect normal cognition and behavior by disrupting the functional connectivity networks that mediate efficient communication among brain regions. In this study, we analyzed brain connectivity profiles from resting state Magnetoencephalographic (MEG) recordings obtained from 31 mTBI patients and 55 normal controls. We used phase-locking value estimates to compute functional connectivity graphs to quantify frequency-specific couplings between sensors at various frequency bands. Overall, normal controls showed a dense network of strong local connections and a limited number of long-range connections that accounted for approximately 20% of all connections, whereas mTBI patients showed networks characterized by weak local connections and strong long-range connections that accounted for more than 60% of all connections. Comparison of the two distinct general patterns at different frequencies using a tensor representation for the connectivity graphs and tensor subspace analysis for optimal feature extraction showed that mTBI patients could be separated from normal controls with 100% classification accuracy in the alpha band. These encouraging findings support the hypothesis that MEG-based functional connectivity patterns may be used as biomarkers that can provide more accurate diagnoses, help guide treatment, and monitor effectiveness of intervention in mTBI. PMID:26640764
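    The phase-locking value used above has a compact definition: the magnitude of the time-averaged complex phase difference between two sensors. A minimal sketch (assuming the inputs are already band-pass filtered; SciPy's Hilbert transform supplies the instantaneous phases):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two time series: 1.0 for a constant phase lag,
    near 0.0 for independent phases."""
    phase_x = np.angle(hilbert(x))   # instantaneous phase of x
    phase_y = np.angle(hilbert(y))   # instantaneous phase of y
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))
```

Computing this for every sensor pair within each frequency band yields the kind of frequency-specific connectivity graph analyzed in the study.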

  20. You Should Be the Specialist! Weak Mental Rotation Performance in Aviation Security Screeners - Reduced Performance Level in Aviation Security with No Gender Effect.

    PubMed

    Krüger, Jenny K; Suchan, Boris

    2016-01-01

    Aviation security screeners analyze a large number of X-ray images per day and seem to be experts in mentally rotating diverse kinds of visual objects. A robust gender effect, with men outperforming women on the Vandenberg & Kuse mental rotation task, has been well documented in recent years. In addition, it has been shown that training can positively influence overall task performance. Considering this, the aim of the present study was to investigate whether security screeners show better performance on the Mental Rotation Test (MRT) independently of gender. Forty-seven security screeners of both sexes from two German airports were examined with a computer-based MRT. Their performance was compared to a large sample of control subjects. The well-known gender effect favoring men on mental rotation was significant within the control group. However, the security screeners did not show any sex differences, suggesting an effect of training and professional performance. Surprisingly, this specialized group showed a lower level of overall MRT performance than the control participants. Possible aviation-related influences, such as secondary effects of shift work or expertise, which may cumulatively cause this result, are discussed.

  1. Deformation of Fold-and-Thrust Belts above a Viscous Detachment: New Insights from Analogue Modelling Experiments

    NASA Astrophysics Data System (ADS)

    Nogueira, Carlos R.; Marques, Fernando O.

    2015-04-01

    Theoretical and experimental studies on fold-and-thrust belts (FTB) have shown that, under Coulomb conditions, deformation of brittle thrust wedges above a dry frictional basal contact is characterized by dominant frontward-vergent thrusts (forethrusts), with thrust spacing and taper angle directly influenced by the basal strength (an increase in basal strength leading to narrower thrust spacing and higher taper angles), whereas thrust wedges deformed above a weak viscous detachment, such as salt, show a more symmetric thrust style (no prevailing vergence of thrusting) with wider thrust spacing and shallower wedges. However, different deformation patterns can be found in this last group of thrust wedges, both in nature and experimentally. We therefore focused on the strength (friction) of the wedge basal contact, the basal detachment. We used a parallelepiped box with four fixed walls and one mobile wall that worked as a vertical piston driven by a computer-controlled stepping motor. Fine dry sand was used as the analogue of brittle rocks, and silicone putty (PDMS) with Newtonian behaviour as the analogue of the weak viscous detachment. To investigate the effect of basal-contact strength on thrust wedge deformation, two configurations were used: 1) a horizontal sand pack with a dry frictional basal contact; and 2) a horizontal sand pack above a horizontal PDMS layer, acting as a weak viscous basal contact. Results of the experiments show that the model with a dry frictional basal detachment supports the predictions for Coulomb wedges, showing a narrow wedge with dominant frontward vergence of thrusting, close spacing between forethrusts (FTs) and a high taper angle. 
The model with a weak viscous basal detachment shows that: 1) forethrusts are dominant, clearly showing an imbricate asymmetric geometry, with wider-spaced thrusts than in the dry frictional basal model; 2) after FT initiation, movement on the thrust can last up to 15% of model shortening, leading to a large amount of displacement along the FT; 3) intermittent reactivation of FTs also occurs despite the steepening of the FT plane and the existence of new FTs ahead, creating a high critical taper angle; 4) injection of PDMS from the basal weak layer into the FT planes also contributes to the longevity of FTs and to the high critical taper angle; and 5) vertical sand thickening in the hanging block also adds to the taper angle.

  2. Transient AC voltage related phenomena for HVDC schemes connected to weak AC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilotto, L.A.S.; Szechtman, M.; Hammad, A.E.

    1992-07-01

    In this paper a didactic explanation of phenomena associated with voltage stability at HVDC terminals is presented. Conditions leading to ac voltage collapse problems are identified. A mechanism that excites control-induced voltage oscillations is shown. The voltage stability factor is used for obtaining the maximum power limits of ac/dc systems operating with different control strategies. Correlation to Pd × Id curves is given. Solutions for eliminating the risks of voltage collapse and for avoiding control-induced oscillations are discussed. The results are supported by detailed digital simulations of a weak ac/dc system using EMTP.

  3. Are computer and cell phone use associated with body mass index and overweight? A population study among twin adolescents.

    PubMed

    Lajunen, Hanna-Reetta; Keski-Rahkonen, Anna; Pulkkinen, Lea; Rose, Richard J; Rissanen, Aila; Kaprio, Jaakko

    2007-02-26

    Overweight in children and adolescents has reached the dimensions of a global epidemic during recent years. Simultaneously, information and communication technology use has rapidly increased. A population-based sample of Finnish twins born in 1983-1987 (N = 4098) was assessed by self-report questionnaires at 17 y during 2000-2005. The association of overweight (defined by Cole's BMI-for-age cut-offs) with computer and cell phone use and ownership was analyzed by logistic regression, and their association with BMI by linear regression models. The effect of twinship was taken into account by correcting for clustered sampling of families. All models were adjusted for gender, physical exercise, and parents' education and occupational class. The proportion of adolescents who did not have a computer at home decreased from 18% to 8% from 2000 to 2005. Compared to them, having a home computer (without an Internet connection) was associated with a higher risk of overweight (odds ratio 2.3, 95% CI 1.4 to 3.8) and higher BMI (beta coefficient 0.57, 95% CI 0.15 to 0.98). However, having a computer with an Internet connection was not associated with weight status. Belonging to the highest quintile (OR 1.8, 95% CI 1.2 to 2.8) and second-highest quintile (OR 1.6, 95% CI 1.1 to 2.4) of weekly computer use was positively associated with overweight. The proportion of adolescents without a personal cell phone decreased from 12% to 1% from 2000 to 2005. There was a positive linear trend of increasing monthly phone bill with BMI (beta 0.18, 95% CI 0.06 to 0.30), but the association of the cell phone bill with overweight was very weak. Time spent using a home computer was associated with an increased risk of overweight. Cell phone use correlated weakly with BMI. Increasing use of information and communication technology may be related to the obesity epidemic among adolescents.
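    The odds ratios quoted above come from covariate-adjusted logistic regression; for intuition, the crude 2×2-table version with a Wald confidence interval can be computed as follows (the counts are illustrative, not the study's data):

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log-OR
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi
```

In a fitted logistic model, the adjusted odds ratio is instead exp(beta) for the exposure coefficient, with the CI built from the coefficient's standard error in the same way.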

  4. Rubrics and Exemplars in Text-Conferencing

    ERIC Educational Resources Information Center

    Zahara, Allan

    2005-01-01

    The author draws on his K-12 teaching experiences in analyzing the strengths and weaknesses of asynchronous, text-based conferencing in online education. Issues relating to Web-based versus client-driven systems in computer-mediated conferencing (CMC) are examined. The paper also discusses pedagogical and administrative implications of choosing a…

  5. Concept Learning and Heuristic Classification in Weak-Theory Domains

    DTIC Science & Technology

    1990-03-01

    [Abstract not recoverable: the OCR fragment contains rule expressions for diagnosing age- and noise-induced cochlear hearing loss (e.g. age > 60, history of noise exposure, mild air-conduction loss, poor speech discrimination, acoustic neuroma) together with reference-list fragments, including a citation to Machine Learning, 4, 1990, and R. T. Duran, "Concept learning with incomplete data sets," Master's thesis.]

  6. Full Angular Profile of the Coherent Polarization Opposition Effect

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Luck, Jean-Marc; Nieuwenhuizen, Theo M.

    1999-01-01

    We use the rigorous vector theory of weak photon localization for a semi-infinite medium composed of nonabsorbing Rayleigh scatterers to compute the full angular profile of the polarization opposition effect. The latter is caused by coherent backscattering of unpolarized incident light and accompanies the renowned backscattering intensity peak.

  7. UNIX Systems in Higher Education: A Paradoxical Success.

    ERIC Educational Resources Information Center

    McCredie, John W.

    1983-01-01

    Bell Laboratories' much acclaimed UNIX operating system is widely used in educational computing environments. Discusses history of the system, system features and weaknesses, and policy issues. Also discusses some ways UNIX systems are used and recent developments at American Telephone and Telegraph (AT&T) impacting UNIX systems. (JN)

  8. An Experimental Study of the Emergence of Human Communication Systems

    ERIC Educational Resources Information Center

    Galantucci, Bruno

    2005-01-01

    The emergence of human communication systems is typically investigated via 2 approaches with complementary strengths and weaknesses: naturalistic studies and computer simulations. This study was conducted with a method that combines these approaches. Pairs of participants played video games requiring communication. Members of a pair were…

  9. Identifying the Key Weaknesses in Network Security at Colleges.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2000-01-01

    A new study identifies and ranks the 10 security gaps responsible for most outsider attacks on college computer networks. The list is intended to help campus system administrators establish priorities as they work to increase security. One network security expert urges that institutions utilize multiple security layers. (DB)

  10. Simulation and virtual reality in medical education and therapy: a protocol.

    PubMed

    Roy, Michael J; Sticha, Deborah L; Kraus, Patricia L; Olsen, Dale E

    2006-04-01

    Continuing medical education has historically been provided primarily by didactic lectures, though adult learners prefer experiential or self-directed learning. Young physicians have extensive experience with computer-based or "video" games, priming them for medical education--and treating their patients--via new technologies. We report our use of standardized patients (SPs) to educate physicians on the diagnosis and treatment of biological and chemical warfare agent exposure. We trained professional actors to serve as SPs representing exposure to biological agents such as anthrax and smallpox. We rotated workshop participants through teaching stations to interview, examine, diagnose and treat SPs. We also trained SPs to simulate a chemical mass casualty (MASCAL) incident. Workshop participants worked together to treat MASCAL victims, followed by discussion of key teaching points. More recently, we developed computer-based simulation (CBS) modules of patients exposed to biological agents. We compare the strengths and weaknesses of CBS vs. live SPs. Finally, we detail plans for a randomized controlled trial to assess the efficacy of virtual reality (VR) exposure therapy compared to pharmacotherapy for post-traumatic stress disorder (PTSD). PTSD is associated with significant disability and healthcare costs, which may be ameliorated by the identification of more effective therapy.

  11. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of regularization and cross-gradient but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
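
    The paper's continuous, PDE-constrained formulation is beyond a short sketch, but the structure of a BFGS-based regularized inversion can be illustrated on a toy discrete linear problem. Everything here (the forward operator G, the model m_true, the Tikhonov weight alpha) is a hypothetical stand-in, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

# Toy inversion: recover model m from data d = G @ m_true by
# minimizing a Tikhonov-regularized least-squares cost with BFGS.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))                  # hypothetical forward operator
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true                                # noise-free synthetic data
alpha = 1e-3                                  # regularization weight

def cost(m):
    r = G @ m - d
    return r @ r + alpha * (m @ m)

def grad(m):
    return 2.0 * G.T @ (G @ m - d) + 2.0 * alpha * m

res = minimize(cost, np.zeros(5), jac=grad, method="BFGS")
print(res.x.round(3))  # close to m_true for small alpha
```

    In the paper, evaluating the cost and gradient is where the PDE solves enter (potential fields and adjoint defects); in this toy sketch they are just dense matrix products.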

  12. Correlating Lagrangian structures with forcing in two-dimensional flow

    NASA Astrophysics Data System (ADS)

    Ouellette, Nicholas; Hogg, Charlie; Liao, Yang

    2015-11-01

    Lagrangian coherent structures (LCSs) are the dominant transport barriers in unsteady, aperiodic flows, and their role in organizing mixing and transport has been well documented. However, nearly all that is known about LCSs has been gleaned from passive observations: they are computed in a post-processing step after a flow has been observed, and used to understand why the mixing and transport proceeded as it did. Here, we instead take a first step toward controlling the presence or locations of LCSs by studying the relationship between LCSs and external forcing in an experimental quasi-two-dimensional weakly turbulent flow. We find that the likelihood of finding a repelling LCS at a given location is positively correlated with the mean strain rate injected at that point and negatively correlated with the mean speed, and that it is not correlated with the vorticity. We also find that the mean time between successive LCSs appearing at a fixed location is related to the structure of the forcing field. Finally, we demonstrate a surprising difference in our results between LCSs computed forward and backward in time, with forward-time (repelling) LCSs showing much more correlation with the forcing than backward-time (attracting) LCSs.

  13. Report on Project Action Sheet PP05 task 3 between the U.S. Department of Energy and the Republic of Korea Ministry of Education, Science, and Technology (MEST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, Mark Kamerer

    2013-01-01

    This report documents the results of Task 3 of Project Action Sheet PP05 between the United States Department of Energy (DOE) and the Republic of Korea (ROK) Ministry of Education, Science, and Technology (MEST) for Support with Review of an ROK Risk Evaluation Process. This task was to have Sandia National Laboratories collaborate with the Korea Institute of Nuclear Nonproliferation and Control (KINAC) on several activities concerning how to determine the Probability of Neutralization, PN, and the Probability of System Effectiveness, PE, to include: providing descriptions on how combat simulations are used to determine PN and PE; comparisons of the strengths and weaknesses of two neutralization models (the Neutralization.xls spreadsheet model versus the Brief Adversary Threat-Loss Estimator (BATLE) software); and demonstrating how computer simulations can be used to determine PN. Note that the computer simulation used for the demonstration was the Scenario Toolkit And Generation Environment (STAGE) simulation, which is a stand-alone synthetic tactical simulation sold by Presagis Canada Incorporated. The demonstration is provided in a separate Audio Video Interleave (.AVI) file.

  14. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.

    PubMed

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-01-01

    Vector-like quarks are featured by a wealth of beyond the Standard Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at the leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak or with a Higgs boson in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes not envisaged so far.

  15. Quantification of sensitivity and resistance of breast cancer cell lines to anti-cancer drugs using GR metrics

    PubMed Central

    Hafner, Marc; Heiser, Laura M.; Williams, Elizabeth H.; Niepel, Mario; Wang, Nicholas J.; Korkola, James E.; Gray, Joe W.; Sorger, Peter K.

    2017-01-01

    Traditional means for scoring the effects of anti-cancer drugs on the growth and survival of cell lines are based on relative cell number in drug-treated and control samples and are seriously confounded by unequal division rates arising from natural biological variation and differences in culture conditions. This problem can be overcome by computing drug sensitivity on a per-division basis. The normalized growth rate inhibition (GR) approach yields per-division metrics for drug potency (GR50) and efficacy (GRmax) that are analogous to the more familiar IC50 and Emax values. In this work, we report GR-based, proliferation-corrected, drug sensitivity metrics for ~4,700 pairs of breast cancer cell lines and perturbagens. Such data are broadly useful in understanding the molecular basis of therapeutic response and resistance. Here, we use them to investigate the relationship between different measures of drug sensitivity and conclude that drug potency and efficacy exhibit high variation that is only weakly correlated. To facilitate further use of these data, computed GR curves and metrics can be browsed interactively at http://www.GRbrowser.org/. PMID:29112189
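
    The GR value underlying the GR50/GRmax metrics has a simple closed form: GR = 2^(log2(x/x0) / log2(x_ctrl/x0)) - 1, the ratio of treated to untreated growth rates mapped so that 1 means no effect, 0 means complete growth arrest, and negative values mean cell loss. A minimal sketch of the per-time-point computation (cell counts here are illustrative only):

```python
import math

def gr_value(x_treated, x_ctrl, x0):
    """Normalized growth rate inhibition (GR) value:
    x_treated -- cell count after treatment,
    x_ctrl    -- untreated control count at the same time point,
    x0        -- count at the time of treatment."""
    k_ratio = math.log2(x_treated / x0) / math.log2(x_ctrl / x0)
    return 2 ** k_ratio - 1

# Illustrative counts: controls double from 100 to 200 cells.
print(gr_value(200, 200, 100))  # no drug effect        -> GR = 1.0
print(gr_value(100, 200, 100))  # complete growth arrest -> GR = 0.0
print(gr_value(50, 200, 100))   # cell loss              -> GR = -0.5
```

    Because the treated growth rate is normalized by the control growth rate, fast- and slow-dividing cell lines become directly comparable, which is the correction the abstract describes.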

  16. Mass-corrections for the conservative coupling of flow and transport on collocated meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich

    2016-01-15

    Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite-elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions to obtain local or even strong mass-conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.

  17. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza“, Via A. Scarpa 14, 00161 Roma; Atzeni, S.

    Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms as well as a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  18. Shale Failure Mechanics and Intervention Measures in Underground Coal Mines: Results From 50 Years of Ground Control Safety Research

    PubMed Central

    2015-01-01

    Ground control research in underground coal mines has been ongoing for over 50 years. One of the most problematic issues in underground coal mines is roof failures associated with weak shale. This paper will present a historical narrative on the research the National Institute for Occupational Safety and Health has conducted in relation to rock mechanics and shale. This paper begins by first discussing how shale is classified in relation to coal mining. Characterizing and planning for weak roof sequences is an important step in developing an engineering solution to prevent roof failures. Next, the failure mechanics associated with the weak characteristics of shale will be discussed. Understanding these failure mechanics also aids in applying the correct engineering solutions. The various solutions that have been implemented in the underground coal mining industry to control the different modes of failure will be summarized. Finally, a discussion on current and future research relating to rock mechanics and shale is presented. The overall goal of the paper is to share the collective ground control experience of controlling roof structures dominated by shale rock in underground coal mining. PMID:26549926

  19. Shale Failure Mechanics and Intervention Measures in Underground Coal Mines: Results From 50 Years of Ground Control Safety Research.

    PubMed

    Murphy, M M

    2016-02-01

    Ground control research in underground coal mines has been ongoing for over 50 years. One of the most problematic issues in underground coal mines is roof failures associated with weak shale. This paper will present a historical narrative on the research the National Institute for Occupational Safety and Health has conducted in relation to rock mechanics and shale. This paper begins by first discussing how shale is classified in relation to coal mining. Characterizing and planning for weak roof sequences is an important step in developing an engineering solution to prevent roof failures. Next, the failure mechanics associated with the weak characteristics of shale will be discussed. Understanding these failure mechanics also aids in applying the correct engineering solutions. The various solutions that have been implemented in the underground coal mining industry to control the different modes of failure will be summarized. Finally, a discussion on current and future research relating to rock mechanics and shale is presented. The overall goal of the paper is to share the collective ground control experience of controlling roof structures dominated by shale rock in underground coal mining.

  20. Shale Failure Mechanics and Intervention Measures in Underground Coal Mines: Results From 50 Years of Ground Control Safety Research

    NASA Astrophysics Data System (ADS)

    Murphy, M. M.

    2016-02-01

    Ground control research in underground coal mines has been ongoing for over 50 years. One of the most problematic issues in underground coal mines is roof failures associated with weak shale. This paper will present a historical narrative on the research the National Institute for Occupational Safety and Health has conducted in relation to rock mechanics and shale. This paper begins by first discussing how shale is classified in relation to coal mining. Characterizing and planning for weak roof sequences is an important step in developing an engineering solution to prevent roof failures. Next, the failure mechanics associated with the weak characteristics of shale will be discussed. Understanding these failure mechanics also aids in applying the correct engineering solutions. The various solutions that have been implemented in the underground coal mining industry to control the different modes of failure will be summarized. Finally, a discussion on current and future research relating to rock mechanics and shale is presented. The overall goal of the paper is to share the collective ground control experience of controlling roof structures dominated by shale rock in underground coal mining.

  1. Strategic model of national rabies control in Korea.

    PubMed

    Cheong, Yeotaek; Kim, Bongjun; Lee, Ki Joong; Park, Donghwa; Kim, Sooyeon; Kim, Hyeoncheol; Park, Eunyeon; Lee, Hyeongchan; Bae, Chaewun; Oh, Changin; Park, Seung-Yong; Song, Chang-Seon; Lee, Sang-Won; Choi, In-Soo; Lee, Joong-Bok

    2014-01-01

    Rabies is an important zoonosis in the public and veterinary health arenas. This article provides information on the current rabies outbreak situation, analyzes the current national rabies control system, reviews the weaknesses of the national rabies control strategy, and identifies an appropriate solution to manage the current situation. The current rabies outbreak was shown to extend from rural areas to urban regions. Moreover, the situation worldwide demonstrates that each nation struggles to prevent or control rabies. Proper application and execution of the rabies control program require overcoming the existing weaknesses. Bait vaccines and other complex programs are suggested to prevent rabies transmission or infection. Accelerating the rabies control strategy also requires supplementing current policy and public information. In addition, these prevention strategies should be executed over a mid- to long-term period to control rabies.

  2. A Quantum Proxy Weak Blind Signature Scheme Based on Controlled Quantum Teleportation

    NASA Astrophysics Data System (ADS)

    Cao, Hai-Jing; Yu, Yao-Feng; Song, Qin; Gao, Lan-Xiang

    2015-04-01

    Proxy blind signatures are applied to electronic payment systems, electronic voting systems, mobile agent systems, Internet security, etc. A quantum proxy weak blind signature scheme is proposed in this paper. It is based on controlled quantum teleportation, with a five-qubit entangled state functioning as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, so it can guarantee not only the unconditional security of the scheme but also the anonymity of the message's owner.

  3. Post hoc support vector machine learning for impedimetric biosensors based on weak protein-ligand interactions.

    PubMed

    Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S

    2018-04-30

    Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as, or better than, the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All codes were based on scikit-learn, an open source software machine learning library in the Python language, and were processed in Jupyter notebook, an open-source web application for Python.
The tool can easily be integrated with the mobile biosensor equipment for rapid detection, facilitating use by a broad range of impedimetric biosensor users. This post hoc analysis tool can serve as a launchpad for the convergence of nanobiosensors in planetary health monitoring applications based on mobile phone hardware.
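
    The abstract names scikit-learn as the basis of the classifier. A minimal sketch of that pattern on synthetic impedance-like features; the feature layout, values, and class means below are hypothetical stand-ins, not the paper's dataset or pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training set: each row is a small feature vector
# extracted from an impedance spectrum (e.g. |Z| at a few
# frequencies); labels mark analyte present (1) or absent (0).
rng = np.random.default_rng(1)
n = 200
X_absent = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.2, size=(n, 3))
X_present = rng.normal(loc=[1.6, 2.6, 3.6], scale=0.2, size=(n, 3))
X = np.vstack([X_absent, X_present])
y = np.array([0] * n + [1] * n)

# SVM classifier in lieu of equivalent circuit analysis
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

    The low computational footprint the authors emphasize comes from the fact that, once trained, classifying a new spectrum is a handful of kernel evaluations rather than an iterative circuit-model fit.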

  4. Finite-data-size study on practical universal blind quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Li, Qiong

    2018-07-01

    The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. However, in the practical protocol, a finite data size will influence the preparation efficiency of remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied for finite data sizes. The issue of its statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case with statistical fluctuation. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case in this statistical fluctuation situation.
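
    Finite-size effects of the kind studied here are typically quantified with concentration bounds: observed counting rates deviate from their expectations by an amount that shrinks with the number of pulses. A generic sketch using a Hoeffding-style bound (illustrative only, not the paper's exact fluctuation analysis):

```python
import math

def hoeffding_deviation(n, eps=1e-10):
    """Hoeffding bound: with probability at least 1 - eps, a rate
    estimated from n samples deviates from its expectation by less
    than this amount."""
    return math.sqrt(math.log(1.0 / eps) / (2.0 * n))

# Fluctuations shrink as 1/sqrt(n): more pulses, tighter estimates,
# and hence parameter estimates closer to the asymptotic case.
for n in (10**4, 10**5, 10**6):
    print(n, round(hoeffding_deviation(n), 5))
```

    This is why the finite-data protocol approaches the asymptotic one only as the block size grows, and why extra decoy states help: they tighten the estimates that the fluctuation terms enter.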

  5. Corruption and Coercion: University Autonomy versus State Control

    ERIC Educational Resources Information Center

    Osipian, Ararat L.

    2008-01-01

    A substantial body of literature considers excessive corruption an indicator of a weak state. However, in nondemocratic societies, corruption--whether informally approved, imposed, or regulated by public authorities--is often an indicator of a vertical power rather than an indicator of a weak state. This article explores the interrelations between…

  6. Is Attention Impaired in ADHD?

    ERIC Educational Resources Information Center

    Wilding, John

    2005-01-01

    Explanations of Attention Deficit Hyperactivity Disorder (ADHD) in terms of a weakness in Executive Function (EF) or related concepts, such as inhibition, are briefly reviewed. Some alternative views are considered, in particular a proposal by Manly and others that ADHD is a weakness primarily of sustained attention (plus control of attention),…

  7. Exploring the challenges faced by polytechnic students

    NASA Astrophysics Data System (ADS)

    Matore, Mohd Effendi @ Ewan Mohd; Khairani, Ahmad Zamri

    2015-02-01

    This study aims to identify further challenges faced by students in seven polytechnics in Malaysia, as a continuation of previous research that had identified 52 main challenges using the Rasch Model. This exploratory study focuses on challenges that are not included in the Mooney Problem Checklist (MPCL). A total of 121 polytechnic students submitted 183 written responses to the open questions provided, and 252 students responded to dichotomous questions giving a student's perspective on the challenges faced. The data were analysed qualitatively using NVivo 8.0. The findings showed that students from Politeknik Seberang Perai (PSP) gave the most responses, 56 (30.6%), and Politeknik Metro Kuala Lumpur (PMKL) the fewest, 2 (1.09%). Five dominant challenges were identified: the English language (32, 17.5%), learning (14, 7.7%), vehicles (13, 7.1%), information and communication technology (ICT) (13, 7.1%), and peers (11, 6.0%). This article, however, focuses on three of these challenges, namely English language, vehicles, and computers and ICT, as the challenges of learning and peers had been analysed in the previous MPCL work. The English language challenge concerned weaknesses in speech and fluency. The computer and ICT challenge covered weaknesses in mastering ICT and computers, as well as computer breakdowns and low-performance computers. The vehicle challenge emphasized the lack of vehicles to attend lectures and travel elsewhere, the lack of transport services at the polytechnic, and not having a valid driving license. These challenges are very relevant and need to be discussed in the effort to prepare polytechnics for their transformation process.

  8. Determining the activation of gluteus medius and the validity of the single leg stance test in chronic, nonspecific low back pain.

    PubMed

    Penney, Tracy; Ploughman, Michelle; Austin, Mark W; Behm, David G; Byrne, Jeannette M

    2014-10-01

    To determine the activation of the gluteus medius in persons with chronic, nonspecific low back pain compared with that in control subjects, and to determine the association of the clinical rating of the single leg stance (SLS) with chronic low back pain (CLBP) and gluteus medius weakness. Cohort-control comparison. Academic research laboratory. Convenience sample of people (n=21) with CLBP (>12wk) recruited by local physiotherapists, and age- and sex-matched controls (n=22). Subjects who received specific pain diagnoses were excluded. Not applicable. Back pain using the visual analog scale (mm); back-related disability using the Oswestry Back Disability Index (%); strength of gluteus medius measured using a hand dynamometer (N/kg); SLS test; gluteus medius onset and activation using electromyography during unipedal stance on a forceplate. Individuals in the CLBP group exhibited significant weakness in the gluteus medius compared with controls (right, P=.04; left, P=.002). They also had more pain (CLBP: mean, 20.50mm; 95% confidence interval [CI], 13.11-27.9mm; control subjects: mean, 1.77mm; 95% CI, -.21 to 3.75mm) and back-related disability (CLBP: mean, 18.52%; 95% CI, 14.46%-22.59%; control subjects: mean, .68%; 95% CI, -.41% to 1.77%), and reported being less physically active. Weakness was accompanied by increased gluteus medius activation during unipedal stance (R=.50, P=.001) but by no difference in muscle onset times. Although greater gluteus medius weakness was associated with greater pain and disability, there was no difference in muscle strength between those scoring positive and negative on the SLS test (right: F=.002, P=.96; left: F=1.75, P=.19). Individuals with CLBP had weaker gluteus medius muscles than control subjects without back pain. Even though there was no significant difference in onset time of the gluteus medius when moving to unipedal stance between the groups, the CLBP group had greater gluteus medius activation.
A key finding was that a positive SLS test did not distinguish the CLBP group from the control group, nor was it a sign of gluteus medius weakness. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. Hybrid inverter for HVDC/weak AC system interconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tam, K.S.

    1985-01-01

    The concept of the hybrid converter is introduced. By independently controlling a naturally commutated converter (NCC) and an artificially commutated converter (ACC), real power and reactive power can be controlled independently. Alternatively, the ac bus voltage can be regulated without affecting the real power transfer. Independent control is feasible only within certain operating boundaries. Twelve pulse operation, sequential control, and complementary circuits may be viewed as variations of the hybrid converter. The concept of the hybrid converter is demonstrated by digital simulation. At the current state of technology, the NCC is best implemented by a 6-pulse bridge using thyristors as the switching elements. A survey of power electronics applicable to HVDC applications reveals that the capacitively commutated current-sourced converters are either technically or economically better than the other alternatives for the implementation of the ACC. The digital simulation results show that the problems of operating an HVDC system into a weak ac system can be solved by using a hybrid inverter. A new control scheme, the zero Q control, is developed. With no reactive power interaction between the dc system and the ac system, the stability of the HVDC/weak ac system operation is significantly improved. System start-up and fault recovery are fast and stable.

  10. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition.

    PubMed

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A

    2016-08-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
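
    The role of noise in converting all-or-none recruitment into gradual recruitment can be illustrated with a toy threshold unit: Gaussian background noise of standard deviation sigma turns a step-like firing response into a smooth, graded one. This is a hedged caricature of the effect (theta and sigma are illustrative parameters), not the paper's conductance-based dynamic-clamp model:

```python
import math

def recruitment_prob(drive, theta=1.0, sigma=0.0):
    # Probability that a threshold unit fires, given its mean drive.
    # sigma = 0: all-or-none response (narrow dynamic range).
    # sigma > 0: background noise smooths the threshold into a graded
    # curve, so the unit is recruited gradually (broad dynamic range).
    if sigma == 0:
        return 1.0 if drive >= theta else 0.0
    return 0.5 * (1.0 + math.erf((drive - theta) / (sigma * math.sqrt(2.0))))
```

    In this caricature, reducing the gain (as feedforward inhibition does) corresponds to rescaling the drive axis, which shifts the population's operating range without saturating it.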

  11. Design of an Internal Model Control strategy for single-phase grid-connected PWM inverters and its performance analysis with a non-linear local load and weak grid.

    PubMed

    Chaves, Eric N; Coelho, Ernane A A; Carvalho, Henrique T M; Freitas, Luiz C G; Júnior, João B V; Freitas, Luiz C

    2016-09-01

    This paper presents the design of a controller based on Internal Model Control (IMC) applied to a grid-connected single-phase PWM inverter. The mathematical modeling of the inverter and the LCL output filter, used to design the 1-DOF IMC controller, is presented, and the decoupling of the grid voltage by a feedforward strategy is analyzed. A Proportional-Resonant (P+Res) controller was used to control the same plant in the experimental tests, enabling a comparison of the IMC and P+Res performances and an evaluation of the proposed control strategy. The results are presented for typical conditions, for a weak grid, and for a non-linear local load, in order to verify the behavior of the controller in such situations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. OH-LIF measurement of H2/O2/N2 flames in a micro flow reactor with a controlled temperature profile

    NASA Astrophysics Data System (ADS)

    Shimizu, T.; Nakamura, H.; Tezuka, T.; Hasegawa, S.; Maruta, K.

    2014-11-01

    This paper presents combustion and ignition characteristics of H2/O2/N2 flames in a micro flow reactor with a controlled temperature profile. OH-LIF measurements were conducted to capture flame images. Flame responses were investigated for variable inlet flow velocity, U, and equivalence ratio, phi. Three kinds of flame responses were observed experimentally as the inlet flow velocity varied: stable flat flames (normal flames) in the high inlet flow velocity regime; unstable flames called Flames with Repetitive Extinction and Ignition (FREI) in the intermediate flow velocity regime; and stable weak flames in the low flow velocity regime, at phi = 0.6, 1.0 and 1.2. On the other hand, a weak flame was not observed at phi = 3.0 by OH-LIF measurement. Computed OH mole fractions were lower at the rich condition than at the stoichiometric and lean conditions. To examine this response of the OH signal to equivalence ratio, a rate-of-production analysis was conducted, and four major reactions contributing to OH production were identified: R3 (O + H2 <=> H + OH); R38 (H + O2 <=> O + OH); R46 (H + HO2 <=> 2OH); and R86 (2OH <=> O + H2O). Three of these, R3, R38 and R46, did not show a significant difference in the rate of OH production across equivalence ratios. On the other hand, the rate of OH production from R86 at phi = 3.0 was much lower than those at phi = 0.6 and 1.0. Therefore, R86 was considered to be a key reaction for the reduced OH production at phi = 3.0.

  13. X-ray attenuation of the liver and kidney in cats considered at varying risk of hepatic lipidosis.

    PubMed

    Lam, Richard; Niessen, Stijn J; Lamb, Christopher R

    2014-01-01

    X-ray attenuation of the liver has been measured using computed tomography (CT) and reported to decrease in cats with experimentally induced hepatic lipidosis. To assess the clinical utility of this technique, medical records and noncontrast CT scans of a series of cats were retrospectively reviewed. A total of 112 cats met inclusion criteria and were stratified into three hepatic lipidosis risk groups. Group 1 cats were considered low-risk based on no history of inappetence or weight loss, and normal serum chemistry values; Group 2 cats were considered intermediate risk based on weight loss, serum hepatic enzymes above normal limits, or reasonably controlled diabetes mellitus; and Group 3 cats were considered high risk based on poorly controlled diabetes mellitus due to hypersomatotropism. Mean CT attenuation values (Hounsfield units, HU) were measured using regions of interest placed within the liver and cranial pole of the right kidney. Hepatic and renal attenuation were weakly positively correlated with each other (r = 0.2, P = 0.03) and weakly negatively correlated with body weight (r = -0.21, P = 0.05, and r = -0.34, P = 0.001, respectively). Mean (SD) hepatic and renal cortical attenuation values were 70.7 (8.7) HU and 49.6 (9.2) HU for Group 1 cats, 71.4 (7.9) HU and 48.6 (9.1) HU for Group 2, and 68.9 (7.6) HU and 47.6 (7.2) HU for Group 3. There were no significant differences in hepatic or renal attenuation among groups. Findings indicated that CT measures of X-ray attenuation in the liver and kidney may not be accurate predictors of naturally occurring hepatic lipidosis in cats. © 2013 American College of Veterinary Radiology.

  14. Electromyographic and biomechanical analysis of step negotiation in Charcot Marie Tooth subjects whose level walk is not impaired.

    PubMed

    Lencioni, Tiziana; Piscosquito, Giuseppe; Rabuffetti, Marco; Sipio, Enrica Di; Diverio, Manuela; Moroni, Isabella; Padua, Luca; Pagliano, Emanuela; Schenone, Angelo; Pareyson, Davide; Ferrarin, Maurizio

    2018-05-01

    Charcot-Marie-Tooth (CMT) disease is a slowly progressive disorder characterized by muscular weakness and wasting with a length-dependent pattern. Mildly affected CMT subjects show slight alterations of walking compared to healthy subjects (HS). The aim was to investigate the biomechanics of step negotiation, a task that requires greater muscle strength and balance control than level walking, in CMT subjects without primary locomotor deficits (foot drop and push-off deficit) during walking. We collected kinematic, kinetic, and surface electromyographic data during walking on level ground and during step negotiation from 98 CMT subjects with mild-to-moderate impairment. Twenty-one CMT subjects (CMT-NLW, normal-like walkers) were selected for analysis, as they showed normalized ankle ROM during swing and work produced at push-off comparable to those of 31 HS. The step negotiation tasks consisted of climbing and descending a two-step stair; only the first step provided ground reaction force data. To assess muscle activity, each EMG profile was integrated over 100% of task duration and the activation percentage was computed in the four phases that constitute the step negotiation tasks. In both tasks, CMT-NLW showed distal muscle hypoactivation. In addition, during step ascent, CMT-NLW subjects had markedly lower activity of vastus medialis and rectus femoris than HS in weight acceptance and, conversely, greater activation than HS in forward continuance. During step descent, CMT-NLW showed reduced activity of tibialis anterior during the controlled-lowering phase. Step negotiation revealed adaptive motor strategies related to disease-induced muscle weakness in CMT subjects without any clinically apparent locomotor deficit during level walking. This study also provides results useful for tailored rehabilitation of CMT patients. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Weakly Ionized Plasmas in Hypersonics: Fundamental Kinetics and Flight Applications

    NASA Astrophysics Data System (ADS)

    Macheret, Sergey

    2005-05-01

    The paper reviews some of the recent studies of applications of weakly ionized plasmas to supersonic/hypersonic flight. Plasmas can be used simply as a means of delivering energy (heating) to the flow, and also for electromagnetic flow control and magnetohydrodynamic (MHD) power generation. Plasma and MHD control can be especially effective in transient off-design flight regimes. In cold air flow, nonequilibrium plasmas must be created, and the ionization power budget determines the design, performance envelope, and the very practicality of plasma/MHD devices. The minimum power budget is provided by electron beams and repetitive high-voltage nanosecond pulses, and the paper describes theoretical and computational modeling of plasmas created by the beams and repetitive pulses. The models include coupled equations for the non-local and unsteady electron energy distribution function (modeled in the forward-back approximation), plasma kinetics, and the electric field. Recent experimental studies at Princeton University have successfully demonstrated stable diffuse plasmas sustained by repetitive nanosecond pulses in supersonic air flow, and for the first time have demonstrated the existence of MHD effects in such plasmas. Cold-air hypersonic MHD devices are shown to permit optimization of scramjet inlets at Mach numbers higher than the design value, while operating in a self-powered regime. Plasma energy addition upstream of the inlet throat can increase the thrust by capturing more air (Virtual Cowl), or it can reduce the flow Mach number and thus eliminate the need for an isolator duct. In the latter two cases, the power that needs to be supplied to the plasma would be generated by an MHD generator downstream of the combustor, thus forming the "reverse energy bypass" scheme. MHD power generation on board reentry vehicles is also discussed.

  16. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be a complex and difficult problem. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  17. Assessment of nonequilibrium radiation computation methods for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Sharma, Surendra

    1993-01-01

    The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

  18. The relationship between physical activity and 2-hydroxyestrone, 16alpha-hydroxyestrone, and the 2/16 ratio in premenopausal women (United States).

    PubMed

    Bentz, Ann T; Schneider, Carole M; Westerlind, Kim C

    2005-05-01

    Estrogen is metabolized in the body through two mutually exclusive pathways yielding metabolites with different biological activities: the low estrogenic 2-hydroxyestrone (2-OHE1) and the highly estrogenic 16alpha-hydroxyestrone (16alpha-OHE1). The ratio of these metabolites (2/16) may be predictive of risk for developing breast cancer. Early evidence has demonstrated that exercise may alter estrogen metabolism to favor the weak estrogen, 2-OHE1. Seventy-seven eumenorrheic females completed physical activity logs for two weeks prior to providing a luteal phase urine sample. Concentrations of 2-OHE1 and 16alpha-OHE1 were measured and the 2/16 ratio computed. Hierarchical regression, controlling for age and body mass index (BMI), was used to determine relationships between estrogen metabolites and daily physical activity. Regression analyses indicated significant positive relationships between physical activity and 2-OHE1 and the 2/16 ratio (p < 0.05) that appears to be independent of BMI. 16alpha-OHE1 was not significantly related to physical activity. These results indicate that physical activity may modulate estrogen metabolism to favor the weak estrogen, 2-OHE1, thus producing a higher 2/16 ratio. This alteration in estrogen metabolism may represent one of the mechanisms by which increased physical activity reduces breast cancer risk.
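
    The hierarchical regression described above enters the control variables (age, BMI) first and then asks how much additional variance physical activity explains. A sketch on synthetic data (all coefficients and effect sizes are hypothetical; the point is the R-squared increment, not the study's actual numbers or software):

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least squares fit; fraction of variance in y explained by X.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 77  # sample size as reported above; the data themselves are synthetic
age = rng.uniform(20, 40, n)
bmi = rng.uniform(18, 30, n)
activity = rng.uniform(0, 10, n)
# Assumed outcome model: the 2/16 ratio increases with physical activity.
ratio = 1.0 + 0.2 * activity + 0.01 * age - 0.02 * bmi + rng.normal(0, 0.3, n)

ones = np.ones(n)
X1 = np.column_stack([ones, age, bmi])            # step 1: controls only
X2 = np.column_stack([ones, age, bmi, activity])  # step 2: add activity
r2_step1 = r_squared(X1, ratio)
r2_step2 = r_squared(X2, ratio)
delta_r2 = r2_step2 - r2_step1  # variance uniquely attributable to activity
```

    A positive delta_r2 after controlling for age and BMI is what "independent of BMI" means in the abstract.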

  19. Comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering.

    PubMed

    Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan

    2017-03-01

    Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
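
    To convey the flavor of a structure-free two-sample covariance comparison, here is a toy max-type statistic: each entry-wise difference of the two sample covariance matrices is standardized by a plug-in estimate of its sampling variance, and the maximum is taken. This is only a sketch in the spirit of such tests, not the procedure implemented in the HDtest package:

```python
import numpy as np

def max_cov_diff_stat(X, Y):
    # Max-type statistic for H0: cov(X) == cov(Y), with no structural
    # assumptions on the covariance matrices. Each (j, k) covariance
    # difference is standardized by a plug-in variance estimate; a large
    # maximum is evidence against H0.
    n1, _ = X.shape
    n2, _ = Y.shape
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    S1 = Xc.T @ Xc / n1
    S2 = Yc.T @ Yc / n2
    V1 = ((Xc[:, :, None] * Xc[:, None, :] - S1) ** 2).mean(axis=0) / n1
    V2 = ((Yc[:, :, None] * Yc[:, None, :] - S2) ** 2).mean(axis=0) / n2
    return float(((S1 - S2) ** 2 / (V1 + V2)).max())

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
Y = rng.normal(size=(200, 10))        # same covariance as X
Z = 3.0 * rng.normal(size=(200, 10))  # inflated covariance
t_null = max_cov_diff_stat(X, Y)  # moderate when H0 holds
t_alt = max_cov_diff_stat(X, Z)   # much larger when covariances differ
```

    In practice the null distribution of such a maximum is calibrated by extreme-value asymptotics or resampling, which is where the paper's weak moment conditions enter.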

  20. NON-GAUSSIANITIES IN THE LOCAL CURVATURE OF THE FIVE-YEAR WMAP DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudjord, Oeystein; Groeneboom, Nicolaas E.; Hansen, Frode K.

    Using the five-year WMAP data, we re-investigate claims of non-Gaussianities and asymmetries detected in local curvature statistics of the one-year WMAP data. In Hansen et al., it was found that the northern ecliptic hemisphere was non-Gaussian at the ~1% level, testing the densities of hill, lake, and saddle points based on the second derivatives of the cosmic microwave background temperature map. The five-year WMAP data have a much lower noise level and better control of systematics. Using these, we find that the anomalies are still present at a consistent level. Also the direction of maximum non-Gaussianity remains. Due to limited availability of computer resources, Hansen et al. were unable to calculate the full covariance matrix for the χ²-test used. Here, we apply the full covariance matrix instead of the diagonal approximation and find that the non-Gaussianities disappear and there is no preferred non-Gaussian direction. We compare with simulations of weak lensing to see if this may cause the observed non-Gaussianity when using a diagonal covariance matrix. We conclude that weak lensing does not produce non-Gaussianity in the local curvature statistics at the scales investigated in this paper. The cause of the non-Gaussian detection in the case of a diagonal matrix remains unclear.
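
    The step that changes the conclusion above is replacing a diagonal covariance approximation with the full matrix in the χ² statistic. A generic sketch (d and cov are hypothetical stand-ins for the curvature-statistic residual vector and its simulation-estimated covariance, not WMAP pipeline code):

```python
import numpy as np

def chi2_stat(d, cov, diagonal_only=False):
    # Chi-square statistic for a residual vector d with covariance cov.
    # diagonal_only=True ignores correlations between the statistics,
    # which is the approximation the full-matrix treatment replaces.
    if diagonal_only:
        return float(np.sum(d ** 2 / np.diag(cov)))
    return float(d @ np.linalg.solve(cov, d))

cov = np.array([[1.0, 0.9], [0.9, 1.0]])  # strongly correlated statistics
d = np.array([1.0, 1.0])
full = chi2_stat(d, cov)                      # ~1.05: unremarkable
diag = chi2_stat(d, cov, diagonal_only=True)  # 2.0: looks more significant
```

    When the statistics are strongly correlated, a common fluctuation is double-counted by the diagonal approximation, so an apparently significant χ² can evaporate once the full covariance is used, as reported above.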

  1. Indoor radon and childhood leukaemia.

    PubMed

    Raaschou-Nielsen, Ole

    2008-01-01

    This paper summarises the epidemiological literature on domestic exposure to radon and risk for childhood leukaemia. The results of 12 ecological studies show a consistent pattern of higher incidence and mortality rates for childhood leukaemia in areas with higher average indoor radon concentrations. Although the results of such studies are useful to generate hypotheses, they must be interpreted with caution, as the data were aggregated and analysed for geographical areas and not for individuals. The seven available case-control studies of childhood leukaemia with measurement of radon concentrations in the residences of cases and controls gave mixed results, however, with some indication of a weak (relative risk < 2) association with acute lymphoblastic leukaemia. The epidemiological evidence to date suggests that an association between indoor exposure to radon and childhood leukaemia might exist, but is weak. More case-control studies are needed, with sufficient statistical power to detect weak associations and based on designs and methods that minimise misclassification of exposure and provide a high participation rate and low potential selection bias.

  2. Multiple cognitive capabilities/deficits in children with an autism spectrum disorder: "weak" central coherence and its relationship to theory of mind and executive control.

    PubMed

    Pellicano, Elizabeth; Maybery, Murray; Durkin, Kevin; Maley, Alana

    2006-01-01

    This study examined the validity of "weak" central coherence (CC) in the context of multiple cognitive capabilities/deficits in autism. Children with an autism spectrum disorder (ASD) and matched typically developing children were administered tasks tapping visuospatial coherence, false-belief understanding and aspects of executive control. Significant group differences were found in all three cognitive domains. Evidence of local processing on coherence tasks was widespread in the ASD group, but difficulties in attributing false beliefs and in components of executive functioning were present in fewer of the children with ASD. This cognitive profile was generally similar for younger and older children with ASD. Furthermore, weak CC was unrelated to false-belief understanding, but aspects of coherence (related to integration) were associated with aspects of executive control. Few associations were found between cognitive variables and indices of autistic symptomatology. Implications for CC theory are discussed.

  3. Applications of high-dimensional photonic entanglement

    NASA Astrophysics Data System (ADS)

    Broadbent, Curtis J.

    This thesis presents the results of four experiments related to applications of higher dimensional photonic entanglement. (1) We use energy-time entangled biphotons from spontaneous parametric down-conversion (SPDC) to implement a large-alphabet quantum key distribution (QKD) system which securely transmits up to 10 bits of the random key per photon. An advantage over binary alphabet QKD is demonstrated for quantum channels with a single-photon transmission-rate ceiling. The security of the QKD system is based on the measurable reduction of entanglement in the presence of eavesdropping. (2) We demonstrate the preservation of energy-time entanglement in a tunable slow-light medium. The fine-structure resonances of a hot Rubidium vapor are used to slow one photon from an energy-time entangled biphoton generated with non-degenerate SPDC. The slow-light medium is placed in one arm of a Franson interferometer. The observed Franson fringes witness the presence of entanglement and quantify a delay of 1.3 biphoton correlation lengths. (3) We utilize holograms to discriminate between two spatially-coherent single-photon images. Heralded single photons are created with degenerate SPDC and sent through one of two transmission masks to make single-photon images with no spatial overlap. The single-photon images are sent through a previously prepared holographic filter. The filter discriminates the single-photon images with an average confidence level of 95%. (4) We employ polarization entangled biphotons generated from non-collinear SPDC to violate a generalized Leggett-Garg inequality with non-local weak measurements. The weak measurement is implemented with Fresnel reflection of a microscope coverslip on one member of the entangled biphoton. Projective measurement with computer-controlled polarizers on the entangled state after the weak measurement yields a joint probability with three degrees of freedom. 
Contextual values are then used to determine statistical averages of measurement operations from the joint probability. Correlations between the measured averages are shown to violate the upper bound of three distinct two-object Leggett-Garg inequalities derived from assumptions of macro-realism. A relationship between the violation of two-object Leggett-Garg inequalities and strange non-local weak values is derived and experimentally demonstrated.

  4. Problem Solving and Computational Skill: Are They Shared or Distinct Aspects of Mathematical Cognition?

    PubMed Central

    Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.

    2009-01-01

    The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912

  5. Unsteady Aerodynamic Validation Experiences From the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel

    2014-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April 2012, bringing together communities of aeroelasticians, computational fluid dynamicists and experimentalists. The extended objective was to assess the state of the art in computational aeroelastic methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. As a step in this process, workshop participants analyzed unsteady aerodynamic and weakly-coupled aeroelastic cases. Forced oscillation and unforced system experiments and computations have been compared for three configurations. This paper emphasizes interpretation of the experimental data, computational results and their comparisons from the perspective of validation of unsteady system predictions. The issues examined in detail are variability introduced by input choices for the computations, post-processing, and static aeroelastic modeling. The final issue addressed is interpreting unsteady information that is present in experimental data that is assumed to be steady, and the resulting consequences on the comparison data sets.

  6. Tunable Fano resonance using weak-value amplification with asymmetric spectral response as a natural pointer

    NASA Astrophysics Data System (ADS)

    Singh, Ankit K.; Ray, Subir K.; Chandel, Shubham; Pal, Semanti; Gupta, Angad; Mitra, P.; Ghosh, N.

    2018-05-01

    Weak measurement enables faithful amplification and high-precision measurement of small physical parameters and is under intensive investigation as an effective tool in metrology and for addressing foundational questions in quantum mechanics. Here we demonstrate weak-value amplification using the asymmetric spectral response of Fano resonance as the pointer arising naturally in precisely designed metamaterials, namely, waveguided plasmonic crystals. The weak coupling between the polarization degree of freedom and the spectral response of Fano resonance arises due to a tiny shift in the asymmetric spectral response between two orthogonal linear polarizations. By choosing the preselected and postselected polarization states to be nearly mutually orthogonal, we observe both real and imaginary weak-value amplifications manifested as a spectacular shift of the Fano-resonance peak and narrowing (or broadening) of the resonance linewidth, respectively. The remarkable control and tunability of Fano resonance in a single device enabled by weak-value amplification may enhance active Fano-resonance-based applications in the nano-optical domain. In general, Fano-type spectral responses broaden the domain of applicability of weak measurements, allowing natural spectral line shapes to serve as pointers in a wide range of physical systems.
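
    The amplification mechanism can be made concrete with the standard weak-value formula A_w = <post|A|pre> / <post|pre>: nearly orthogonal pre- and postselected states make the denominator small, pushing A_w far outside the eigenvalue range of A. A toy two-dimensional polarization calculation (states and operator chosen purely for illustration, not a model of the plasmonic-crystal device):

```python
import math

def weak_value(pre, post, A):
    # A_w = <post|A|pre> / <post|pre> for two-component states.
    num = sum(post[i].conjugate() * A[i][j] * pre[j]
              for i in range(2) for j in range(2))
    den = sum(post[i].conjugate() * pre[i] for i in range(2))
    return num / den

A = [[1, 0], [0, -1]]  # sigma_z: eigenvalues are only +1 and -1
pre = [complex(math.cos(math.pi / 4)), complex(math.sin(math.pi / 4))]
# Postselection chosen nearly orthogonal to the preselected state:
b = 3 * math.pi / 4 - 0.01
post = [complex(math.cos(b)), complex(math.sin(b))]
wv = weak_value(pre, post, A)  # magnitude ~100, far outside [-1, 1]
```

    Choosing complex overlaps instead of the real ones used here yields an imaginary weak value, the counterpart of the linewidth change described above.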

  7. Weak nanoscale chaos and anomalous relaxation in DNA

    NASA Astrophysics Data System (ADS)

    Mazur, Alexey K.

    2017-06-01

    Anomalous nonexponential relaxation in hydrated biomolecules is commonly attributed to the complexity of the free-energy landscapes, similarly to polymers and glasses. It was found recently that the hydrogen-bond breathing of terminal DNA base pairs exhibits a slow power-law relaxation attributable to weak Hamiltonian chaos, with parameters similar to experimental data. Here, the relationship is studied between this motion and spectroscopic signals measured in DNA with a small molecular photoprobe inserted into the base-pair stack. To this end, the earlier computational approach in combination with an analytical theory is applied to the experimental DNA fragment. It is found that the intensity of breathing dynamics is strongly increased in the internal base pairs that flank the photoprobe, with anomalous relaxation quantitatively close to that in terminal base pairs. A physical mechanism is proposed to explain the coupling between the relaxation of base-pair breathing and the experimental response signal. It is concluded that the algebraic relaxation observed experimentally is very likely a manifestation of weakly chaotic dynamics of hydrogen-bond breathing in the base pairs stacked to the photoprobe and that the weak nanoscale chaos can represent a ubiquitous hidden source of nonexponential relaxation in ultrafast spectroscopy.

  8. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article aims at proposing an original strategy to solve hydrodynamic flows. The motivations for this strategy are first developed: it aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.
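
    To show the fully explicit weakly-compressible update in its simplest possible form, here is one forward-Euler step of 1-D linearized acoustics on a periodic Cartesian grid. This is a toy sketch of the explicit formalism only, with hypothetical parameters; the actual WCCH solver adds viscosity, AMR, the immersed boundary treatment, and high-order schemes:

```python
import numpy as np

def explicit_wc_step(rho, u, dx, dt, c0, rho0):
    # One explicit step of the linearized weakly-compressible system
    #   d(rho)/dt = -rho0 * du/dx,   du/dt = -(c0**2 / rho0) * d(rho)/dx,
    # using central differences on a periodic Cartesian grid.
    drho_dx = (np.roll(rho, -1) - np.roll(rho, 1)) / (2.0 * dx)
    du_dx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return rho - dt * rho0 * du_dx, u - dt * (c0 ** 2 / rho0) * drho_dx

rho0, c0, dx, dt = 1.0, 10.0, 0.1, 1e-3  # acoustic CFL = c0*dt/dx = 0.1
x = np.arange(64) * dx
rho = rho0 + 0.01 * np.sin(2.0 * np.pi * x / (64 * dx))  # small perturbation
u = np.zeros(64)
rho1, u1 = explicit_wc_step(rho, u, dx, dt, c0, rho0)
rho2, u2 = explicit_wc_step(rho1, u1, dx, dt, c0, rho0)
```

    Because the spatial operator is a central difference on a periodic grid, the discrete total mass sum(rho)*dx is conserved at each explicit step, which is easy to verify on this toy case.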

  9. Thermalization and light cones in a model with weak integrability breaking

    DOE PAGES

    Bertini, Bruno; Essler, Fabian H. L.; Groha, Stefan; ...

    2016-12-09

    Here, we employ equation-of-motion techniques to study the nonequilibrium dynamics in a lattice model of weakly interacting spinless fermions. Our model provides a simple setting for analyzing the effects of weak integrability-breaking perturbations on the time evolution after a quantum quench. We establish the accuracy of the method by comparing results at short and intermediate times to time-dependent density matrix renormalization group computations. For sufficiently weak integrability-breaking interactions we always observe prethermalization plateaus, where local observables relax to nonthermal values at intermediate time scales. At later times a crossover towards thermal behavior sets in. We determine the associated time scale, which depends on the initial state, the band structure of the noninteracting theory, and the strength of the integrability-breaking perturbation. Our method allows us to analyze in some detail the spreading of correlations and in particular the structure of the associated light cones in our model. We find that the interior and exterior of the light cone are separated by an intermediate region, the temporal width of which appears to scale with a universal power law t^(1/3).

  10. Weak nanoscale chaos and anomalous relaxation in DNA.

    PubMed

    Mazur, Alexey K

    2017-06-01

    Anomalous nonexponential relaxation in hydrated biomolecules is commonly attributed to the complexity of the free-energy landscapes, similarly to polymers and glasses. It was found recently that the hydrogen-bond breathing of terminal DNA base pairs exhibits a slow power-law relaxation attributable to weak Hamiltonian chaos, with parameters similar to experimental data. Here, the relationship is studied between this motion and spectroscopic signals measured in DNA with a small molecular photoprobe inserted into the base-pair stack. To this end, the earlier computational approach in combination with an analytical theory is applied to the experimental DNA fragment. It is found that the intensity of breathing dynamics is strongly increased in the internal base pairs that flank the photoprobe, with anomalous relaxation quantitatively close to that in terminal base pairs. A physical mechanism is proposed to explain the coupling between the relaxation of base-pair breathing and the experimental response signal. It is concluded that the algebraic relaxation observed experimentally is very likely a manifestation of weakly chaotic dynamics of hydrogen-bond breathing in the base pairs stacked to the photoprobe and that the weak nanoscale chaos can represent a ubiquitous hidden source of nonexponential relaxation in ultrafast spectroscopy.

  11. Gravitational lensing by rotating naked singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyulchev, Galin N.; Yazadjiev, Stoytcho S.; Institut fuer Theoretische Physik, Universitaet Goettingen, Friedrich-Hund-Platz 1, D-37077 Goettingen

    We model massive compact objects in galactic nuclei as stationary, axially symmetric naked singularities in the Einstein-massless scalar field theory and study the resulting gravitational lensing. In the weak deflection limit we study analytically the position of the two weak field images, the corresponding signed and absolute magnifications, as well as the centroid, up to post-Newtonian order. We show that there are static post-Newtonian corrections to the signed magnifications and their sum, as well as to the critical curves, which are functions of the scalar charge. The shift of the critical curves as a function of the lens angular momentum is found, and it is shown that they decrease slightly for weakly naked singularities and substantially for strongly naked singularities as the scalar charge increases. The pointlike caustics drift away from the optical axis and do not depend on the scalar charge. In the strong deflection limit approximation, we compute numerically the position of the relativistic images and their separability for weakly naked singularities. All of the lensing quantities are compared to particular cases such as Schwarzschild and Kerr black holes, as well as Janis-Newman-Winicour naked singularities.

  12. Propagation of electromagnetic soliton in a spin polarized current driven weak ferromagnetic nanowire

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, V.; Kavitha, L.; Gopi, D.

    2017-11-01

    We investigate the nonlinear spin dynamics of a spin-polarized current driven anisotropic ferromagnetic nanowire with Dzyaloshinskii-Moriya interaction (DMI) under the influence of an electromagnetic wave (EMW) propagating along the axis of the nanowire. The magnetization dynamics and electromagnetic wave propagation in the ferromagnetic nanowire with weak anti-symmetric interaction are governed by coupled vector Landau-Lifshitz-Gilbert and Maxwell's equations. These coupled nonlinear vector equations are recast into the extended derivative nonlinear Schrödinger (EDNLS) equation in the framework of the reductive perturbation method. Since modulational instability is well known as a precursor for the emergence of localized envelope structures of various kinds, we compute the instability criteria for the weak ferromagnetic nanowire through linear stability analysis. Further, we invoke the homogeneous balance method to construct kink- and anti-soliton-like electromagnetic (EM) soliton profiles for the EDNLS equation. We also explore the appreciable effect of the anti-symmetric weak interaction on the magnetization components of the propagating EM soliton. We find that the combination of spin-polarized current and the anti-symmetric DMI has a profound effect on the propagating EMW in a weak ferromagnetic nanowire. Thus, the anti-symmetric DMI in a spin-polarized current driven ferromagnetic nanowire supports the lossless propagation of EM solitons, which may have potential applications in magnetic data storage devices.

  13. Precise control of molecular dynamics with a femtosecond frequency comb.

    PubMed

    Pe'er, Avi; Shapiro, Evgeny A; Stowe, Matthew C; Shapiro, Moshe; Ye, Jun

    2007-03-16

    We present a general and highly efficient scheme for performing narrow-band Raman transitions between molecular vibrational levels using a coherent train of weak pump-dump pairs of shaped ultrashort pulses. The use of weak pulses permits an analytic description within the framework of coherent control in the perturbative regime, while coherent accumulation of many pulse pairs enables near unity transfer efficiency with a high spectral selectivity, thus forming a powerful combination of pump-dump control schemes and the precision of the frequency comb. Simulations verify the feasibility and robustness of this concept, with the aim to form deeply bound, ultracold molecules.

  14. Sensor Compromise Detection in Multiple-Target Tracking Systems

    PubMed Central

    Doucette, Emily A.; Curtis, Jess W.

    2018-01-01

    Tracking multiple targets using a single estimator is a problem that is commonly approached within a trusted framework. There are many weaknesses that an adversary can exploit if it gains control over the sensors. Because the number of targets that the estimator has to track is not known in advance, an adversary could cause a loss of information or a degradation in the tracking precision. Other concerns include the introduction of false targets, which would result in a waste of computational and material resources, depending on the application. In this work, we study the problem of detecting compromised or faulty sensors in a multiple-target tracker, starting with the single-sensor case and then considering the multiple-sensor scenario. We propose an algorithm to detect a variety of attacks in the multiple-sensor case, via the application of finite set statistics (FISST), one-class classifiers and hypothesis testing using nonparametric techniques. PMID:29466314

  15. The entropic cost of quantum generalized measurements

    NASA Astrophysics Data System (ADS)

    Mancino, Luca; Sbroscia, Marco; Roccia, Emanuele; Gianani, Ilaria; Somma, Fabrizia; Mataloni, Paolo; Paternostro, Mauro; Barbieri, Marco

    2018-03-01

    Landauer's principle introduces a symmetry between computational and physical processes: erasure of information, a logically irreversible operation, must be underlain by an irreversible transformation dissipating energy. Monitoring micro- and nano-systems needs to enter into the energetic balance of their control; hence, finding the ultimate limits is instrumental to the development of future thermal machines operating at the quantum level. We report on the experimental investigation of a lower bound to the irreversible entropy associated with generalized quantum measurements on a quantum bit. We adopted a quantum photonics gate to implement a device interpolating from the weakly disturbing to the fully invasive and maximally informative regime. Our experiment prompted us to introduce a bound taking into account both the classical result of the measurement and the outgoing quantum state; unlike previous investigations, our entropic bound is based uniquely on measurable quantities. Our results highlight what insights the information-theoretic approach provides on building blocks of quantum information processors.

  16. Discrete solitons and vortices in anisotropic hexagonal and honeycomb lattices

    DOE PAGES

    Hoq, Q. E.; Kevrekidis, P. G.; Bishop, A. R.

    2016-01-14

    We consider the self-focusing discrete nonlinear Schrödinger equation on hexagonal and honeycomb lattice geometries. Our emphasis is on the study of the effects of anisotropy, motivated by the tunability afforded in recent optical and atomic physics experiments. We find that multi-soliton and discrete vortex states undergo destabilizing bifurcations as the relevant anisotropy control parameter is varied. Furthermore, we quantify these bifurcations by means of explicit analytical calculations of the solutions, as well as of their spectral linearization eigenvalues. Finally, we corroborate the relevant stability picture through direct numerical computations. In the latter, we observe the prototypical manifestation of these instabilities to be the spontaneous rearrangement of the solution, for larger values of the coupling, into localized waveforms typically centered over fewer sites than the original unstable structure. In weak coupling, the instability appears to result in a robust breathing of the relevant waveforms.

  18. Power Spectrum of a Noisy System Close to a Heteroclinic Orbit

    NASA Astrophysics Data System (ADS)

    Giner-Baldó, Jordi; Thomas, Peter J.; Lindner, Benjamin

    2017-07-01

    We consider a two-dimensional dynamical system that possesses a heteroclinic orbit connecting four saddle points. This system is not able to show self-sustained oscillations on its own. If endowed with white Gaussian noise, it displays stochastic oscillations, the frequency and quality factor of which are controlled by the noise intensity. This stochastic oscillation of a nonlinear system with noise is conveniently characterized by the power spectrum of suitable observables. In this paper we explore different analytical and semianalytical ways to compute such power spectra. Besides a number of explicit expressions for the power spectrum, we find scaling relations for the frequency, spectral width, and quality factor of the stochastic heteroclinic oscillator in the limit of weak noise. In particular, the quality factor shows a slow logarithmic increase with decreasing noise of the form Q ~ [ln(1/D)]^2. Our results are compared to numerical simulations of the respective Langevin equations.
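    As a rough illustration of how such spectra are obtained from Langevin simulations, the sketch below integrates a noise-driven damped oscillator (a stand-in for the heteroclinic system; all parameter values are assumptions for illustration) with the Euler-Maruyama method and reads the spectral peak off a periodogram:

```python
import numpy as np

# Euler-Maruyama integration of a noise-driven damped oscillator
# (illustrative stand-in for the stochastic heteroclinic system).
rng = np.random.default_rng(0)
omega, gamma, D = 2 * np.pi, 0.1, 0.05   # natural frequency (1 Hz), damping, noise intensity
dt, n = 1e-3, 2**18
x, v = 1.0, 0.0
xs = np.empty(n)
for i in range(n):
    # dx/dt = v,  dv/dt = -omega^2 x - gamma v + sqrt(2D) xi(t)
    x += v * dt
    v += (-omega**2 * x - gamma * v) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    xs[i] = x

# Periodogram estimate of the power spectrum of x(t)
spec = np.abs(np.fft.rfft(xs))**2 * dt / n
freqs = np.fft.rfftfreq(n, d=dt)
f_peak = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(f"spectral peak near {f_peak:.2f} Hz")
```

    The quality factor would then follow as the ratio of the peak frequency to the spectral width at half maximum.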

  19. Selection on Network Dynamics Drives Differential Rates of Protein Domain Evolution

    PubMed Central

    Mannakee, Brian K.; Gutenkunst, Ryan N.

    2016-01-01

    The long-held principle that functionally important proteins evolve slowly has recently been challenged by studies in mice and yeast showing that the severity of a protein knockout only weakly predicts that protein’s rate of evolution. However, the relevance of these studies to evolutionary changes within proteins is unknown, because amino acid substitutions, unlike knockouts, often only slightly perturb protein activity. To quantify the phenotypic effect of small biochemical perturbations, we developed an approach to use computational systems biology models to measure the influence of individual reaction rate constants on network dynamics. We show that this dynamical influence is predictive of protein domain evolutionary rate within networks in vertebrates and yeast, even after controlling for expression level and breadth, network topology, and knockout effect. Thus, our results not only demonstrate the importance of protein domain function in determining evolutionary rate, but also the power of systems biology modeling to uncover unanticipated evolutionary forces. PMID:27380265

  20. Quantum spin transistor with a Heisenberg spin chain

    PubMed Central

    Marchukov, O. V.; Volosniev, A. G.; Valiente, M.; Petrosyan, D.; Zinner, N. T.

    2016-01-01

    Spin chains are paradigmatic systems for the studies of quantum phases and phase transitions, and for quantum information applications, including quantum computation and short-distance quantum communication. Here we propose and analyse a scheme for conditional state transfer in a Heisenberg XXZ spin chain which realizes a quantum spin transistor. In our scheme, the absence or presence of a control spin excitation in the central gate part of the spin chain results in either perfect transfer of an arbitrary state of a target spin between the weakly coupled input and output ports, or its complete blockade at the input port. We also discuss a possible proof-of-concept realization of the corresponding spin chain with a one-dimensional ensemble of cold atoms with strong contact interactions. Our scheme is generally applicable to various implementations of tunable spin chains, and it paves the way for the realization of integrated quantum logic elements. PMID:27721438

  2. Observing quantum control of up-conversion luminescence in Dy3+ ion doped glass from weak to intermediate shaped femtosecond laser fields

    NASA Astrophysics Data System (ADS)

    Liu, Pei; Cheng, Wenjing; Yao, Yunhua; Xu, Cheng; Zheng, Ye; Deng, Lianzhong; Jia, Tianqing; Qiu, Jianrong; Sun, Zhenrong; Zhang, Shian

    2017-11-01

    Controlling the up-conversion luminescence of rare-earth ions in real time, in a dynamical and reversible manner, is very important for their application in laser sources, fiber-optic communications, light-emitting diodes, color displays and biological systems. Previous studies of up-conversion luminescence control mainly focused on the weak femtosecond laser field. Here, we extend this control behavior from weak to intermediate femtosecond laser fields. We experimentally and theoretically demonstrate that the up-conversion luminescence in Dy3+ ion doped glass can be controlled by a π phase step modulation, but that the control behavior depends on the femtosecond laser intensity: the up-conversion luminescence is suppressed at lower laser intensities and enhanced at higher ones. We establish a new theoretical model (fourth-order perturbation theory) to explain the physical control mechanism by considering the two- and four-photon absorption processes. The theoretical results show that the relative weight of four-photon absorption in the whole excitation process increases with laser intensity, and that the interference between two- and four-photon absorption yields the different modulations of the up-conversion luminescence at different laser intensities. These theoretical and experimental results provide a new method to control and understand up-conversion luminescence in rare-earth ions, and may open new opportunities in related application areas of rare-earth ions.

  3. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  4. Simulating the effect of muscle weakness and contracture on neuromuscular control of normal gait in children.

    PubMed

    Fox, Aaron S; Carty, Christopher P; Modenese, Luca; Barber, Lee A; Lichtwark, Glen A

    2018-03-01

    Altered neural control of movement and musculoskeletal deficiencies are common in children with spastic cerebral palsy (SCP), with muscle weakness and contracture commonly experienced. Both neural and musculoskeletal deficiencies are likely to contribute to abnormal gait, such as equinus gait (toe-walking), in children with SCP. However, it is not known whether the musculoskeletal deficiencies prevent normal gait or if neural control could be altered to achieve normal gait. This study examined the effect of simulated muscle weakness and contracture of the major plantarflexor/dorsiflexor muscles on the neuromuscular requirements for achieving normal walking gait in children. Initial muscle-driven simulations of walking with normal musculoskeletal properties by typically developing children were undertaken. Additional simulations with altered musculoskeletal properties were then undertaken; with muscle weakness and contracture simulated by reducing the maximum isometric force and tendon slack length, respectively, of selected muscles. Muscle activations and forces required across all simulations were then compared via waveform analysis. Maintenance of normal gait appeared robust to muscle weakness in isolation, with increased activation of weakened muscles the major compensatory strategy. With muscle contracture, reduced activation of the plantarflexors was required across the mid-portion of stance suggesting a greater contribution from passive forces. Increased activation and force during swing was also required from the tibialis anterior to counteract the increased passive forces from the simulated dorsiflexor muscle contracture. Improvements in plantarflexor and dorsiflexor motor function and muscle strength, concomitant with reductions in plantarflexor muscle stiffness may target the deficits associated with SCP that limit normal gait. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Learning Science through Computer Games and Simulations

    ERIC Educational Resources Information Center

    Honey, Margaret A., Ed.; Hilton, Margaret, Ed.

    2011-01-01

    At a time when scientific and technological competence is vital to the nation's future, the weak performance of U.S. students in science reflects the uneven quality of current science education. Although young children come to school with innate curiosity and intuitive ideas about the world around them, science classes rarely tap this potential.…

  6. Detecting Strengths and Weaknesses in Learning Mathematics through a Model Classifying Mathematical Skills

    ERIC Educational Resources Information Center

    Karagiannakis, Giannis N.; Baccaglini-Frank, Anna E.; Roussos, Petros

    2016-01-01

    Through a review of the literature on mathematical learning disabilities (MLD) and low achievement in mathematics (LA) we have proposed a model classifying mathematical skills involved in learning mathematics into four domains (Core number, Memory, Reasoning, and Visual-spatial). In this paper we present a new experimental computer-based battery…

  7. Receiver-Coupling Schemes Based On Optimal-Estimation Theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, optimal mutual-coupling weights are computed according to Kalman-filter theory; the schemes differ in the manner in which the receiver outputs are transmitted and combined.
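    The benefit of optimally weighting and coherently combining receiver outputs can be illustrated with minimum-variance (inverse-variance) weights; this is a generic estimation-theory sketch, not the Kalman-filter scheme of the record, and all signal and noise parameters are invented for illustration:

```python
import numpy as np

# Three receivers observe the same weak signal in independent Gaussian
# noise of known variance; inverse-variance weights minimize the noise
# variance of the coherently combined output (illustrative parameters).
rng = np.random.default_rng(2)
n = 20000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))   # common weak signal
sigmas = np.array([1.0, 2.0, 4.0])                 # per-receiver noise std
obs = signal + sigmas[:, None] * rng.standard_normal((3, n))

# Minimum-variance combining weights, normalized to preserve the signal
w = (1.0 / sigmas**2) / np.sum(1.0 / sigmas**2)
combined = w @ obs

def noise_var(x):
    return np.mean((x - signal) ** 2)

# The combined noise variance approaches 1 / sum(1/sigma_i^2), better
# than the best single receiver on its own.
print(noise_var(combined), 1.0 / np.sum(1.0 / sigmas**2))
```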

  8. Computer-Assisted Simulation Methods of Learning Process

    ERIC Educational Resources Information Center

    Mayer, Robert V.

    2015-01-01

    In this article we analyse: 1) one-component models of training; 2) multi-component models considering the transition of weak knowledge into strong knowledge and vice versa; and 3) models considering the change in the pupil's working efficiency during the day. The results of simulation modeling are presented, with graphs of the dependence of the pupil's knowledge on…

  9. A Formative Analysis of Resources Used to Learn Software

    ERIC Educational Resources Information Center

    Kay, Robin

    2007-01-01

    A comprehensive, formal comparison of resources used to learn computer software has yet to be researched. Understanding the relative strengths and weakness of resources would provide useful guidance to teachers and students. The purpose of the current study was to explore the effectiveness of seven key resources: human assistance, the manual, the…

  10. Exotic Gauge Bosons in the 331 Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, D.; Ravinez, O.; Diaz, H.

    We analyze the bosonic sector of the 331 model, which contains exotic leptons, quarks and bosons (E,J,U,V) in order to satisfy the weak gauge SU(3){sub L} invariance. We develop the Feynman rules of the entire kinetic bosonic sector, which will let us compute some of the Z(0)' decay modes.

  11. Word Learning Deficits in Children with Dyslexia

    ERIC Educational Resources Information Center

    Alt, Mary; Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-01-01

    Purpose: The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Method: Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets…

  12. Checklists for the Evaluation of Educational Software: Critical Review and Prospects.

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf

    1998-01-01

    Reviews strengths and weaknesses of check lists for the evaluation of computer software and outlines consequences for their practical application. Suggests an approach based on an instructional design model and a comprehensive framework to cope with problems of validity and predictive power of software evaluation. Discusses prospects of the…

  13. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  14. Communication: HK propagator uniformized along a one-dimensional manifold in weakly anharmonic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocia, Lucas, E-mail: lkocia@fas.harvard.edu; Heller, Eric J.

    2014-11-14

    A simplification of the Heller-Herman-Kluk-Kay (HK) propagator is presented that does not suffer from the need for an increasing number of trajectories with dimensions of the system under study. This is accomplished by replacing HK's uniformizing integral over all of phase space by a one-dimensional curve that is appropriately selected to lie along the fastest growing manifold of a defining trajectory. It is shown that this modification leads to eigenspectra of quantum states in weakly anharmonic systems that can outperform the comparatively computationally cheap thawed Gaussian approximation method and frequently approach the accuracy of spectra obtained with the full HK propagator.

  15. Weak lensing in a plasma medium and gravitational deflection of massive particles using the Gauss-Bonnet theorem. A unified treatment

    NASA Astrophysics Data System (ADS)

    Crisnejo, Gabriel; Gallo, Emanuel

    2018-06-01

    We apply the Gauss-Bonnet theorem to the study of light rays in a plasma medium in a static and spherically symmetric gravitational field, and also to the study of timelike geodesics followed by massive test particles in a spacetime with the same symmetries. The possibility of using the theorem follows from a correspondence between timelike curves followed by light rays in a plasma medium and spatial geodesics in an associated Riemannian optical metric. A similar correspondence follows for massive particles. As examples and applications, we compute the deflection angle in weak gravitational fields for different plasma density profiles and gravitational fields.
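    For orientation, the leading-order weak-field deflection angle of light, alpha = 4GM/(c^2 b), and a commonly quoted homogeneous-plasma generalization can be evaluated numerically; the plasma formula below is an assumed textbook form, not necessarily the profile-dependent results derived in the paper:

```python
import math

# Weak-field light deflection: alpha = 4GM/(c^2 b) in vacuum, and an
# assumed homogeneous-plasma form with ratio = (omega_e/omega)^2 < 1.
G_M_SUN = 1.32712440018e20   # GM of the Sun [m^3/s^2]
C = 299792458.0              # speed of light [m/s]
R_SUN = 6.957e8              # solar radius [m], impact parameter of a grazing ray

def deflection_vacuum(gm, b):
    """Leading-order vacuum deflection angle [rad]."""
    return 4.0 * gm / (C**2 * b)

def deflection_plasma(gm, b, ratio):
    """Deflection in a homogeneous plasma; reduces to vacuum for ratio = 0."""
    return 2.0 * gm / (C**2 * b) * (1.0 + 1.0 / (1.0 - ratio))

alpha = deflection_vacuum(G_M_SUN, R_SUN)
arcsec = math.degrees(alpha) * 3600.0
print(f"grazing solar deflection: {arcsec:.2f} arcsec")
```

    For a ray grazing the Sun this reproduces the classic value of about 1.75 arcseconds.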

  16. FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution

    NASA Astrophysics Data System (ADS)

    Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan

    Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). Each of these two approaches has complementary strengths and weaknesses, making it a natural choice to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handling floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open-source programs.
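    The search-based handling of floating-point constraints can be sketched as minimizing a "branch distance" fitness with a simple alternating local search; the constraint, fitness shape, and search schedule below are illustrative assumptions, not Pex's actual algorithm:

```python
import math

# Branch-distance fitness for the (hypothetical) constraint
#   sin(x) > 0.99  and  x*x < 5
# -- zero iff both branches are satisfied.
def fitness(x):
    d1 = max(0.0, 0.99 - math.sin(x))
    d2 = max(0.0, x * x - 5.0)
    return d1 + d2

# Pattern search: accept only improvements, double the step while
# improving, halve it when stuck.
x, step = 100.0, 1.0                 # start far from any solution
best = fitness(x)
while best > 0.0 and step > 1e-12:
    moved = False
    for cand in (x + step, x - step):
        f = fitness(cand)
        if f < best:
            x, best, moved = cand, f, True
            break
    step = step * 2.0 if moved else step / 2.0

print("found x =", x, "fitness =", best)
```

    A symbolic solver has no model of sin(); the search only needs to evaluate the program, which is what makes the combination attractive.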

  17. Numerical simulation of Earth satellite motion using parallel computing: accounting for weak disturbances (Russian Title: Прогнозирование движения ИСЗ с использованием параллельных вычислений. учет слабых возмущений)

    NASA Astrophysics Data System (ADS)

    Chuvashov, I. N.

    2010-12-01

    The features of high-precision numerical simulation of Earth satellite motion using parallel computing are discussed, taking as an example the implementation of the software complex "Numerical model of the motion of satellite systems" on the "Skiff Cyberia" cluster. It is shown that the use of a 128-bit word length makes it possible to account for weak perturbations from the high-order harmonics in the expansion of the geopotential, as well as for the deformation of the geopotential harmonics arising from the combined tidal perturbations exerted by the Moon and Sun on the solid Earth and its oceans.
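    A minimal illustration of why extended word length matters when weak perturbations are accumulated: in 64-bit floating point a tiny increment to a large quantity is lost outright, while higher-precision arithmetic (here Python's decimal module as a stand-in for 128-bit hardware arithmetic) retains it:

```python
from decimal import Decimal, getcontext

# In 64-bit floats the spacing between representable numbers near 1e17
# is about 16, so a 1e-3 perturbation is rounded away entirely.
big, tiny = 1.0e17, 1.0e-3
lost = big + tiny - big
print(lost)          # the weak perturbation has vanished

# With ~40 significant digits (128-bit-class precision) it survives.
getcontext().prec = 40
kept = Decimal("1e17") + Decimal("1e-3") - Decimal("1e17")
print(kept)
```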

  18. Personalization and perceived personal relevance in computer-tailored persuasion in smoking cessation.

    PubMed

    Dijkstra, Arie; Ballast, Karien

    2012-02-01

    In most computer-tailored interventions, the recipient's name is used to personalize the information. This is done to increase persuasion, but few empirical data exist that support this notion. An experimental laboratory study was conducted to test the effects of mentioning the participant's name and to study whether these effects were related to the depth of processing, in a 2 (personalization/standard) × 2 (weak/strong arguments) design. Over 120 student smokers were randomly assigned to one of the four experimental conditions, in which they read smoking cessation messages offering (pre-tested) strong or weak arguments. Personalization was applied by mentioning the recipient's first name three times in the text. The intention to quit smoking was the dependent variable. Personalization increased persuasion when perceived personal relevance was high, but it decreased persuasion when perceived personal relevance was low. The effects on persuasion were only present in the case of strong arguments. Personalization is not always effective, and it may even lead to less persuasion. Therefore, this often used way to tailor messages must be applied with care. ©2011 The British Psychological Society.

  19. Temporal analysis of laser beam propagation in the atmosphere using computer-generated long phase screens.

    PubMed

    Dios, Federico; Recolons, Jaume; Rodríguez, Alejandro; Batet, Oscar

    2008-02-04

    Temporal analysis of the irradiance at the detector plane is intended as the first step in the study of the mean fade time in a free optical communication system. In the present work this analysis has been performed for a Gaussian laser beam propagating in the atmospheric turbulence by means of computer simulation. To this end, we have adapted a previously known numerical method to the generation of long phase screens. The screens are displaced in a transverse direction as the wave is propagated, in order to simulate the wind effect. The amplitude of the temporal covariance and its power spectrum have been obtained at the optical axis, at the beam centroid and at a certain distance from these two points. Results have been worked out for weak, moderate and strong turbulence regimes and when possible they have been compared with theoretical models. These results show a significant contribution of beam wander to the temporal behaviour of the irradiance, even in the case of weak turbulence. We have also found that the spectral bandwidth of the covariance is hardly dependent on the Rytov variance.
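    The basic FFT phase-screen recipe behind such simulations can be sketched as follows; this is the standard single-screen Kolmogorov construction (grid size, spacing, and Fried parameter are arbitrary illustrative values), not the long-screen variant developed in the paper:

```python
import numpy as np

# Standard FFT method for a single Kolmogorov phase screen: filter
# complex white noise with the square root of the phase power spectrum
# Phi(f) = 0.023 r0^(-5/3) f^(-11/3), then transform to the spatial domain.
rng = np.random.default_rng(1)
N, dx, r0 = 256, 0.01, 0.1          # grid points, sample spacing [m], Fried parameter [m]
df = 1.0 / (N * dx)                 # frequency-grid spacing [1/m]

fx = np.fft.fftfreq(N, d=dx)
fxx, fyy = np.meshgrid(fx, fx)
f = np.hypot(fxx, fyy)
f[0, 0] = 1.0                       # dummy value to avoid division by zero

psd = 0.023 * r0**(-5/3) * f**(-11/3)
psd[0, 0] = 0.0                     # remove the piston (zero-frequency) term

noise = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
coeffs = noise * np.sqrt(psd / 2.0) * df
screen = np.real(np.fft.ifft2(coeffs) * N**2)   # phase screen [rad]
print("rms phase:", screen.std(), "rad")
```

    Sliding a window across a much longer screen of this kind, frame by frame, is what emulates the wind-driven transverse displacement described in the abstract.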

  20. Numerical solution of the Hele-Shaw equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, N.

    1987-04-01

    An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.
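    The pressure step described above reduces to a discrete Poisson problem, which a conjugate gradient iteration solves efficiently. The sketch below applies plain CG to the standard 5-point Laplacian with Dirichlet boundaries; it is a minimal stand-in for the paper's interface-aware, modified solver:

```python
import numpy as np

# Plain conjugate gradient for -Laplacian(p) = rhs on the unit square,
# homogeneous Dirichlet boundaries, standard 5-point stencil.
n = 32                      # interior grid points per side
h = 1.0 / (n + 1)

def apply_A(p):
    """Matrix-free 5-point minus-Laplacian."""
    q = 4.0 * p
    q[1:, :] -= p[:-1, :]
    q[:-1, :] -= p[1:, :]
    q[:, 1:] -= p[:, :-1]
    q[:, :-1] -= p[:, 1:]
    return q / h**2

rhs = np.ones((n, n))       # uniform source term
p = np.zeros((n, n))
r = rhs - apply_A(p)
d = r.copy()
rs = np.sum(r * r)
for _ in range(500):        # CG iterations
    Ad = apply_A(d)
    alpha = rs / np.sum(d * Ad)
    p += alpha * d
    r -= alpha * Ad
    rs_new = np.sum(r * r)
    if np.sqrt(rs_new) < 1e-10:
        break
    d = r + (rs_new / rs) * d
    rs = rs_new

residual = np.linalg.norm(rhs - apply_A(p))
print("residual:", residual, "max p:", p.max())
```

    Because the operator is symmetric positive definite, CG is guaranteed to converge; the matrix-free stencil keeps the memory footprint at one grid per vector.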

  1. Development of cyberblog-based intelligent tutorial system to improve students learning ability algorithm

    NASA Astrophysics Data System (ADS)

    Wahyudin; Riza, L. S.; Putro, B. L.

    2018-05-01

    E-learning, as a learning activity conducted online with familiar tools, is favoured by students. The use of computer media in learning provides a benefit not offered by other learning media: the ability of the computer to interact individually with each student. A weakness of many learning media, however, is the assumption that all students have uniform ability, which in reality is not the case. The concept of an Intelligent Tutorial System (ITS) combined with a cyberblog application can overcome this neglect of diversity. An ITS-based cyberblog application is a web-based interactive application program implementing artificial intelligence that can be used as a learning and evaluation medium in the learning process. The use of an ITS-based cyberblog in learning is an alternative learning medium that is engaging and able to help students measure their understanding of the material. This research is concerned with the improvement of students' logical thinking ability, especially in algorithm subjects.

  2. Applications of Ko Displacement Theory to the Deformed Shape Predictions of the Doubly-Tapered Ikhana Wing

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Richards, W. Lance; Fleischer, Van Tran

    2009-01-01

    The Ko displacement theory, formulated for weakly nonuniform (slowly changing cross section) cantilever beams, was applied to the deformed-shape analysis of the doubly-tapered wings of the Ikhana unmanned aircraft. A two-line strain-sensing system (along the wingspan) was used for sensing the bending strains needed for the analysis of the wing's deformed shapes (deflections and cross-sectional twist). The deflection equation for each strain-sensing line was expressed in terms of the bending strains evaluated at multiple strain-sensing stations equally spaced along the line. For the preflight shape analysis of the Ikhana wing, the strain data needed as input to the displacement equations were obtained from the nodal-stress output of a finite-element analysis. The wing deflections and cross-sectional twist angles calculated from the displacement equations were then compared with those computed by the finite-element program. The Ko displacement theory was found to be highly accurate in predicting the deformed shape of the doubly-tapered Ikhana wing.
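
    The deflection equations above integrate measured bending strains along each strain-sensing line. A minimal numerical sketch of the underlying idea (not Ko's closed-form piecewise equations, which handle varying cross sections analytically): convert surface strain to curvature via the section half-depth, then integrate twice with clamped-root boundary conditions.

```python
import numpy as np

def deflection_from_strains(x, strain, half_depth):
    """Cantilever deflection from surface bending strains (clamped root).

    Curvature kappa = strain / c, where c is the distance from the neutral
    axis to the sensing surface; integrating twice with w(0) = w'(0) = 0
    gives the deflection at every strain-sensing station along the line.
    """
    kappa = np.asarray(strain, dtype=float) / half_depth
    dx = np.diff(x)
    # Trapezoidal cumulative integration: curvature -> slope -> deflection.
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * dx)))
    w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
    return w
```

    For a constant strain (uniform curvature kappa0) this reproduces the textbook deflection w = kappa0 x^2 / 2 exactly, since the trapezoid rule is exact for the linear slope.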

  3. Laminar soot processes

    NASA Technical Reports Server (NTRS)

    Sunderland, P. B.; Lin, K.-C.; Faeth, G. M.

    1995-01-01

    Soot processes within hydrocarbon fueled flames are important because they affect the durability and performance of propulsion systems, the hazards of unwanted fires, the pollutant and particulate emissions from combustion processes, and the potential for developing computational combustion. Motivated by these observations, the present investigation is studying soot processes in laminar diffusion and premixed flames in order to better understand the soot and thermal radiation emissions of luminous flames. Laminar flames are being studied due to their experimental and computational tractability, noting the relevance of such results to practical turbulent flames through the laminar flamelet concept. Weakly-buoyant and nonbuoyant laminar diffusion flames are being considered because buoyancy affects soot processes in flames while most practical flames involve negligible effects of buoyancy. Thus, low-pressure weakly-buoyant flames are being observed during ground-based experiments while near atmospheric pressure nonbuoyant flames will be observed during space flight experiments at microgravity. Finally, premixed laminar flames also are being considered in order to observe some aspects of soot formation for simpler flame conditions than diffusion flames. The main emphasis of current work has been on measurements of soot nucleation and growth in laminar diffusion and premixed flames.

  4. Dispersion in tidally averaged transport equation

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.

    1992-01-01

    A general governing inter-tidal transport equation for conservative solutes has been derived without invoking the weakly nonlinear approximation. The governing inter-tidal transport equation is a convection-dispersion equation in which the convective velocity is a mean Lagrangian residual current, and the inter-tidal dispersion coefficient is defined by a dispersion patch. When the weakly nonlinear condition is violated, the physical significance of the Stokes' drift, as used in tidal dynamics, becomes questionable. For nonlinear problems, analytical solutions for the mean Lagrangian residual current and for the inter-tidal dispersion coefficient do not exist; they must be determined numerically. A rectangular tidal inlet with a constriction is used in the first example. The solutions of the residual currents and the computed properties of the inter-tidal dispersion coefficient are used to illuminate the mechanisms of the inter-tidal transport processes. Then, the present formulation is tested in a geometrically complex tidal estuary – San Francisco Bay, California. The computed inter-tidal dispersion coefficients are in the range between 5×10⁴ and 5×10⁶ cm²/s, which is consistent with the values reported in the literature.

  5. Essentially nonoscillatory postprocessing filtering methods

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1992-01-01

    High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods denoted by Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points which is determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar equations and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results based on nonstandard central difference schemes, which exactly preserve entropy, and have been recently shown generally not to be weakly convergent to a solution of the conservation law, are also obtained using our filters.
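
    A toy sketch of the ENOLS idea, under simplifying assumptions (the ENO stencil selection is approximated here by a total-variation smoothness indicator, and the least-squares step is an ordinary polynomial fit; parameters are illustrative):

```python
import numpy as np

def enols_filter(x, u, width=5, degree=2):
    """ENO-flavored least-squares filter (illustrative sketch).

    For each node, candidate stencils of `width` points containing the node
    are compared; the one with the smallest total variation (a stand-in for
    the ENO smoothness selection) is used for a least-squares polynomial fit,
    and the filtered value is that fit evaluated at the node.
    """
    n = len(u)
    filtered = np.empty(n)
    for i in range(n):
        best = None
        for s in range(max(0, i - width + 1), min(i, n - width) + 1):
            sl = slice(s, s + width)
            tv = np.sum(np.abs(np.diff(u[sl])))  # smoothness indicator
            if best is None or tv < best[0]:
                best = (tv, sl)
        sl = best[1]
        coeffs = np.polyfit(x[sl], u[sl], degree)   # least-squares fit
        filtered[i] = np.polyval(coeffs, x[i])
    return filtered
```

    On already-smooth data the filter is nearly the identity, which is the desired behavior away from sharp transitions; near an oscillatory region the biased stencil choice avoids fitting across the jump.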

  6. Angular ellipticity correlations in a composite alignment model for elliptical and spiral galaxies and inference from weak lensing

    NASA Astrophysics Data System (ADS)

    Tugendhat, Tim M.; Schäfer, Björn Malte

    2018-05-01

    We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing for a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak-lensing signal. We find that elliptical galaxies cause a contribution to the weak-lensing dominated ellipticity correlation on intermediate angular scales between ℓ ≃ 40 and ℓ ≃ 400 before that of spiral galaxies dominates on higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI-alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments unaccounted for; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point into different directions in parameter space, such that in some cases one can observe a partial cancellation effect. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.

  7. Fine gravel controls hydrologic and erodibility responses to trampling disturbance for coarse-textured soils with weak cyanobacterial crusts.

    USDA-ARS?s Scientific Manuscript database

    We compared short-term effects of lug-soled boot trampling disturbance on water infiltration and soil erodibility on coarse-textured soils covered by a mixture of fine gravel and coarse sand over weak cyanobacterially-dominated biological soil crusts. Trampling significantly reduced final infiltrati...

  8. Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments

    NASA Astrophysics Data System (ADS)

    Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping

    2018-03-01

    The quantum memory-assisted entropic uncertainty relation (QMA EUR) shows that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigate the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the QMA EUR oscillates in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we present two schemes, based on prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the peak values of the QMA EUR dynamics in a non-Markovian environment over long periods of time, but is ineffectual on the minima of the dynamics. The posterior weak measurement reversal, however, has the opposite effect on the dynamics. Moreover, the success probability depends entirely on the quantum measurement strength. We hope that our proposal can be verified experimentally and might find future applications in quantum information processing.

  9. Attenuation of harmonic noise in vibroseis data using Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Sharma, S. P.; Tildy, Peter; Iranpour, Kambiz; Scholtz, Peter

    2009-04-01

    Processing of high-productivity vibroseis seismic data (such as slip-sweep acquisition records) suffers from the well-known disadvantage of harmonic distortion. Harmonic distortions are observed after cross-correlation of the recorded seismic signal with the pilot sweep and affect the signals in negative time (before the actual strong reflection event). Weak reflection events of the earlier sweeps falling in the negative-time window of the cross-correlation sequence are masked by harmonic distortions. Though the amplitude of the harmonic distortion is small (up to 10-20 %) compared to the fundamental amplitude of the reflection events, it is significant enough to mask weak reflected signals. Eliminating harmonic noise due to source signal distortion from the cross-correlated seismic trace has been a challenge ever since vibratory sources came into use, and it still needs improvement. An approach has been worked out that minimizes the level of harmonic distortion by designing a signal similar to the harmonic distortion. An arbitrary-length filter is optimized using the Simulated Annealing global optimization approach to design the harmonic signal. The approach convolves a ratio trace (the ratio of the harmonics to the fundamental sweep) with the correlated "positive time" recorded signal and an arbitrary filter. A synthetic data study revealed that this procedure of designing a signal similar to the desired harmonics, by convolving a suitable filter with the theoretical ratio of harmonics to fundamental sweep, helps in reducing the problem of harmonic distortion. Once we generate such a signal for a vibroseis source using an optimized filter, the filter can be used to generate harmonics, which can be subtracted from the main cross-correlated trace to obtain a better, undistorted image of the subsurface.
Designing the predicted harmonics to reduce the energy in the trace, by considering the weak reflections and observed harmonics together, yields the desired result (resolution of the weak reflected signal from the harmonic distortion). As the optimization proceeds, difference plots of the desired and predicted harmonics show the weak reflections gradually emerging from the harmonic distortion during later iterations of the global optimization. The procedure is applied to resolving weak reflections from a number of traces considered together. A more precise design of the harmonics requires longer SA computation times, which is impractical for voluminous seismic data. However, the objective of resolving the weak reflection signal within the strong harmonic noise can be achieved with fast computation by using a faster cooling schedule and fewer iterations and moves in the simulated annealing procedure. This process could help in reducing the harmonic distortion and achieving the objective of recovering the lost weak reflection events in the cross-correlated seismic traces. Acknowledgements: The research was supported under the European Marie Curie Host Fellowships for Transfer of Knowledge (TOK) Development Host Scheme (contract no. MTKD-CT-2006-042537).
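
    A generic simulated-annealing sketch of the filter-optimization step described above, on a toy problem (the reference trace, target, and 3-tap filter are synthetic illustrations, not the authors' data or cost function):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing(cost, x0, n_iter=20000, step=0.1, t0=1.0, cooling=0.999):
    """Generic simulated-annealing minimizer with a geometric cooling schedule."""
    x, c = x0.copy(), cost(x0)
    best_x, best_c = x.copy(), c
    t = t0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.shape)  # random move
        cc = cost(cand)
        # Metropolis acceptance: always downhill, occasionally uphill.
        if cc < c or rng.random() < np.exp(-(cc - c) / t):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x.copy(), c
        t *= cooling
    return best_x, best_c

# Toy stand-in for the harmonic-matching step: find a short filter f such
# that a reference trace convolved with f reproduces a target trace.
ref = rng.standard_normal(50)
true_f = np.array([0.5, -0.3, 0.2])
target = np.convolve(ref, true_f)
cost = lambda f: np.sum((np.convolve(ref, f) - target) ** 2)
f_opt, c_opt = simulated_annealing(cost, np.zeros(3))
```

    A faster cooling schedule (smaller `cooling`) and fewer iterations trade accuracy of the designed filter for run time, which is the trade-off the abstract describes for voluminous data.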

  10. A nodally condensed SUPG formulation for free-surface computation of steady-state flows constrained by unilateral contact - Application to rolling

    NASA Astrophysics Data System (ADS)

    Arora, Shitij; Fourment, Lionel

    2018-05-01

    In the context of the simulation of industrial hot forming processes, the resulting time-dependent thermo-mechanical multi-field problem (v →,p ,σ ,ɛ ) can be sped up by 10-50 times using steady-state methods when compared to conventional incremental methods. Steady-state techniques have been used in the past, but only on simple configurations with structured meshes, whereas modern problems involve complex configurations, unstructured meshes and parallel computing. These methods remove the time dependency from the equations but introduce an additional unknown into the problem: the steady-state shape. This steady-state shape x → can be computed as a geometric correction t → on the domain X → by solving the weak form of the steady-state equation v →.n →(t →)=0 using a Streamline Upwind Petrov Galerkin (SUPG) formulation. There exists a strong coupling between the domain shape and the material flow; hence, a two-step fixed-point iterative resolution algorithm was proposed that involves (1) the computation of the flow field from the resolution of the thermo-mechanical equations on a prescribed domain shape and (2) the computation of the steady-state shape for an assumed velocity field. The contact equations are introduced in penalty form both during the flow computation and during the free-surface correction. The fact that the contact description is inhomogeneous, i.e., defined in nodal form in the former and in weighted-residual form in the latter, is assumed to be critical to the convergence of certain problems. Thus, the notion of nodal collocation is invoked in the weak form of the surface-correction equation to homogenize the contact coupling. The surface-correction algorithm is tested on certain analytical test cases and the contact coupling is tested on some hot rolling problems.

  11. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parent, Bernard, E-mail: parent@pusan.ac.kr; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  12. SU-E-J-120: Comparing 4D CT Computed Ventilation to Lung Function Measured with Hyperpolarized Xenon-129 MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Chen, Q

    2015-06-15

    Purpose: To correlate ventilation parameters computed from 4D CT to ventilation, perfusion, and gas exchange measured with hyperpolarized Xenon-129 MRI for a set of lung cancer patients. Methods: Hyperpolarized Xe-129 MRI lung scans were acquired for lung cancer patients, before and after radiation therapy, measuring ventilation, perfusion, and gas exchange. In the standard clinical workflow, these patients also received 4D CT scans before treatment. Ventilation was computed from 4D CT using deformable image registration (DIR). All phases of the 4D CT scan were registered using a B-spline deformable registration. Ventilation at the voxel level was then computed for each phase based on a Jacobian volume expansion metric, yielding phase-sorted ventilation images. Ventilation maps based upon 4D CT and Xe-129 MRI were co-registered, allowing qualitative visual comparison and quantitative comparison via the Pearson correlation coefficient. Results: Analysis shows a weak correlation between hyperpolarized Xe-129 MRI and 4D CT DIR ventilation, with a Pearson correlation coefficient of 0.17 to 0.22. The weak correlation could be due to the limitations of 4D CT, the registration algorithms, or the Xe-129 MRI imaging. Further work will refine the DIR parameters to optimize the correlation. Conclusion: Current analysis yields a minimal correlation between 4D CT DIR and Xe-129 MRI ventilation. Funding provided by the 2014 George Amorino Pilot Grant in Radiation Oncology at the University of Virginia.
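
    A minimal sketch of the Jacobian volume-expansion metric mentioned above, assuming the DIR output is available as a voxel displacement field (ux, uy, uz); the names and spacing are illustrative, and the Pearson comparison would then use np.corrcoef on co-registered voxels:

```python
import numpy as np

def jacobian_ventilation(ux, uy, uz, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise ventilation surrogate from a DIR displacement field.

    J = det(I + grad u); J - 1 approximates the fractional volume change
    of each voxel between the registered respiratory phases.
    """
    grads = [np.gradient(u, *spacing) for u in (ux, uy, uz)]  # grads[i][j] = du_i/dx_j
    F = [[grads[i][j] + (1.0 if i == j else 0.0) for j in range(3)]
         for i in range(3)]  # voxel-wise 3x3 deformation gradient
    det = (F[0][0] * (F[1][1] * F[2][2] - F[1][2] * F[2][1])
           - F[0][1] * (F[1][0] * F[2][2] - F[1][2] * F[2][0])
           + F[0][2] * (F[1][0] * F[2][1] - F[1][1] * F[2][0]))
    return det - 1.0
```

    For a uniform 10% dilation the map equals 1.1³ − 1 ≈ 0.331 everywhere; comparison against an MRI ventilation map would then be, e.g., `np.corrcoef(vent.ravel(), mri.ravel())[0, 1]`.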

  13. Recovery Schemes for Primitive Variables in General-relativistic Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Siegel, Daniel M.; Mösta, Philipp; Desai, Dhruv; Wu, Samantha

    2018-05-01

    General-relativistic magnetohydrodynamic (GRMHD) simulations are an important tool to study a variety of astrophysical systems such as neutron star mergers, core-collapse supernovae, and accretion onto compact objects. A conservative GRMHD scheme numerically evolves a set of conservation equations for “conserved” quantities and requires the computation of certain primitive variables at every time step. This recovery procedure constitutes a core part of any conservative GRMHD scheme and it is closely tied to the equation of state (EOS) of the fluid. In the quest to include nuclear physics, weak interactions, and neutrino physics, state-of-the-art GRMHD simulations employ finite-temperature, composition-dependent EOSs. While different schemes have individually been proposed, the recovery problem still remains a major source of error, failure, and inefficiency in GRMHD simulations with advanced microphysics. The strengths and weaknesses of the different schemes when compared to each other remain unclear. Here we present the first systematic comparison of various recovery schemes used in different dynamical spacetime GRMHD codes for both analytic and tabulated microphysical EOSs. We assess the schemes in terms of (i) speed, (ii) accuracy, and (iii) robustness. We find large variations among the different schemes and that there is not a single ideal scheme. While the computationally most efficient schemes are less robust, the most robust schemes are computationally less efficient. More robust schemes may require an order of magnitude more calls to the EOS, which are computationally expensive. We propose an optimal strategy of an efficient three-dimensional Newton–Raphson scheme and a slower but more robust one-dimensional scheme as a fall-back.
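
    As an illustration of the recovery problem, here is a hedged sketch of a one-dimensional Newton-Raphson scheme for the simplest setting: unmagnetized special-relativistic hydrodynamics with an analytic Gamma-law EOS (the tabulated, composition-dependent EOSs discussed above make the real problem much harder). The iteration variable is the pressure.

```python
import math

GAMMA = 5.0 / 3.0  # Gamma-law (ideal gas) EOS

def recover_primitives(D, S, tau, p_guess=1.0, tol=1e-12, max_iter=100):
    """1D Newton-Raphson recovery of (rho, v, p) from conserved (D, S, tau).

    Conserved variables: D = rho*W, S = rho*h*W^2*v, tau = rho*h*W^2 - p - D,
    with W the Lorentz factor and h = 1 + eps + p/rho the specific enthalpy.
    Iterate on p until the EOS pressure matches the trial pressure.
    """
    def f(p):
        v = S / (tau + D + p)
        W = 1.0 / math.sqrt(1.0 - v * v)
        rho = D / W
        eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
        return (GAMMA - 1.0) * rho * eps - p  # EOS pressure minus trial pressure

    p = p_guess
    for _ in range(max_iter):
        h = 1e-8 * max(abs(p), 1.0)
        dfdp = (f(p + h) - f(p - h)) / (2.0 * h)  # numerical derivative
        dp = -f(p) / dfdp
        p += dp
        if abs(dp) < tol * max(abs(p), 1.0):
            break
    v = S / (tau + D + p)
    W = 1.0 / math.sqrt(1.0 - v * v)
    return D / W, v, p
```

    Each Newton step here costs two extra EOS evaluations for the derivative, which is exactly the cost trade-off the abstract highlights for tabulated EOSs.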

  14. Synchronization crossover of polariton condensates in weakly disordered lattices

    NASA Astrophysics Data System (ADS)

    Ohadi, H.; del Valle-Inclan Redondo, Y.; Ramsay, A. J.; Hatzopoulos, Z.; Liew, T. C. H.; Eastham, P. R.; Savvidis, P. G.; Baumberg, J. J.

    2018-05-01

    We demonstrate that the synchronization of a lattice of solid-state condensates when intersite tunneling is switched on depends strongly on the weak local disorder. This finding is vital for implementation of condensate arrays as computation devices. The condensates here are nonlinear bosonic fluids of exciton-polaritons trapped in a weakly disordered Bose-Hubbard potential, where the nearest-neighboring tunneling rate (Josephson coupling) can be dynamically tuned. The system can thus be tuned from a localized to a delocalized fluid as the number density or the Josephson coupling between nearest neighbors increases. The localized fluid is observed as a lattice of unsynchronized condensates emitting at different energies set by the disorder potential. In the delocalized phase, the condensates synchronize and long-range order appears, evidenced by narrowing of momentum and energy distributions, new diffraction peaks in momentum space, and spatial coherence between condensates. Our paper identifies similarities and differences of this nonequilibrium crossover to the traditional Bose-glass to superfluid transition in atomic condensates.

  15. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
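
    The Wang-Landau half of the combined method can be illustrated on a toy system whose density of states is known exactly (N independent two-state sites, so g(E) = C(N, E)); the reaction-ensemble coupling is omitted in this sketch, and all parameters are illustrative:

```python
import math
import random

random.seed(1)

def wang_landau(n_sites=10, flat=0.8, ln_f_final=1e-4):
    """Wang-Landau estimate of ln g(E) for N independent two-state sites.

    E counts the 'up' sites, so exactly g(E) = C(N, E); the flat-histogram
    random walk in energy recovers this without knowing it in advance.
    """
    spins = [0] * n_sites
    E = 0
    ln_g = [0.0] * (n_sites + 1)
    hist = [0] * (n_sites + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = random.randrange(n_sites)
            E_new = E + (1 if spins[i] == 0 else -1)
            # Accept with min(1, g(E)/g(E_new)) to flatten the E-histogram.
            if random.random() < math.exp(min(0.0, ln_g[E] - ln_g[E_new])):
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist) > flat * sum(hist) / len(hist):
            hist = [0] * (n_sites + 1)  # histogram is flat: refine the factor
            ln_f /= 2.0
    return ln_g
```

    Normalizing by ln_g[0] gives estimates of ln C(N, E); once ln g is known, the partition sum and observables such as the heat capacity follow by reweighting, which is the advantage the abstract describes.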

  16. Synchronizability of nonidentical weakly dissipative systems

    NASA Astrophysics Data System (ADS)

    Sendiña-Nadal, Irene; Letellier, Christophe

    2017-10-01

    Synchronization is a very generic process commonly observed in a large variety of dynamical systems which, however, has been rarely addressed in systems with low dissipation. Using the Rössler, the Lorenz 84, and the Sprott A systems as paradigmatic examples of strongly, weakly, and non-dissipative chaotic systems, respectively, we show that a parameter or frequency mismatch between two coupled such systems does not affect the synchronizability and the underlying structure of the joint attractor in the same way. By computing the Shannon entropy associated with the corresponding recurrence plots, we were able to characterize how two coupled nonidentical chaotic oscillators organize their dynamics in different dissipation regimes. While for strongly dissipative systems, the resulting dynamics exhibits a Shannon entropy value compatible with the one having an average parameter mismatch, for weak dissipation synchronization dynamics corresponds to a more complex behavior with higher values of the Shannon entropy. In comparison, conservative dynamics leads to a less rich picture, providing either similar chaotic dynamics or oversimplified periodic ones.
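
    A minimal sketch of the recurrence-plot Shannon entropy diagnostic used above, for a scalar time series without embedding (the paper works with full trajectories; the threshold and minimum line length here are illustrative):

```python
import numpy as np

def recurrence_entropy(x, eps, l_min=2):
    """Shannon entropy of diagonal-line lengths in a recurrence plot.

    R[i, j] = 1 when |x_i - x_j| < eps; the length distribution of diagonal
    runs of ones (off the main diagonal) measures how regularly the signal
    revisits neighborhoods of its state space.
    """
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    lengths = []
    n = len(x)
    for k in range(1, n):  # diagonals above the main one (R is symmetric)
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:
            if v:
                run += 1
            else:
                if run >= l_min:
                    lengths.append(run)
                run = 0
    if not lengths:
        return 0.0
    counts = np.bincount(lengths)[l_min:]
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))
```

    Higher entropy indicates a broader mix of diagonal line lengths, i.e., richer dynamics, which is how the abstract distinguishes the weakly dissipative from the strongly dissipative regime.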

  17. [Dropped head syndrome as first manifestation of primary hyperparathyroid myopathy].

    PubMed

    Ota, Kiyobumi; Koseki, Sayo; Ikegami, Kenji; Onishi, Iichiroh; Tomimitsu, Hiyoryuki; Shintani, Shuzo

    2018-03-28

    A 75-year-old woman presented with a 6-month history of progressive dropped head syndrome. Neurological examination revealed moderate weakness of the neck flexors and extensors and mild weakness of the proximal appendicular muscles, with normal deep tendon reflexes. Needle electromyography showed short-duration, low-amplitude motor unit potentials. No fibrillation potentials or positive sharp waves were seen. Biopsy of the deltoid muscle was normal. Laboratory studies showed elevated levels of serum calcium (11.8 mg/dl, upper limit of normal 10.1) and intact parathyroid hormone (104 pg/ml, upper limit of normal 65), and a decreased level of serum phosphorus (2.3 mg/dl, lower limit of normal 2.7). Ultrasonography and contrast-enhanced computed tomography revealed a parathyroid tumor. The tumor was removed surgically. Pathological examination proved the tumor to be a parathyroid adenoma. The dropped head and muscle weakness improved dramatically within a week after the operation. Although hyperparathyroidism is a rare cause of dropped head syndrome, neurologists must recognize it as a treatable one.

  18. Weighted interior penalty discretization of fully nonlinear and weakly dispersive free surface shallow water flows

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Marche, Fabien

    2018-02-01

    In this paper, we further investigate the use of a fully discontinuous Finite Element discrete formulation for the study of shallow water free surface flows in the fully nonlinear and weakly dispersive flow regime. We consider a decoupling strategy in which we approximate the solutions of the classical shallow water equations supplemented with a source term globally accounting for the non-hydrostatic effects. This source term can be computed through the resolution of elliptic second-order linear sub-problems, which only involve second-order partial derivatives in space. We then introduce an associated Symmetric Weighted Internal Penalty discrete bilinear form, which allows dealing with the discontinuous nature of the elliptic problem's coefficients in a stable and consistent way. Similar discrete formulations are also introduced for several recent optimized fully nonlinear and weakly dispersive models. These formulations are validated against several benchmarks involving h-convergence, p-convergence and comparisons with experimental data, showing optimal convergence properties.

  19. Dynamic properties of molecular motors in burnt-bridge models

    NASA Astrophysics Data System (ADS)

    Artyomov, Maxim N.; Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B.

    2007-08-01

    Dynamic properties of molecular motors that fuel their motion by actively interacting with underlying molecular tracks are studied theoretically via discrete-state stochastic 'burnt-bridge' models. The transport of the particles is viewed as an effective diffusion along one-dimensional lattices with periodically distributed weak links. When an unbiased random walker passes the weak link it can be destroyed ('burned') with probability p, providing a bias in the motion of the molecular motor. We present a theoretical approach that allows one to calculate exactly all dynamic properties of motor proteins, such as velocity and dispersion, under general conditions. It is found that dispersion is a decreasing function of the concentration of bridges, while the dependence of dispersion on the burning probability is more complex. Our calculations also show a gap in dispersion for very low concentrations of weak links or for very low burning probabilities which indicates a dynamic phase transition between unbiased and biased diffusion regimes. Theoretical findings are supported by Monte Carlo computer simulations.
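
    A minimal Monte Carlo sketch of the burnt-bridge model described above; the burning convention used here (only rightward crossings burn, and a burnt link reflects the walker) is one common, illustrative choice, and velocity is simply net displacement per step:

```python
import random

random.seed(7)

def burnt_bridge_velocity(n_steps=200000, spacing=10, p_burn=0.5):
    """Mean velocity of an unbiased hopper on a track with burnable weak links.

    Weak links are the edges whose left site index is a multiple of `spacing`.
    Crossing an intact weak link rightward burns it with probability p_burn;
    a burnt link can never be recrossed leftward, so the otherwise unbiased
    walk is rectified into net forward motion.
    """
    pos = 0
    burnt = set()
    for _ in range(n_steps):
        step = random.choice((-1, 1))
        link = min(pos, pos + step)  # edge between sites link and link+1
        if link % spacing == 0:      # this edge is a weak link
            if step == -1 and link in burnt:
                continue             # reflected: the bridge behind is gone
            if step == 1 and random.random() < p_burn:
                burnt.add(link)
        pos += step
    return pos / n_steps

velocity = burnt_bridge_velocity()
```

    Dispersion could be estimated the same way from the variance of the final position over many independent runs, which is the quantity whose dependence on bridge concentration and burning probability the paper analyzes exactly.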

  20. Second Order Boltzmann-Gibbs Principle for Polynomial Functions and Applications

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia; Jara, Milton; Simon, Marielle

    2017-01-01

    In this paper we give a new proof of the second order Boltzmann-Gibbs principle introduced in Gonçalves and Jara (Arch Ration Mech Anal 212(2):597-644, 2014). The proof does not impose the knowledge on the spectral gap inequality for the underlying model and it relies on a proper decomposition of the antisymmetric part of the current of the system in terms of polynomial functions. In addition, we fully derive the convergence of the equilibrium fluctuations towards (1) a trivial process in case of super-diffusive systems, (2) an Ornstein-Uhlenbeck process or the unique energy solution of the stochastic Burgers equation, as defined in Gubinelli and Jara (SPDEs Anal Comput (1):325-350, 2013) and Gubinelli and Perkowski (Arxiv:1508.07764, 2015), in case of weakly asymmetric diffusive systems. Examples and applications are presented for weakly and partial asymmetric exclusion processes, weakly asymmetric speed change exclusion processes and hamiltonian systems with exponential interactions.

  1. Nearly deterministic quantum Fredkin gate based on weak cross-Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Wu, Yun-xiang; Zhu, Chang-hua; Pei, Chang-xing

    2016-09-01

    A scheme of an optical quantum Fredkin gate is presented based on weak cross-Kerr nonlinearity. By an auxiliary coherent state with the cross-Kerr nonlinearity effect, photons can interact with each other indirectly, and a non-demolition measurement for photons can be implemented. Combined with homodyne detection, classical feedforward, polarization beam splitters and Pauli-X operations, a controlled-path gate is constructed. Furthermore, a quantum Fredkin gate is built based on the controlled-path gate. The proposed Fredkin gate is simple in structure and feasible with current experimental technology.

  2. Controlling Explosive Sensitivity of Energy-Related Materials by Means of Production and Processing in Electromagnetic Fields

    NASA Astrophysics Data System (ADS)

    Rodzevich, A. P.; Gazenaur, E. G.; Kuzmina, L. V.; Krasheninin, V. I.; Sokolov, P. N.

    2016-08-01

    The present work is one of the first attempts worldwide to develop effective methods for controlling the explosive sensitivity of energy-related materials with the help of weak electric (up to 1 mV/cm) and magnetic (0.001 T) fields. The resulting experimental data can be used for the purposeful alteration of explosive materials' reactivity, which is of great practical importance. The proposed technology of producing and processing materials in a weak electric field allows forecasting the long-term stability of these materials under various energy impacts.

  3. Three-dimensional dynamic thermal imaging of structural flaws by dual-band infrared computed tomography

    NASA Astrophysics Data System (ADS)

    DelGrande, Nancy; Dolan, Kenneth W.; Durbin, Philip F.; Gorvad, Michael R.; Kornblum, B. T.; Perkins, Dwight E.; Schneberk, Daniel J.; Shapiro, Arthur B.

    1993-11-01

    We discuss three-dimensional dynamic thermal imaging of structural flaws using dual-band infrared (DBIR) computed tomography. Conventional (single-band) thermal imaging is difficult to interpret: it yields imprecise or qualitative information, e.g., when subsurface flaws produce weak heat flow anomalies masked by surface clutter. We use the DBIR imaging technique to clarify interpretation. We capture the time history of surface temperature difference patterns at the epoxy-glue disbond site of a flash-heated lap joint. This type of flawed structure played a significant role in causing damage to the Aloha Airlines fuselage on the aged Boeing 737 jetliner. The magnitude of the surface-temperature difference versus time for a 0.1 mm air layer compared to a 0.1 mm glue layer varies from 0.2 to 1.6 °C for simultaneously scanned front and back surfaces. The scans are taken every 42 ms from 0 to 8 s after the heat flash. By ratioing 3-5 μm and 8-12 μm DBIR images, we located surface temperature patterns from weak heat flow anomalies at the disbond site and removed the emissivity mask from surface paint or roughness variations. Measurements compare well with calculations based on TOPAZ3D, a three-dimensional finite element computer model. We combine infrared, ultrasound and x-ray imaging methods to study the heat transfer, bond quality and material differences associated with the lap joint disbond site.
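The reason ratioing two infrared bands removes the emissivity "mask" is that, for a graybody, the measured radiance in each band is the emissivity times the Planck radiance, so the band ratio depends on temperature only. The sketch below illustrates this with the Planck law for two monochromatic bands; it ignores band integration and wavelength-dependent emissivity, which the real instrument must contend with.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(lam, T):
    """Spectral radiance B(lam, T) of a blackbody (W sr^-1 m^-3)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def band_ratio(T, eps, lam1=4e-6, lam2=10e-6):
    """Ratio of graybody radiances in two bands; emissivity eps cancels."""
    return (eps * planck(lam1, T)) / (eps * planck(lam2, T))

# Two surfaces at the same temperature but different emissivities
# (e.g. paint vs. roughness variations) give the same band ratio:
r_paint = band_ratio(300.0, eps=0.95)
r_rough = band_ratio(300.0, eps=0.60)
assert np.isclose(r_paint, r_rough)
```

The ratio is also a monotonic function of temperature (the short-wavelength band brightens faster), which is what lets the ratio image serve as an emissivity-free temperature map.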

  4. Computer Simulation Is an Undervalued Tool for Genetic Analysis: A Historical View and Presentation of SHIMSHON – A Web-Based Genetic Simulation Package

    PubMed Central

    Greenberg, David A.

    2011-01-01

    Computer simulation methods are under-used tools in genetic analysis because simulation approaches have been portrayed as inferior to analytic methods. Even when simulation is used, its advantages are not fully exploited. Here, I present SHIMSHON, our package of genetic simulation programs that have been developed, tested, used for research, and used to generate data for Genetic Analysis Workshops (GAW). These simulation programs, now web-accessible, can be used by anyone to answer questions about designing and analyzing genetic disease studies for locus identification. This work has three foci: (1) the historical context of SHIMSHON's development, suggesting why simulation has not been more widely used so far; (2) the advantages of simulation: computer simulation helps us to understand how genetic analysis methods work. It has advantages for understanding disease inheritance and methods for gene searches. Furthermore, simulation methods can be used to answer fundamental questions that either cannot be answered by analytical approaches or cannot even be defined until the problems are identified and studied using simulation; (3) an argument that, because simulation was not accepted, there was a failure to grasp the meaning of some simulation-based studies of linkage. This may have contributed to perceived weaknesses in linkage analysis; weaknesses that did not, in fact, exist. PMID:22189467

  5. Assessment and protection of esophageal mucosal integrity in patients with heartburn without esophagitis.

    PubMed

    Woodland, Philip; Lee, Chung; Duraisamy, Yasotha; Duraysami, Yasotha; Farré, Ricard; Dettmar, Peter; Sifrim, Daniel

    2013-04-01

    Intact esophageal mucosal integrity is essential to prevent symptoms during gastroesophageal reflux events. Approximately 70% of patients with heartburn have macroscopically normal esophageal mucosa. In patients with heartburn, persistent functional impairment of esophageal mucosal barrier integrity may underlie remaining symptoms. Topical protection of a functionally vulnerable mucosa may be an attractive therapeutic strategy. We aimed to evaluate esophageal mucosal functional integrity in patients with heartburn without esophagitis, and to test the feasibility of an alginate-based topical mucosal protection. Three distal esophageal biopsies were obtained from 22 patients with heartburn symptoms and 22 control subjects. In mini-Ussing chambers, the change in transepithelial electrical resistance (TER) of biopsies when exposed to neutral, weakly acidic, and acidic solutions was measured. The experiment was repeated in a further 10 patients after pretreatment of biopsies with sodium alginate, viscous control, or liquid control "protectant" solutions. Biopsy exposure to neutral solution caused no change in TER. Exposure to weakly acidic and acidic solutions caused a greater reduction in TER in patients than in controls (weakly acidic -7.2% (95% confidence interval (CI) -9.9 to -4.5) vs. 3.2% (-2.2 to 8.6), P<0.05; acidic -22.8% (-31.4 to -14.1) vs. -9.4% (-17.2 to -1.6), P<0.01). Topical pretreatment with alginate but not with control solutions prevented the acid-induced decrease in TER (-1% (-5.9 to 3.9) vs. -13.5% (-24.1 to -3.0) vs. -13.2% (-21.7 to -4.8), P<0.05). Esophageal mucosa in patients with heartburn without esophagitis shows distinct vulnerability to acid and weakly acidic exposures. Experiments in vitro suggest that such vulnerable mucosa may be protected by application of an alginate-containing topical solution.

  6. Benchmark calculations of excess electrons in water cluster cavities: balancing the addition of atom-centered diffuse functions versus floating diffuse functions.

    PubMed

    Zhang, Changzhe; Bu, Yuxiang

    2016-09-14

    Diffuse functions have proved to be especially crucial for the accurate characterization of excess electrons, which are usually bound weakly in intermolecular zones far away from the nuclei. To examine the effects of diffuse functions on the nature of cavity-shaped excess electrons in water cluster surroundings, the HOMO and LUMO distributions, vertical detachment energies (VDEs) and visible absorption spectra of two selected (H2O)24(-) isomers are investigated in the present work. Two main types of diffuse functions are considered in the calculations: Pople-style atom-centered diffuse functions and ghost-atom-based floating diffuse functions. It is found that augmentation of atom-centered diffuse functions contributes to a better description of the HOMO (corresponding to the VDE convergence), in agreement with previous studies, but also leads to unreasonably diffuse character of the LUMO, with significant red-shifts in the visible spectra; this is against the conventional view that the more diffuse functions, the better the results. The design of extra floating functions for excess electrons is also systematically discussed, indicating that floating diffuse functions are necessary not only for reducing the computational cost but also for improving the accuracy of both the HOMO and the LUMO. Thus, basis sets combining partial atom-centered diffuse functions with floating diffuse functions are recommended for a reliable description of weakly bound electrons. This work presents an efficient way to characterize the electronic properties of weakly bound electrons accurately, by balancing the addition of atom-centered diffuse functions against floating diffuse functions and the computational cost against the accuracy of the results; it should thus be very useful in calculations on various solvated-electron systems and weakly bound anionic systems.

  7. The inherent weaknesses in industrial control systems devices; hacking and defending SCADA systems

    NASA Astrophysics Data System (ADS)

    Bianco, Louis J.

    The North American Electric Reliability Corporation (NERC) is about to enforce its NERC Critical Infrastructure Protection (CIP) Version Five and Six requirements on July 1st, 2016. The NERC CIP requirements are a set of cyber security standards designed to protect cyber assets essential to the reliable operation of the electric grid. The new Version Five and Six requirements are a major revision of the Version Three (currently enforced) requirements. The new requirements also bring substations into scope alongside energy control centers. When the Version Five requirements were originally drafted they were vague, prompting in-depth discussions throughout the industry. The ramifications of these requirements have made owners look at their systems in depth, questioning how much money it will take to meet them. Some owners saw backing down from routable networks to non-routable networks as a means to save money, since they would be held to fewer requirements within the standards. Other owners saw removing routable connections as a sound security move. The purpose of this research was to uncover the inherent weaknesses in industrial control system (ICS) devices, to show how ICS devices can be hacked, and to identify potential protections for these critical infrastructure devices. In addition, this research aimed to validate the decision to move from external routable connectivity to non-routable connectivity as a security measure and not merely as a means of savings. The results reveal that, in order to ultimately protect industrial control systems, they must be removed from the Internet and all bi-directional external routable connections must be removed. Furthermore, non-routable serial connections should be utilized, and these connections should be encrypted at different layers of the OSI model. The research concluded that most weaknesses in SCADA systems are due to the inherent weaknesses of ICS devices and that, because of these weaknesses, human intervention is the biggest threat to SCADA systems.

  8. Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins

    PubMed Central

    Gunner, MR; Baker, Nathan A.

    2017-01-01

    Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energies of these reactions are dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization-state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as to outline directions for future theoretical and computational research. PMID:27497160

  9. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  10. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
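One widely used scalar summary of LR-method performance of the kind discussed here is the log-likelihood-ratio cost, Cllr, which penalizes misleading LRs for same-source and different-source comparisons. The sketch below is a generic illustration (the paper's specific performance framework is richer than this single number).

```python
import math

def cllr(lr_ss, lr_ds):
    """Cllr from lists of LRs for same-source (ss) and different-source (ds) pairs."""
    pen_ss = sum(math.log2(1.0 + 1.0 / lr) for lr in lr_ss) / len(lr_ss)
    pen_ds = sum(math.log2(1.0 + lr) for lr in lr_ds) / len(lr_ds)
    return 0.5 * (pen_ss + pen_ds)

# A completely uninformative method (always LR = 1) scores exactly 1.0:
baseline = cllr([1.0, 1.0], [1.0, 1.0])

# A discriminating, well-calibrated method (large LRs for same-source pairs,
# small LRs for different-source pairs) scores well below 1:
good = cllr([100.0, 50.0], [0.01, 0.02])
```

Values of Cllr near 0 indicate strong, well-calibrated evidence; values at or above 1 indicate a method no better than reporting no evidence at all.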

  11. Extension of transonic flow computational concepts in the analysis of cavitated bearings

    NASA Technical Reports Server (NTRS)

    Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.

    1990-01-01

    An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.

  12. Designing future dark energy space missions. II. Photometric redshift of space weak lensing optimized surveys

    NASA Astrophysics Data System (ADS)

    Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.

    2011-08-01

    Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications to the gravitation laws, by accurately measuring the expansion of the Universe and the growth of structures. We need to optimize the return from future dark energy surveys to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy has to be defined according to both the photometric-redshift and shape-measurement accuracy. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF, and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area while accounting in particular for Galactic absorption and the variation of the Zodiacal light across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of the u-band photometry quality to optimize the photometric redshift and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what could be an optimal weak-lensing dark energy mission based on FoM calculation. We find that we can derive the most accurate photometric redshifts for the bulk of the faint galaxy population when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near infrared).
    Assuming a five-year mission duration, a mirror size of 1.5 m and a 0.5 deg² FOV with a visible pixel scale of 0.15'', we find that a homogeneous survey reaching a depth of I_AB = 25.6 (10σ) with a sky coverage of ~11 000 deg² maximizes the weak lensing FoM. The effective number density of galaxies used for WL is then ~45 gal/arcmin², which is at least a factor of two higher than in ground-based surveys. Conclusions: This study demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future weak-lensing space dark energy mission.

  13. Universality of the Unruh effect

    NASA Astrophysics Data System (ADS)

    Modesto, Leonardo; Myung, Yun Soo; Yi, Sang-Heon

    2018-02-01

    In this paper we prove the universal nature of the Unruh effect in a general class of weakly nonlocal field theories. At the same time we resolve the tension between two conflicting claims published in the literature. Our universality statement rests on two independent computations, based on the canonical formulation and on the path-integral formulation of the quantum theory.

  14. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    ERIC Educational Resources Information Center

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  15. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  16. Preventive Support for Kindergarteners Most At-Risk for Mathematics Difficulties: Computer-Assisted Intervention

    ERIC Educational Resources Information Center

    Salminen, Jonna; Koponen, Tuire; Räsänen, Pekka; Aro, Mikko

    2015-01-01

    Weaknesses in early number skills have been found to be a risk factor for later difficulties in mathematical performance. Nevertheless, only a few intervention studies with young children have been published. In this study, the responsiveness to early support in kindergarteners with most severe difficulties was examined with two different computer…

  17. Recent Developments in Computational Techniques for Applied Hydrodynamics.

    DTIC Science & Technology

    1979-12-07

    Keywords (from the report documentation page): Numerical Method, Fluids, Incompressible Flow, Finite Difference Methods, Poisson Equation, Convective Equations. …weaknesses of the different approaches are analyzed. Finite-difference techniques have particularly attractive properties in this framework. Hence it will be worthwhile to correct, at least partially, the difficulties from which Eulerian and Lagrangian finite-difference techniques suffer, discussed in…
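The finite-difference treatment of the Poisson equation mentioned in the keywords can be illustrated with a minimal 1D sketch: second-order central differences turn -u''(x) = f(x) with homogeneous Dirichlet boundaries into a tridiagonal linear system. This generic textbook example is not taken from the report itself.

```python
import numpy as np

# Solve -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0, on a uniform grid.
n = 99                                     # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)           # chosen so u_exact = sin(pi x)

# Tridiagonal system: (-u[i-1] + 2 u[i] - u[i+1]) / h^2 = f[i]
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

Halving the grid spacing reduces the maximum error by roughly a factor of four, the signature second-order accuracy of central differences.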

  18. Effectiveness of the Computer and Internet Literacy Project in Public High Schools of Tarlac Province, Philippines

    ERIC Educational Resources Information Center

    Lorenzo, Arnold R.

    2016-01-01

    Evaluation is important to gauge the strengths, weaknesses and effectiveness of any activity. This study evaluated the iSchools Project implemented in the Public High Schools of Tarlac Province, Philippines by the Commission on Information and Communications Technology (CICT) in partnership with the selected State Universities and Colleges. Using…

  19. The Study Skills Questionnaire (SSQUES): Preliminary Validation of a Measure for Assessing Students' Perceived Areas of Weakness.

    ERIC Educational Resources Information Center

    McCombs, Barbara L.; Dobrovolny, Jacqueline L.

    The potential reliability and the construct and predictive validity of a 30-item Study Skills Questionnaire (SSQUES) were evaluated for the questionnaire's ability to: (1) predict student performance in a self-paced, individualized, or computer-managed instructional environment, and (2) identify students needing some type of study skills remediation. The study was…

  20. Does Cholecystectomy Increase the Esophageal Alkaline Reflux? Evaluation by Impedance-pH Technique.

    PubMed

    Uyanikoglu, Ahmet; Akyuz, Filiz; Ermis, Fatih; Arici, Serpil; Bas, Gurhan; Cakirca, Mustafa; Baran, Bulent; Mungan, Zeynel

    2012-04-01

    The aim of this study is to investigate the reflux patterns in patients with gallbladder stones and the change in reflux patterns after cholecystectomy in such patients. Fourteen patients with cholecystolithiasis and a control group of 10 healthy subjects were enrolled in this prospective study. Demographic findings, the reflux symptom score scale, and 24-hour impedance-pH values of the 14 cholecystolithiasis cases and the control group were evaluated. The impedance-pH study was repeated 3 months after cholecystectomy. Age, gender, and BMI were not different between the two groups. Total and supine weakly alkaline reflux time (%) (1.0 vs 22.5, P = 0.028; 201.85 vs 9.65, P = 0.012), the longest episodes of total, upright and supine weakly alkaline reflux (11 vs 2, P = 0.025; 8.5 vs 1.0, P = 0.035; 3 vs 0, P = 0.027), total and supine weakly alkaline reflux time in minutes (287.35 vs 75.10, P = 0.022; 62.5 vs 1.4, P = 0.017), and the number of alkaline reflux episodes (162.5 vs 72.5, P = 0.022) were decreased with statistical significance. No statistically significant difference was found in the comparison of symptoms between the subjects in the control group and the patients with cholecystolithiasis in preoperative, postoperative and postcholecystectomy status. Significant reflux symptoms did not occur after cholecystectomy. Postcholecystectomy weakly alkaline reflux was decreased, but impedance pH-metry determined that acid reflux increased after cholecystectomy in the study group.

  1. The challenges of developing computational physics: the case of South Africa

    NASA Astrophysics Data System (ADS)

    Salagaram, T.; Chetty, N.

    2013-08-01

    Most modern scientific research problems are complex and interdisciplinary in nature. It is impossible to study such problems in detail without the use of computation in addition to theory and experiment. Although it is widely agreed that students should be introduced to computational methods at the undergraduate level, it remains a challenge to do this within a full traditional undergraduate curriculum. In this paper, we report on a survey that we conducted of undergraduate physics curricula in South Africa to determine the content and the approach taken in the teaching of computational physics. We also considered the pedagogy of computational physics at the postgraduate and research levels at various South African universities, research facilities and institutions. We conclude that the state of computational physics training in South Africa, especially at the undergraduate teaching level, is generally weak and needs to be given more attention at all universities. Failure to do so will impact negatively on the country's capacity to grow its endeavours in the computational sciences, with negative impacts on research, commerce and industry.

  2. Precision measurements and computations of transition energies in rotationally cold triatomic hydrogen ions up to the midvisible spectral range.

    PubMed

    Pavanello, Michele; Adamowicz, Ludwik; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Polyansky, Oleg L; Tennyson, Jonathan; Szidarovszky, Tamás; Császár, Attila G; Berg, Max; Petrignani, Annemieke; Wolf, Andreas

    2012-01-13

    First-principles computations and experimental measurements of transition energies are carried out for vibrational overtone lines of the triatomic hydrogen ion H(3)(+) corresponding to floppy vibrations high above the barrier to linearity. Action spectroscopy is improved to detect extremely weak visible-light spectral lines on cold trapped H(3)(+) ions. A highly accurate potential surface is obtained from variational calculations using explicitly correlated Gaussian wave function expansions. After nonadiabatic corrections, the floppy H(3)(+) vibrational spectrum is reproduced at the 0.1 cm(-1) level up to 16600 cm(-1).

  3. Parallelized seeded region growing using CUDA.

    PubMed

    Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung

    2014-01-01

    This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, indicating that it can substantially assist segmentation during massive CT screening tests.
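The algorithm being accelerated can be sketched in its classic serial form: starting from a seed pixel, a breadth-first search absorbs 4-connected neighbours whose intensity is within a tolerance of the seed value. This single-threaded Python sketch shows the loop whose growth the paper parallelizes on the GPU; the homogeneity criterion (comparison to the seed value) is a simplification of the usual region-mean test.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, absorbing 4-neighbours within `tol` of the seed value."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True       # absorb the pixel
                q.append((nr, nc))        # and grow from it next
    return mask

img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 0, 9]])
mask = region_grow(img, seed=(0, 0), tol=1.0)
# The grown region covers exactly the seven zero-valued, 4-connected pixels.
```

The queue makes the serial cost proportional to the region size, which is exactly the weakness the CUDA version targets by testing many frontier pixels per iteration.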

  4. Results of data base management system parameterized performance testing related to GSFC scientific applications

    NASA Technical Reports Server (NTRS)

    Carchedi, C. H.; Gough, T. L.; Huston, H. A.

    1983-01-01

    The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products, employing applications similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer-related) influences.

  5. Complex energies and the polyelectronic Stark problem

    NASA Astrophysics Data System (ADS)

    Themelis, Spyros I.; Nicolaides, Cleanthes A.

    2000-12-01

    The problem of computing the energy shifts and widths of ground or excited N-electron atomic states perturbed by weak or strong static electric fields is dealt with by formulating a state-specific complex eigenvalue Schrödinger equation (CESE), where the complex energy contains the field-induced shift and width. The CESE is solved to all orders nonperturbatively, by using separately optimized N-electron function spaces, composed of real and complex one-electron functions, the latter being functions of a complex coordinate. The use of such spaces is a salient characteristic of the theory, leading to economy and manageability of calculation in terms of a two-step computational procedure. The first step involves only Hermitian matrices. The second adds complex functions and the overall computation becomes non-Hermitian. Aspects of the formalism and of computational strategy are compared with those of the complex absorption potential (CAP) method, which was recently applied for the calculation of field-induced complex energies in H and Li. Also compared are the numerical results of the two methods, and the questions of accuracy and convergence that were posed by Sahoo and Ho (Sahoo S and Ho Y K 2000 J. Phys. B: At. Mol. Opt. Phys. 33 2195) are explored further. We draw attention to the fact that, because in the region where the field strength is weak the tunnelling rate (imaginary part of the complex eigenvalue) diminishes exponentially, it is possible for even large-scale nonperturbative complex eigenvalue calculations either to fail completely or to produce seemingly stable results which, however, are wrong. It is in this context that the discrepancy in the width of Li 1s²2s ²S between results obtained by the CAP method and those obtained by the CESE method is interpreted. We suggest that the very-weak-field regime must be computed by the golden rule, provided the continuum is represented accurately.
In this respect, existing one-particle semiclassical formulae seem to be sufficient. In addition to the aforementioned comparisons and conclusions, we present a number of new results from the application of the state-specific CESE theory to the calculation of field-induced shifts and widths of the H n = 3 levels and of the prototypical Be 1s²2s² ¹S state, for a range of field strengths. Using the H n = 3 manifold as the example, it is shown how errors may occur for small values of the field, unless the function spaces are optimized carefully for each level.

  6. Morphologies of mid-IR variability-selected AGN host galaxies

    NASA Astrophysics Data System (ADS)

    Polimera, Mugdha; Sarajedini, Vicki; Ashby, Matthew L. N.; Willner, S. P.; Fazio, Giovanni G.

    2018-05-01

    We use multi-epoch 3.6 and 4.5 μm data from the Spitzer Extended Deep Survey (SEDS) to probe the AGN population among galaxies to redshifts ˜3 via their mid-IR variability. About 1 per cent of all galaxies in our survey contain varying nuclei, 80 per cent of which are likely to be AGN. Twenty-three per cent of mid-IR variables are also X-ray sources. The mid-IR variables have a slightly greater fraction of weakly disturbed morphologies compared to a control sample of normal galaxies. The increased fraction of weakly distorted hosts becomes more significant when we remove the X-ray emitting AGN, while the frequency of strongly disturbed hosts remains similar to the control galaxy sample. These results suggest that mid-IR variability identifies a unique population of obscured, Compton-thick AGN revealing elevated levels of weak distortion among their host galaxies.

  7. Reaction-induced rheological weakening enables oceanic plate subduction.

    PubMed

    Hirauchi, Ken-Ichi; Fukushima, Kumi; Kido, Masanori; Muto, Jun; Okamoto, Atsushi

    2016-08-26

    Earth is the only terrestrial planet in our solar system where an oceanic plate subducts beneath an overriding plate. Although the initiation of plate subduction requires extremely weak boundaries between strong plates, the way in which oceanic mantle rheologically weakens remains unknown. Here we show that shear-enhanced hydration reactions contribute to the generation and maintenance of weak mantle shear zones at mid-lithospheric depths. High-pressure friction experiments on peridotite gouge reveal that in the presence of hydrothermal water, increasing strain and reactions lead to an order-of-magnitude reduction in strength. The rate of deformation is controlled by pressure-solution-accommodated frictional sliding on weak hydrous phyllosilicate (talc), providing a mechanism for the 'cutoff' of the high peak strength at the brittle-plastic transition. Our findings suggest that infiltration of seawater into transform faults with long lengths and low slip rates is an important controlling factor on the initiation of plate tectonics on terrestrial planets.

  8. Are computer and cell phone use associated with body mass index and overweight? A population study among twin adolescents

    PubMed Central

    Lajunen, Hanna-Reetta; Keski-Rahkonen, Anna; Pulkkinen, Lea; Rose, Richard J; Rissanen, Aila; Kaprio, Jaakko

    2007-01-01

    Background Overweight in children and adolescents has reached the dimensions of a global epidemic during recent years. Simultaneously, information and communication technology use has rapidly increased. Methods A population-based sample of Finnish twins born in 1983–1987 (N = 4098) was assessed by self-report questionnaires at age 17 during 2000–2005. The association of overweight (defined by Cole's BMI-for-age cut-offs) with computer and cell phone use and ownership was analyzed by logistic regression, and their association with BMI by linear regression models. The effect of twinship was taken into account by correcting for the clustered sampling of families. All models were adjusted for gender, physical exercise, and parents' education and occupational class. Results The proportion of adolescents who did not have a computer at home decreased from 18% to 8% between 2000 and 2005. Compared to them, having a home computer (without an Internet connection) was associated with a higher risk of overweight (odds ratio 2.3, 95% CI 1.4 to 3.8) and higher BMI (beta coefficient 0.57, 95% CI 0.15 to 0.98). However, having a computer with an Internet connection was not associated with weight status. Belonging to the highest quintile (OR 1.8, 95% CI 1.2 to 2.8) and second-highest quintile (OR 1.6, 95% CI 1.1 to 2.4) of weekly computer use was positively associated with overweight. The proportion of adolescents without a personal cell phone decreased from 12% to 1% between 2000 and 2005. There was a positive linear trend of increasing monthly phone bill with BMI (beta 0.18, 95% CI 0.06 to 0.30), but the association of the cell phone bill with overweight was very weak. Conclusion Time spent using a home computer was associated with an increased risk of overweight. Cell phone use correlated weakly with BMI. Increasing use of information and communication technology may be related to the obesity epidemic among adolescents. PMID:17324280
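
    The odds-ratio figures reported above are straightforward to reproduce. A minimal sketch, assuming a generic 2×2 exposure/outcome table and a Wald-type 95% confidence interval on the log odds ratio; the counts below are invented for illustration and are not the study's data:

    ```python
    import math

    def odds_ratio_wald_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical counts for illustration only (not the study's data):
    # 40 overweight / 360 not among computer owners; 20 / 380 among non-owners.
    or_, lo, hi = odds_ratio_wald_ci(40, 360, 20, 380)
    ```

    The confidence interval is computed on the log scale and exponentiated back, which is why it is asymmetric around the point estimate.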

  9. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computing system includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with the computers to ascertain their power usage, and a system control device communicates with the computers to control that power usage.
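
    The pattern the abstract describes (sensors report per-node power, a control device aggregates and throttles) can be sketched as a simple feedback loop. Everything below, including the function name `throttle_step`, the step size, and the dead-band threshold, is an illustrative assumption rather than the patented mechanism:

    ```python
    def throttle_step(readings_w, budget_w, throttle):
        """One control iteration: raise the throttle when total power exceeds
        the budget, relax it when comfortably under. `throttle` is a fraction
        in [0, 1] applied uniformly to all nodes."""
        total = sum(readings_w)
        if total > budget_w:
            throttle = min(1.0, throttle + 0.05)   # clamp harder
        elif total < 0.9 * budget_w:
            throttle = max(0.0, throttle - 0.05)   # relax
        return throttle                            # dead band: leave unchanged

    readings = [220.0, 240.0, 260.0]  # watts from three hypothetical nodes
    t = throttle_step(readings, budget_w=600.0, throttle=0.0)
    ```

    The dead band between 90% and 100% of the budget prevents the controller from oscillating between throttling and relaxing on every iteration.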

  10. Influence of Hall Effect on Magnetic Control of Stagnation Point Heat Transfer

    NASA Astrophysics Data System (ADS)

    Poggie, Jonathan; Gaitonde, Datta

    2001-11-01

    Electromagnetic control is an appealing possibility for mitigating the thermal loads that occur in hypersonic flight. There was extensive research on this technique in the past (up to about 1970), but enthusiasm waned because of problems of system cost and weight. Renewed interest has arisen recently due to developments in the technology of superconducting magnets and in the understanding of the physics of weakly ionized, non-equilibrium plasmas. A problem of particular interest is the reduction of stagnation point heating during atmospheric entry by magnetic deceleration of the flow in the shock layer. For the case of hypersonic flow over a sphere, a reduction in heat flux has been observed with the application of a dipole magnetic field (Poggie and Gaitonde, AIAA Paper 2001-0196). The Hall effect has a detrimental influence on this control scheme, tending to rotate the current vector out of the circumferential direction and to reduce the impact of the applied magnetic field on the fluid. In the present work we re-examine this problem by using modern computational methods to simulate flow past a hemispherical-nosed vehicle in which an axially oriented magnetic dipole has been placed. The deleterious effects of the Hall current are characterized, and are observed to diminish when the surface of the vehicle is conducting.

  11. Security Analysis and Improvements of Authentication and Access Control in the Internet of Things

    PubMed Central

    Ndibanje, Bruce; Lee, Hoon-Jae; Lee, Sang-Gon

    2014-01-01

    Internet of Things is a ubiquitous concept in which physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices, together with the ability to continuously generate data and transmit it over a network. Hence, the security of the network, data, and sensor devices is a paramount concern in the IoT network as it grows very fast in terms of exchanged data and interconnected sensor nodes. This paper analyses the authentication and access control method used in the Internet of Things presented by Jing et al. (Authentication and Access Control in the Internet of Things. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18–21 June 2012, pp. 588–592). According to our analysis, Jing et al.'s protocol is costly in its message exchanges, and its security assessment is not strong enough for such a protocol. Therefore, we propose improvements to the protocol to close the identified weaknesses. The protocol enhancements provide users with services such as user anonymity, mutual authentication, and secure session key establishment. Finally, the performance and security analysis show that the improved protocol withstands many popular attacks and achieves better efficiency at low communication cost. PMID:25123464
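
    The services the abstract names (mutual authentication and session-key establishment) can be illustrated with a generic HMAC challenge-response handshake. This is a hedged sketch in the spirit of such protocols, not the paper's actual scheme; the identifiers `PSK`, `respond`, and the peer labels are assumptions:

    ```python
    import hmac
    import hashlib
    import secrets

    # Pre-shared key between a sensor node and the gateway (assumption).
    PSK = secrets.token_bytes(32)

    def respond(key, challenge, peer_id):
        """Prove knowledge of `key` by MACing the challenge and peer identity."""
        return hmac.new(key, challenge + peer_id, hashlib.sha256).digest()

    # Gateway challenges the node; node proves knowledge of the PSK.
    n_gw = secrets.token_bytes(16)
    node_proof = respond(PSK, n_gw, b"node-01")

    # Node challenges the gateway in turn (mutual authentication).
    n_node = secrets.token_bytes(16)
    gw_proof = respond(PSK, n_node, b"gateway")

    # Both sides derive the same session key from the two fresh nonces.
    session_key = hmac.new(PSK, n_gw + n_node, hashlib.sha256).digest()
    ```

    Fresh nonces on both sides give replay protection, and deriving the session key from both nonces ensures neither party alone controls it; `hmac.compare_digest` should be used for the verification step to avoid timing side channels.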

  12. You Should Be the Specialist! Weak Mental Rotation Performance in Aviation Security Screeners – Reduced Performance Level in Aviation Security with No Gender Effect

    PubMed Central

    Krüger, Jenny K.; Suchan, Boris

    2016-01-01

    Aviation security screeners analyze a large number of X-ray images per day and seem to be experts in mentally rotating diverse kinds of visual objects. A robust gender effect, that men outperform women in the Vandenberg & Kuse mental rotation task, has been well documented in recent years. In addition, it has been shown that training can positively influence overall task performance. Considering this, the aim of the present study was to investigate whether security screeners show better performance in the Mental Rotation Test (MRT) independently of gender. Forty-seven security screeners of both sexes from two German airports were examined with a computer-based MRT. Their performance was compared to a large sample of control subjects. The well-known gender effect favoring men on mental rotation was significant within the control group. However, the security screeners did not show any sex differences, suggesting an effect of training and professional practice. Surprisingly, this specialized group showed a lower level of overall MRT performance than the control participants. Possible aviation-related influences, such as secondary effects of shift work or expertise, which may cumulatively cause this result, are discussed. PMID:27014142

  13. Security analysis and improvements of authentication and access control in the Internet of Things.

    PubMed

    Ndibanje, Bruce; Lee, Hoon-Jae; Lee, Sang-Gon

    2014-08-13

    Internet of Things is a ubiquitous concept in which physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices, together with the ability to continuously generate data and transmit it over a network. Hence, the security of the network, data, and sensor devices is a paramount concern in the IoT network as it grows very fast in terms of exchanged data and interconnected sensor nodes. This paper analyses the authentication and access control method used in the Internet of Things presented by Jing et al. (Authentication and Access Control in the Internet of Things. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18-21 June 2012, pp. 588-592). According to our analysis, Jing et al.'s protocol is costly in its message exchanges, and its security assessment is not strong enough for such a protocol. Therefore, we propose improvements to the protocol to close the identified weaknesses. The protocol enhancements provide users with services such as user anonymity, mutual authentication, and secure session key establishment. Finally, the performance and security analysis show that the improved protocol withstands many popular attacks and achieves better efficiency at low communication cost.

  14. Enhancing robustness of multiparty quantum correlations using weak measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Uttam, E-mail: uttamsingh@hri.res.in; Mishra, Utkarsh, E-mail: utkarsh@hri.res.in; Dhar, Himadri Shekhar, E-mail: dhar.himadri@gmail.com

    Multipartite quantum correlations are important resources for the development of quantum information and computation protocols. However, the resourcefulness of multipartite quantum correlations in practical settings is limited by their fragility under decoherence due to environmental interactions. Though there exist protocols to protect bipartite entanglement under decoherence, the implementation of such protocols for multipartite quantum correlations has not been sufficiently explored. Here, we study the effect of a local amplitude damping channel on the generalized Greenberger–Horne–Zeilinger state, and use a protocol of optimal reversal quantum weak measurement to protect the multipartite quantum correlations. We observe that the weak measurement reversal protocol enhances the robustness of multipartite quantum correlations. Further, it increases the critical damping value that corresponds to entanglement sudden death. To emphasize the efficacy of the technique in protecting multipartite quantum correlations, we investigate two proximately related quantum communication tasks, namely, quantum teleportation in a one-sender, many-receivers setting and multiparty quantum information splitting, through a local amplitude damping channel. We observe an increase in the average fidelity of both quantum communication tasks under the weak measurement reversal protocol. The method may prove beneficial, for combating external interactions, in other quantum information tasks using multipartite resources. Highlights: • Extension of the weak measurement reversal scheme to protect multiparty quantum correlations. • Protection of multiparty quantum correlations under local amplitude damping noise. • Enhanced fidelity of quantum teleportation in a one-sender, many-receivers setting. • Enhanced fidelity of the quantum information splitting protocol.
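
    The weak-measurement/reversal idea can be demonstrated on a single qubit, which is a simpler stand-in for the paper's multipartite GHZ setting. The sketch below uses the standard qubit protocol (weaken the |1⟩ component before damping, then apply the optimal reversal afterwards); the parameter values are illustrative assumptions:

    ```python
    import numpy as np

    def fidelity_plus(rho):
        """Fidelity of rho with |+> = (|0> + |1>)/sqrt(2)."""
        plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
        return float(np.real(plus @ rho @ plus))

    def damp(rho, d):
        """Amplitude damping channel of strength d (Kraus representation)."""
        A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - d)]])
        A1 = np.array([[0.0, np.sqrt(d)], [0.0, 0.0]])
        return A0 @ rho @ A0.T + A1 @ rho @ A1.T

    plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
    rho0 = np.outer(plus, plus)
    d, pw = 0.5, 0.8          # damping strength, weak-measurement strength

    # Unprotected: the state passes through the damping channel directly.
    f_bare = fidelity_plus(damp(rho0, d))

    # Protected: weak measurement before, optimal reversal after (probabilistic).
    W = np.diag([1.0, np.sqrt(1 - pw)])   # weaken |1> before damping
    pr = pw + d * (1 - pw)                # optimal reversal strength
    R = np.diag([np.sqrt(1 - pr), 1.0])   # un-weaken |0> afterwards
    rho = R @ damp(W @ rho0 @ W.T, d) @ R.T
    rho /= np.trace(rho)                  # renormalize (post-selected outcome)
    f_prot = fidelity_plus(rho)
    ```

    The protected fidelity exceeds the bare one, mirroring the enhancement the abstract reports; the price is that the protocol is probabilistic, succeeding only on the post-selected measurement outcomes.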

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wally Melnitchouk; John Tjon

    We compute the corrections from two-photon and γ-Z exchange in parity-violating elastic electron–proton scattering, used to extract the strange form factors of the proton. We use a hadronic formalism that successfully reconciled the earlier discrepancy in the proton's electric to magnetic form factor ratio, suitably extended to the weak sector. Implementing realistic electroweak form factors, we find effects of the order of 2-3% at Q^2 <~ 0.1 GeV^2, which are largest at backward angles and have a strong Q^2 dependence at low Q^2. Two-boson contributions to the weak axial current are found to be enhanced at low Q^2 and for forward angles. We provide corrections at kinematics relevant for recent and upcoming parity-violating experiments.

  16. Finite Deformation of Magnetoelastic Film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barham, Matthew Ian

    2011-05-31

    A nonlinear two-dimensional theory is developed for thin magnetoelastic films capable of large deformations. This is derived directly from three-dimensional theory. Significant simplifications emerge in the descent from three dimensions to two, permitting the self-field generated by the body to be computed a posteriori. The model is specialized to isotropic elastomers with two material models. First, weak magnetization is investigated, leading to a free energy in which magnetization and deformation are uncoupled. The second model closely couples the magnetization and deformation. Numerical solutions are obtained to equilibrium boundary-value problems in which the membrane is subjected to lateral pressure and an applied magnetic field. An instability is inferred and investigated for the weak-magnetization material model.

  17. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
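
    The reduction to amplitude ODEs described above can be illustrated with the classic Landau amplitude equation dA/dt = σA − βA³, in which linear growth of a disturbance saturates through its weakly nonlinear self-interaction at |A| = √(σ/β). This is a generic stand-in under that assumption, not the paper's actual coupled-mode system:

    ```python
    def landau_step(a, sigma, beta, dt):
        """One RK4 step for dA/dt = sigma*A - beta*A**3 (real amplitude)."""
        f = lambda x: sigma * x - beta * x ** 3
        k1 = f(a)
        k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2)
        k4 = f(a + dt * k3)
        return a + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    sigma, beta, dt = 1.0, 4.0, 0.01
    a = 1e-3                      # small initial disturbance amplitude
    for _ in range(2000):         # integrate to t = 20
        a = landau_step(a, sigma, beta, dt)
    # The amplitude saturates near sqrt(sigma/beta) = 0.5.
    ```

    The cubic term is the simplest expression of a mode correcting its own growth, which is the role the self-interaction terms play in the source-amplitude correction described above.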

  18. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e., an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a nontrivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  19. Mode detuning in systems of weakly coupled oscillators

    NASA Astrophysics Data System (ADS)

    Spencer, Ross L.; Robertson, Richard D.

    2001-11-01

    A system of weakly magnetically coupled oscillating blades is studied experimentally, computationally, and theoretically. It is found that when the uncoupled natural frequencies of the blades are nearly equal, the normal modes produced by the coupling are almost impossible to find experimentally if the random variation level in the system parameters is on the order of (or larger than) the relative differences between mode frequencies. But if the uncoupled natural frequencies are made to vary (detuned) in a smooth way such that the total relative spread in natural frequency exceeds the random variations, normal modes are rather easy to find. And if the detuned uncoupled frequencies of the system are parabolically distributed, the modes are found to be shaped like Hermite functions.
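
    The detuning picture above can be sketched numerically: N oscillators with weak nearest-neighbour coupling and parabolically distributed squared natural frequencies give an eigenvalue problem analogous to a discretized quantum harmonic oscillator, whose eigenvectors are Hermite-function-like. All parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    N = 41
    x = np.linspace(-1.0, 1.0, N)
    k = 0.05                            # weak coupling strength
    omega2 = 1.0 + x ** 2               # parabolically detuned omega_i^2

    # Dynamical matrix for the coupled system:
    # diagonal = omega_i^2 + 2k, nearest-neighbour off-diagonal = -k.
    M = np.diag(omega2 + 2 * k) - k * np.eye(N, k=1) - k * np.eye(N, k=-1)

    # eigh returns eigenvalues in ascending order; columns are normal modes.
    evals, evecs = np.linalg.eigh(M)
    lowest = evecs[:, 0]
    # The lowest mode is nodeless and Gaussian-like (analogue of the Hermite
    # function H_0), peaked at the centre of the parabolic detuning profile.
    ```

    Successive columns of `evecs` acquire one additional node each, mirroring the Hermite-function mode shapes reported in the abstract; shrinking the detuning toward uniform frequencies makes the mode splittings small compared to parameter scatter, which is why the modes become hard to resolve experimentally.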

  20. Next-to-Next-to-Leading-Order QCD Corrections to the Transverse Momentum Distribution of Weak Gauge Bosons

    NASA Astrophysics Data System (ADS)

    Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, E. W. N.; Huss, A.; Walker, D. M.

    2018-03-01

    The transverse momentum spectra of weak gauge bosons and their ratios probe the underlying dynamics and are crucial in testing our understanding of the standard model. They are an essential ingredient in precision measurements, such as the W boson mass extraction. To fully exploit the potential of the LHC data, we compute the second-order [next-to-next-to-leading-order (NNLO)] QCD corrections to the inclusive p_T^W spectrum as well as to the ratios of spectra for W-/W+ and Z/W. We find that the inclusion of NNLO QCD corrections considerably improves the theoretical description of the experimental CMS data and results in a substantial reduction of the residual scale uncertainties.

Top