Sample records for bounded model checking

  1. Bounded Parametric Model Checking for Elementary Net Systems

    NASA Astrophysics Data System (ADS)

    Knapik, Michał; Szreter, Maciej; Penczek, Wojciech

    Bounded Model Checking (BMC) is an efficient verification method for reactive systems. BMC has so far been applied to the verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL, a parametric extension of the existential version of CTL. To this end we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
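
    The core BMC loop behind a translation like this can be sketched generically: unroll the transition relation k steps and ask a solver for a counterexample. The sketch below is a minimal illustration of that idea using the z3-solver Python package on an invented toy system; it is not the paper's PRTECTL-to-SAT translation.

    ```python
    # Generic bounded model checking: unroll a transition relation k steps and
    # ask a solver whether a "bad" state is reachable. Toy example, assuming
    # the z3-solver package; not the paper's PRTECTL encoding.
    from z3 import Bool, Solver, And, Or, Not, sat

    def bmc(init, trans, bad, state_vars, k):
        s = Solver()
        states = [[Bool(f"{v}_{t}") for v in state_vars] for t in range(k + 1)]
        s.add(init(states[0]))
        for t in range(k):
            s.add(trans(states[t], states[t + 1]))
        # Is the property violated in any of the first k+1 states?
        s.add(Or([bad(states[t]) for t in range(k + 1)]))
        return s.model() if s.check() == sat else None

    # Toy 2-bit counter: starts at 00, increments mod 4; "bad" state is 11.
    init = lambda s: And(Not(s[0]), Not(s[1]))
    trans = lambda s, t: And(t[0] == Not(s[0]), t[1] == (s[1] != s[0]))
    bad = lambda s: And(s[0], s[1])
    print(bmc(init, trans, bad, ["b0", "b1"], 4))  # model reaching state 11
    ```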

  2. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be done effectively using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies, including a simple infinite-state model and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
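
    The two-phase structure lends itself to a compact simulation sketch: grow the bound until the bounded-until estimate stabilises, then reuse that bound for the final estimate. The code below is a hedged illustration on an invented absorbing random walk, not the authors' exact statistical procedure; the threshold and sample sizes are arbitrary.

    ```python
    # Two-phase estimation of an unbounded until property by a bounded one.
    # Model: biased random walk on {0..N}, absorbed at 0 (goal) or N (failure);
    # the property is "eventually reach 0". Parameters are illustrative.
    import random

    def sample_bounded_until(k, n_samples, p=0.4, N=10, start=5):
        hits = 0
        for _ in range(n_samples):
            x = start
            for _ in range(k):
                if x == 0 or x == N:
                    break
                x += 1 if random.random() < p else -1
            hits += (x == 0)
        return hits / n_samples

    def find_k0(epsilon=0.005, n_samples=20000, k_max=10000):
        k, prev = 10, -1.0  # phase 1: grow k until estimates stabilise
        while k <= k_max:
            est = sample_bounded_until(k, n_samples)
            if abs(est - prev) < epsilon:
                return k, est
            prev, k = est, k * 2
        raise RuntimeError("no stable bound found")

    k0, _ = find_k0()
    # Phase 2: the k0-bounded estimate stands in for the unbounded property.
    print(k0, sample_bounded_until(k0, 100000))
    ```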

  3. Using State Merging and State Pruning to Address the Path Explosion Problem Faced by Symbolic Execution

    DTIC Science & Technology

    2014-06-19

    urgent and compelling. Recent efforts in this area automate program analysis techniques using model checking and symbolic execution [2, 5–7]. These...bounded model checking tool for x86 binary programs developed at the Air Force Institute of Technology (AFIT). Jiseki creates a bit-vector logic model based...assume there are n different paths through the function foo. The program could potentially call the function foo a bounded number of times, resulting in n

  4. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    real time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties

  5. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically the decoding properties of finite-rate hypergraph-product quantum low-density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  6. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    PubMed

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. A genetic regulatory network under a given living environment can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors from a random initial state or from the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their time complexity grows exponentially with the number and length of the attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is well suited to large Boolean networks and to networks with numerous attractors. Empirical experiments on biochemical systems, including a comparison with the tool BooleNet, demonstrate the feasibility and efficiency of our approach.
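
    The unrolling at the heart of such an encoding is easy to sketch: constrain k successive states by the network's update functions, close the loop, and exclude shorter cycles. The toy network and the use of an SMT interface (z3-solver) below are illustrative assumptions; the paper works with a raw SAT encoding.

    ```python
    # Find a length-k attractor of a synchronous Boolean network by bounded
    # unrolling. Toy 3-gene network; assumes the z3-solver package.
    from z3 import Bool, Solver, And, Or, Xor, sat, is_true

    update = [lambda x: x[1],             # x0' = x1
              lambda x: Xor(x[0], x[2]),  # x1' = x0 xor x2
              lambda x: And(x[0], x[1])]  # x2' = x0 and x1
    n = 3

    def find_attractor(k):
        s = Solver()
        st = [[Bool(f"x{i}_{t}") for i in range(n)] for t in range(k + 1)]
        for t in range(k):
            s.add(And([st[t + 1][i] == update[i](st[t]) for i in range(n)]))
        s.add(And([st[k][i] == st[0][i] for i in range(n)]))  # closes a cycle...
        for t in range(1, k):                                 # ...of length exactly k
            s.add(Or([st[t][i] != st[0][i] for i in range(n)]))
        if s.check() == sat:
            m = s.model()
            return [[is_true(m.eval(st[t][i], model_completion=True))
                     for i in range(n)] for t in range(k)]
        return None

    print(find_attractor(2))  # this toy network has the 2-cycle 010 <-> 100
    ```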

  7. Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2012-01-01

    This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.

  8. Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol

    NASA Technical Reports Server (NTRS)

    Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.

    2014-01-01

    This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.

  9. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds

  10. A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2017-03-31

    The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. As a result, the technique used highlights the relationship between compressed turbulence and decaying turbulence.

  11. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

    Dynamic array bound checks are crucial for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustable. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with a relational abstract domain such as polyhedra. The analysis performs automatic inference of loop invariants and method pre-/post-conditions, and its results can be checked efficiently by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique which reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.

  12. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
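
    As a hedged illustration of the kind of interval involved, the textbook regression prediction interval is ŷ0 ± t(α/2, n−p) · s · √(1 + x0ᵀ(XᵀX)⁻¹x0); the sketch below computes it for invented calibration-style data and is not the paper's facility-specific procedure.

    ```python
    # Standard least-squares prediction interval for a new observation x0.
    # Illustrative data; not the paper's balance calibration model.
    import numpy as np
    from scipy import stats

    def prediction_interval(X, y, x0, alpha=0.05):
        n, p = X.shape
        XtX = X.T @ X
        beta = np.linalg.solve(XtX, X.T @ y)       # least-squares fit
        resid = y - X @ beta
        s2 = resid @ resid / (n - p)               # residual variance
        se = np.sqrt(s2 * (1 + x0 @ np.linalg.solve(XtX, x0)))
        t = stats.t.ppf(1 - alpha / 2, n - p)
        y0 = x0 @ beta
        return y0 - t * se, y0 + t * se

    rng = np.random.default_rng(1)
    load = rng.uniform(0, 100, 30)                 # applied check loads
    X = np.column_stack([np.ones(30), load])
    y = 2.0 + 0.5 * load + rng.normal(0, 0.3, 30)  # noisy balance response
    print(prediction_interval(X, y, np.array([1.0, 50.0])))
    ```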

  13. Absolute Lower Bound on the Bounce Action

    NASA Astrophysics Data System (ADS)

    Sato, Ryosuke; Takimoto, Masahiro

    2018-03-01

    The decay rate of a false vacuum is determined by the minimal action solution of the tunneling field: the bounce. In this Letter, we focus on models with scalar fields which have a canonical kinetic term in N (> 2)-dimensional Euclidean space, and derive an absolute lower bound on the bounce action. In the case of four-dimensional space, we show the bounce action is generically larger than 24/λ_cr, where λ_cr ≡ max[−4V(φ)/|φ|⁴] with the false vacuum being at φ = 0 and V(0) = 0. We derive this bound on the bounce action without solving the equation of motion explicitly. Our bound is derived by a quite simple argument, and it provides useful information even if it is difficult to obtain the explicit form of the bounce solution. Our bound offers a sufficient condition for the stability of a false vacuum, and it is useful as a quick check on the vacuum stability for given models. Our bound can be applied to a broad class of scalar potentials with any number of scalar fields. We also discuss a necessary condition for the bounce action taking a value close to this lower bound.
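
    The bound is easy to evaluate numerically for a given potential: compute λ_cr on a grid and form 24/λ_cr. The potential in the sketch below is an arbitrary example with a false vacuum at φ = 0, chosen only for illustration.

    ```python
    # Numeric check of the quoted bound: lambda_cr = max over phi of
    # -4 V(phi)/|phi|^4, and the 4D bounce action obeys S >= 24/lambda_cr.
    import numpy as np

    V = lambda phi: 0.1 * phi**2 - 0.3 * phi**3 + 0.05 * phi**4  # example potential
    phi = np.linspace(0.01, 10, 100000)       # avoid phi = 0 in the division
    lam_cr = np.max(-4.0 * V(phi) / np.abs(phi) ** 4)
    print("lambda_cr =", lam_cr, "=> bounce action >=", 24.0 / lam_cr)
    ```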

  14. Model-Checking with Edge-Valued Decision Diagrams

    NASA Technical Reports Server (NTRS)

    Roux, Pierre; Siminiceanu, Radu I.

    2010-01-01

    We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library, along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds on the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
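
    The defining trick of an edge-valued diagram is that a function value is recovered by summing the values on the edges of a single root-to-terminal path. The sketch below shows that evaluation rule on a hand-built diagram for a small weighted sum; it is a structural illustration only, not the library's reduced, canonical EVMDDs.

    ```python
    # Evaluate an edge-valued decision diagram: sum edge values along the path
    # selected by the variable assignment. Hand-built, non-canonical example.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Node:
        var: int                         # variable level; -1 marks the terminal
        edges: List[Tuple[int, "Node"]]  # (edge value, child) per branch

    def evaluate(node, assignment, acc=0):
        if node.var < 0:
            return acc
        value, child = node.edges[assignment[node.var]]
        return evaluate(child, assignment, acc + value)

    # f(x0, x1, x2) = 3*x0 + 5*x1 + 2*x2, one node per level, shared children.
    term = Node(-1, [])
    n2 = Node(2, [(0, term), (2, term)])
    n1 = Node(1, [(0, n2), (5, n2)])
    root = Node(0, [(0, n1), (3, n1)])
    print(evaluate(root, [1, 0, 1]))     # 3 + 0 + 2 = 5
    ```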

  15. Testing and selection of cosmological models with (1+z)^6 corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szydlowski, Marek; Godlowski, Wlodzimierz

    2008-02-15

    In the paper we check whether the contribution of a (−)(1+z)^6 type term in the Friedmann equation can be tested. We consider some astronomical tests to constrain the density parameters in such models. We describe different interpretations of such an additional term: geometric effects of loop quantum cosmology, effects of braneworld cosmological models, nonstandard cosmological models in metric-affine gravity, and models with spinning fluid. Kinematical (or geometrical) tests based on null geodesics are insufficient to separate individual matter components when they behave like perfect fluid and scale in the same way. Still, it is possible to measure their overall effect. We use recent measurements of the coordinate distances from the Fanaroff-Riley type IIb radio galaxy data, supernovae type Ia data, baryon oscillation peak and cosmic microwave background radiation observations to obtain stronger bounds for the contribution of the type considered. We demonstrate that, while ρ² corrections are very small, they can be tested by astronomical observations, at least in principle. Bayesian criteria of model selection (the Bayesian factor, AIC, and BIC) are used to check if additional parameters are detectable in the present epoch. As it turns out, the ΛCDM model is favored over the bouncing model driven by loop quantum effects. Or, in other words, the bounds obtained from cosmography are very weak, and from the point of view of the present data this model is indistinguishable from the ΛCDM one.

  16. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  17. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.

  18. Population pharmacokinetics of phenytoin in critically ill children.

    PubMed

    Hennig, Stefanie; Norris, Ross; Tu, Quyen; van Breda, Karin; Riney, Kate; Foster, Kelly; Lister, Bruce; Charles, Bruce

    2015-03-01

    The objective was to study the population pharmacokinetics of bound and unbound phenytoin in critically ill children, including influences on the protein binding profile. A population pharmacokinetic approach was used to analyze paired protein-unbound and total phenytoin plasma concentrations (n = 146 each) from 32 critically ill children (0.08-17 years of age) who were admitted to a pediatric hospital, primarily the intensive care unit. The pharmacokinetics of unbound and bound phenytoin and the influence of potentially influential covariates were modeled and evaluated using visual predictive checks and bootstrapping. The pharmacokinetics of protein-unbound phenytoin was described satisfactorily by a 1-compartment model with first-order absorption in conjunction with a linear partition coefficient parameter to describe the binding of phenytoin to albumin. The partition coefficient describing protein binding and distribution to bound phenytoin was estimated to be 8.22. Nonlinear elimination of unbound phenytoin was not supported in this patient group. Weight, allometrically scaled for clearance and volume of distribution for the unbound and bound compartments, and albumin concentration significantly influenced the partition coefficient for protein binding of phenytoin. The population model can be applied to estimate the fraction of unbound phenytoin in critically ill children given an individual's albumin concentration. © 2014, The American College of Clinical Pharmacology.
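
    As a hedged sketch of the model structure described, the code below combines a standard one-compartment, first-order-absorption profile for unbound drug with the linear partition coefficient (point estimate 8.22) reported above; all other parameter values are invented for illustration, and the albumin and weight scalings are omitted.

    ```python
    # Unbound phenytoin via a one-compartment oral-absorption model; bound
    # drug via a linear partition coefficient. Illustrative parameters only.
    import numpy as np

    def unbound_conc(t, dose, F=0.9, ka=1.5, CL=2.0, V=40.0):
        ke = CL / V  # elimination rate constant
        return F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    Kp = 8.22                          # partition coefficient (bound : unbound)
    t = np.linspace(0, 24, 7)          # hours after an oral dose
    cu = unbound_conc(t, dose=100.0)
    print(np.round(cu, 3))             # unbound concentration
    print(np.round(cu * (1 + Kp), 3))  # total = unbound + bound
    ```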

  19. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
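
    The geometric test itself is compact: accept the decoder's output only if the angle between the received vector and the candidate codeword is within a bound. The sketch below shows that check; the bound value is arbitrary here, whereas the paper derives suitable choices.

    ```python
    # Angle test behind bounded-angle decoding: reject (declare a detected
    # error/erasure) when the received word lies outside a cone around the
    # candidate codeword. Illustrative vectors and bound.
    import numpy as np

    def within_angle(received, codeword, theta_max_deg):
        cosang = received @ codeword / (np.linalg.norm(received) * np.linalg.norm(codeword))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) <= theta_max_deg

    received = np.array([0.9, -1.1, 0.8, -0.7])    # soft channel outputs
    codeword = np.array([1.0, -1.0, 1.0, -1.0])    # BPSK-mapped codeword
    print(within_angle(received, codeword, 25.0))  # True: within the 25-degree cone
    ```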

  20. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various size can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can be always corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  1. Sanity check for NN bound states in lattice QCD with Lüscher's finite volume formula - Disclosing Symptoms of Fake Plateaux -

    NASA Astrophysics Data System (ADS)

    Aoki, Sinya; Doi, Takumi; Iritani, Takumi

    2018-03-01

    The sanity check is meant to rule out certain classes of obviously false results, not to catch every possible error. After reviewing such a sanity check for NN bound states with Lüscher's finite volume formula [1-3], we give further evidence for the operator dependence of plateaux, a symptom of the fake plateau problem, against the claim [4]. We then present our critical comments on [5] by NPLQCD: (i) Operator dependences of plateaux in NPL2013 [6, 7] exist with a P value of 4-5%. (ii) The volume independence of plateaux in NPL2013 does not prove their correctness. (iii) Effective range expansions (EREs) in NPL2013 violate the physical pole condition. (iv) Their comment is partly based on new data and analysis different from the original ones. (v) Their new ERE does not satisfy Lüscher's finite volume formula.

  2. Limit on graviton mass from galaxy cluster Abell 1689

    NASA Astrophysics Data System (ADS)

    Desai, Shantanu

    2018-02-01

    To date, the only limit on graviton mass using galaxy clusters was obtained by Goldhaber and Nieto in 1974, using the fact that the orbits of galaxy clusters are bound and closed, and extend up to 580 kpc. From positing that only a Newtonian potential gives rise to such stable bound orbits, a limit on the graviton mass m_g < 10^{-29} eV was obtained (PRD 9, 1119, 1974). Recently, it has been shown that one can obtain closed bound orbits for a Yukawa potential (arXiv:1705.02444), thus invalidating the main ansatz used by Goldhaber and Nieto to obtain the graviton mass bound. In order to obtain a revised estimate using galaxy clusters, we use dynamical mass models of the Abell 1689 (A1689) galaxy cluster to check their compatibility with a Yukawa gravitational potential. We assume mass models for the gas, dark matter, and galaxies for A1689 from arXiv:1703.10219 and arXiv:1610.01543, who used this cluster to test various alternate gravity theories which dispense with the need for dark matter. We quantify the deviations between the acceleration profile obtained from these mass models assuming a Yukawa potential and that obtained assuming a Newtonian potential by calculating the χ² residuals between the two profiles. Our estimated bound on the graviton mass is m_g < 1.37 × 10^{-29} eV, or, in terms of the graviton Compton wavelength, λ_g > 9.1 × 10^{19} km at 90% confidence level.
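
    The χ² comparison is straightforward to sketch: for a Yukawa potential, the acceleration from an enclosed mass M is g(r) = (GM/r²)(1 + r/λ)e^(−r/λ), which approaches the Newtonian profile as the Compton wavelength λ grows. The mass model and error bars below are placeholders, not the A1689 models used in the paper.

    ```python
    # chi^2 between Newtonian and Yukawa acceleration profiles for a toy
    # cluster mass model; placeholder values, not the paper's A1689 models.
    import numpy as np

    G = 4.30e-6                               # kpc (km/s)^2 / Msun
    def g_newton(r, M): return G * M / r**2
    def g_yukawa(r, M, lam): return G * M / r**2 * np.exp(-r / lam) * (1 + r / lam)

    r = np.linspace(100, 3000, 50)            # kpc
    M = 1e15                                  # Msun, placeholder point mass
    sigma = 0.05 * g_newton(r, M)             # assumed 5% errors
    for lam in [1e3, 1e4, 1e5]:               # larger lambda => profiles merge
        chi2 = np.sum(((g_yukawa(r, M, lam) - g_newton(r, M)) / sigma) ** 2)
        print(lam, chi2)
    ```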

  3. Progress on the three-particle quantization condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briceno, Raul; Hansen, Maxwell T.; Sharpe, Stephen R.

    2016-10-01

    We report progress on extending the relativistic model-independent quantization condition for three particles, derived previously by two of us, to a broader class of theories, as well as progress on checking the formalism. In particular, we discuss the extension to include the possibility of 2->3 and 3->2 transitions and the calculation of the finite-volume energy shift of an Efimov-like three-particle bound state. The latter agrees with the results obtained previously using non-relativistic quantum mechanics.

  4. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve the scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure (a minimal flavor of such an equivalence query is sketched below). We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking.

    Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce the analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability.

    Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve the scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent.

    In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition the program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence.

    The main contributions of this work are:
    - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure.
    - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution.
    - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6].
    - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
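
    A minimal flavor of the final equivalence query, stripped of the impact analysis, is shown below: two program versions encoded as formulas and handed to a decision procedure (the z3-solver package is assumed; the programs are invented).

    ```python
    # Equivalence check with an off-the-shelf decision procedure: ask whether
    # any input distinguishes two versions of a function. Toy example.
    from z3 import Int, Solver, If, sat

    x = Int("x")
    v1 = If(x >= 0, x, -x)      # version 1: branching absolute value
    v2 = If(x < 0, -x, x)       # version 2: refactored absolute value
    s = Solver()
    s.add(v1 != v2)             # satisfiable iff some input distinguishes them
    print("equivalent" if s.check() != sat else f"counterexample: {s.model()}")
    ```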

  5. Application of hyperspherical harmonics expansion method to the low-lying bound S-states of exotic two-muon three-body systems

    NASA Astrophysics Data System (ADS)

    Khan, Md. Abdul

    2014-09-01

    In this paper, energies of the low-lying bound S-states (L = 0) of exotic three-body systems, consisting of a nuclear core of charge +Ze (Z being the atomic number of the core) and two negatively charged valence muons, have been calculated by the hyperspherical harmonics expansion method (HHEM). The three-body Schrödinger equation is solved assuming purely Coulomb interaction among the binary pairs of the three-body systems X^{Z+}μ⁻μ⁻ for Z = 1 to 54. The convergence pattern of the energies has been checked with respect to the increasing number of partial waves Λmax. With the available computer facilities, calculations are feasible up to Λmax = 28 partial waves; calculations for still higher partial waves have been achieved through an appropriate extrapolation scheme. The dependence of the bound state energies on increasing nuclear charge Z has been checked and, finally, the calculated energies have been compared with those in the literature.

  6. Received response based heuristic LDPC code for short-range non-line-of-sight ultraviolet communication.

    PubMed

    Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian

    2017-03-06

    Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement with the experimental results is shown by numerical simulation. Based on the received response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.

  7. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  8. Border screening vs. community level disease control for infectious diseases: Timing and effectiveness

    NASA Astrophysics Data System (ADS)

    Kim, Sehjeong; Chang, Dong Eui

    2017-06-01

    There have been many studies of border screening, using simple mathematical models or statistical analysis, investigating the ineffectiveness of border screening during the 2003 and 2009 pandemics. However, the use of border screening is still a controversial issue, because these studies focus only on the functionality of border screening without considering the timing of its use. In this paper, we attempt to answer qualitatively whether the use of border screening is a desirable action during a disease pandemic. To this end, a novel mathematical model with a transition probability of status change during flight and border screening is developed. A condition to check the timing of border screening is established in terms of a lower bound on the basic reproduction number. If the lower bound is greater than one, which indicates a pandemic, then border screening may not be effective and the disease persists. In this case, a community-level control strategy should be conducted.

  9. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is to have correct geometry in accordance with ISO 19107: a valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges, no gaps between them are allowed, and the whole feature must be 2-manifold.

    In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, they might represent gaps between bounding polygons of the solids, overlaps, or violations of 2-manifoldness. Not least due to the floating point representation of digital numbers, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. The effects of different tolerance values and their handling are discussed, and recommendations for suitable values are given.

    The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences on the deployment fields of the validated data set.
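
    One of the polygon-level checks above, planarity under a tolerance, can be sketched compactly: fit a plane to the vertices and compare the largest out-of-plane distance against the tolerance. The tolerance and test polygon below are illustrative, not CityDoctor's values.

    ```python
    # Planarity check with a tolerance: fit a best plane by SVD and test the
    # maximum vertex distance from it. Illustrative tolerance value.
    import numpy as np

    def is_planar(vertices, tol=1e-3):
        pts = np.asarray(vertices, dtype=float)
        centered = pts - pts.mean(axis=0)
        # Normal of the best-fit plane: right singular vector belonging to the
        # smallest singular value.
        normal = np.linalg.svd(centered)[2][-1]
        return float(np.abs(centered @ normal).max()) <= tol

    quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0.01), (0, 1, 0)]  # one warped corner
    print(is_planar(quad))   # False: deviation exceeds the 1e-3 tolerance
    ```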

  10. Structure of the Balmer jump. The isolated hydrogen atom

    NASA Astrophysics Data System (ADS)

    Calvo, F.; Belluzzi, L.; Steiner, O.

    2018-06-01

    Context. The spectrum of the hydrogen atom was explained by Bohr more than one century ago. We revisit here some aspects of the underlying quantum structure with a modern formalism, focusing on the limit of the Balmer series. Aims: We investigate the behaviour of the absorption coefficient of the isolated hydrogen atom in the neighbourhood of the Balmer limit. Methods: We analytically computed the total cross-section arising from bound-bound and bound-free transitions in the isolated hydrogen atom at the Balmer limit, and established a simplified semi-analytical model for the surroundings of that limit. We worked within the framework of the formalism of Landi Degl'Innocenti & Landolfi (2004, Astrophys. Space Sci. Lib., 307), which permits an almost straightforward generalization of our results to other atoms and molecules, and which is perfectly suitable for including polarization phenomena in the problem. Results: We analytically show that there is no discontinuity at the Balmer limit, even though the concept of a "Balmer jump" is still meaningful. Furthermore, we give a possible definition of the location of the Balmer jump, and we check that this location depends on the broadening mechanisms. At the Balmer limit, we compute the cross-section in a fully analytical way. Conclusions: The Balmer jump is produced by a rapid drop of the total Balmer cross-section, yet this variation is smooth and continuous when both bound-bound and bound-free processes are taken into account, and its shape and location depend on the broadening mechanisms.

  11. Bounded fractional diffusion in geological media: Definition and Lagrangian approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Green, Christopher T.; LaBolle, Eric M.; Neupauer, Roseanna M.; Sun, HongGuang

    2016-11-01

    Spatiotemporal fractional-derivative models (FDMs) have been increasingly used to simulate non-Fickian diffusion, but methods have not been available to define boundary conditions for FDMs in bounded domains. This study defines boundary conditions and then develops a Lagrangian solver to approximate bounded, one-dimensional fractional diffusion. Both the zero-value and nonzero-value Dirichlet, Neumann, and mixed Robin boundary conditions are defined, where the sign of Riemann-Liouville fractional derivative (capturing nonzero-value spatial-nonlocal boundary conditions with directional superdiffusion) remains consistent with the sign of the fractional-diffusive flux term in the FDMs. New Lagrangian schemes are then proposed to track solute particles moving in bounded domains, where the solutions are checked against analytical or Eulerian solutions available for simplified FDMs. Numerical experiments show that the particle-tracking algorithm for non-Fickian diffusion differs from Fickian diffusion in relocating the particle position around the reflective boundary, likely due to the nonlocal and nonsymmetric fractional diffusion. For a nonzero-value Neumann or Robin boundary, a source cell with a reflective face can be applied to define the release rate of random-walking particles at the specified flux boundary. Mathematical definitions of physically meaningful nonlocal boundaries combined with bounded Lagrangian solvers in this study may provide the only viable techniques at present to quantify the impact of boundaries on anomalous diffusion, expanding the applicability of FDMs from infinite domains to those with any size and boundary conditions.
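
    The particle-tracking idea can be sketched with heavy-tailed jumps and a reflective wall, although the paper's schemes treat the nonlocal reflection more carefully than the simple folding used here; all parameters below are illustrative.

    ```python
    # Random walkers with alpha-stable jumps in [0, L], reflected (folded) at
    # both boundaries. Simplified illustration of Lagrangian tracking for
    # fractional diffusion; not the paper's scheme.
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(0)
    alpha, L, n, steps, dt = 1.5, 1.0, 2000, 100, 1e-3
    x = np.full(n, 0.5)                      # release at the domain centre
    for _ in range(steps):
        x += dt ** (1 / alpha) * levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
        while ((x < 0) | (x > L)).any():     # fold until all walkers are inside
            x = np.where(x < 0, -x, x)
            x = np.where(x > L, 2 * L - x, x)
    print(x.mean(), x.std())
    ```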

  12. Testing the Grossman model of medical spending determinants with macroeconomic panel data.

    PubMed

    Hartwig, Jochen; Sturm, Jan-Egbert

    2018-02-16

    Michael Grossman's human capital model of the demand for health has been argued to be one of the major achievements in theoretical health economics. Attempts to test this model empirically have been sparse, however, and with mixed results. These attempts have so far relied on (mostly cross-sectional) micro data from household surveys. For the first time in the literature, we bring in macroeconomic panel data for 29 OECD countries over the period 1970-2010 to test the model. To check the robustness of the results for the determinants of medical spending identified by the model, we include additional covariates in an extreme bounds analysis (EBA) framework. The preferred model specifications (including the robust covariates) do not lend much empirical support to the Grossman model. This is in line with the mixed results of earlier studies.
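
    Extreme bounds analysis itself is mechanical: re-estimate the coefficient of the focus variable over many combinations of doubtful covariates and report the extreme coefficients. The sketch below runs EBA on invented data; the variable names and data are placeholders, not the paper's OECD panel.

    ```python
    # Extreme bounds analysis: the focus coefficient is "robust" if its
    # extreme bounds across control sets share a sign. Invented data.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n = 200
    focus = rng.normal(size=n)                 # focus regressor
    doubtful = rng.normal(size=(n, 4))         # candidate control variables
    y = 0.5 * focus + doubtful @ np.array([0.2, 0.0, -0.1, 0.0]) + rng.normal(size=n)

    coefs = []
    for k in range(5):
        for subset in combinations(range(4), k):
            X = np.column_stack([np.ones(n), focus, doubtful[:, list(subset)]])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            coefs.append(beta[1])              # coefficient on the focus variable
    print(min(coefs), max(coefs))              # extreme bounds
    ```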

  13. Fuzzy Energy and Reserve Co-optimization With High Penetration of Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cong; Botterud, Audun; Zhou, Zhi

    In this study, we propose a fuzzy-based energy and reserve co-optimization model with consideration of high penetration of renewable energy. Under the assumption of a fixed uncertainty set of renewables, a two-stage robust model is proposed for clearing energy and reserves in the first stage and checking the feasibility and robustness of re-dispatches in the second stage. Fuzzy sets and their membership functions are introduced into the optimization model to represent the satisfaction degree of the variable uncertainty sets. The lower bound of the uncertainty set is expressed as fuzzy membership functions. The solutions are obtained by transforming the fuzzy mathematical programming formulation into traditional mixed integer linear programming problems.

  14. Fuzzy Energy and Reserve Co-optimization With High Penetration of Renewable Energy

    DOE PAGES

    Liu, Cong; Botterud, Audun; Zhou, Zhi; ...

    2016-10-21

    In this study, we propose a fuzzy-based energy and reserve co-optimization model with consideration of high penetration of renewable energy. Under the assumption of a fixed uncertainty set of renewables, a two-stage robust model is proposed for clearing energy and reserves in the first stage and checking the feasibility and robustness of re-dispatches in the second stage. Fuzzy sets and their membership functions are introduced into the optimization model to represent the satisfaction degree of the variable uncertainty sets. The lower bound of the uncertainty set is expressed as fuzzy membership functions. The solutions are obtained by transforming the fuzzy mathematical programming formulation into traditional mixed integer linear programming problems.

  15. Real-Time System Verification by Kappa-Induction

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.

    2005-01-01

    We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
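
    The k-induction scheme itself fits in a few lines: prove the property for the first k states from the initial state (base case), and show that k consecutive property-satisfying states force the property in the next state (induction step). The sketch below, assuming the z3-solver package and a toy one-bit system, mirrors that structure rather than SAL's input language.

    ```python
    # k-induction: base case (no violation within k steps of init) plus
    # induction step (k good states cannot be followed by a bad one).
    from z3 import Bool, Solver, And, Or, Not, sat

    p = lambda t: Bool(f"p_{t}")          # toy one-bit state per time step
    init = lambda t: p(t)                 # initially p holds
    trans = lambda t: p(t + 1) == p(t)    # p never changes
    prop = lambda t: p(t)                 # invariant to prove: p always holds

    def k_induction(k):
        base = Solver()
        base.add(init(0), And([trans(t) for t in range(k)]))
        base.add(Or([Not(prop(t)) for t in range(k + 1)]))
        if base.check() == sat:
            return False                  # concrete counterexample found
        step = Solver()
        step.add(And([trans(t) for t in range(k)]))
        step.add(And([prop(t) for t in range(k)]), Not(prop(k)))
        return step.check() != sat        # unsat => invariant proved

    print(k_induction(2))                 # True: the property is 2-inductive
    ```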

  16. Bounded fractional diffusion in geological media: Definition and Lagrangian approximation

    USGS Publications Warehouse

    Zhang, Yong; Green, Christopher T.; LaBolle, Eric M.; Neupauer, Roseanna M.; Sun, HongGuang

    2016-01-01

    Spatiotemporal Fractional-Derivative Models (FDMs) have been increasingly used to simulate non-Fickian diffusion, but methods have not been available to define boundary conditions for FDMs in bounded domains. This study defines boundary conditions and then develops a Lagrangian solver to approximate bounded, one-dimensional fractional diffusion. Both the zero-value and non-zero-value Dirichlet, Neumann, and mixed Robin boundary conditions are defined, where the sign of Riemann-Liouville fractional derivative (capturing non-zero-value spatial-nonlocal boundary conditions with directional super-diffusion) remains consistent with the sign of the fractional-diffusive flux term in the FDMs. New Lagrangian schemes are then proposed to track solute particles moving in bounded domains, where the solutions are checked against analytical or Eulerian solutions available for simplified FDMs. Numerical experiments show that the particle-tracking algorithm for non-Fickian diffusion differs from Fickian diffusion in relocating the particle position around the reflective boundary, likely due to the non-local and non-symmetric fractional diffusion. For a non-zero-value Neumann or Robin boundary, a source cell with a reflective face can be applied to define the release rate of random-walking particles at the specified flux boundary. Mathematical definitions of physically meaningful nonlocal boundaries combined with bounded Lagrangian solvers in this study may provide the only viable techniques at present to quantify the impact of boundaries on anomalous diffusion, expanding the applicability of FDMs from infinite domains to those with any size and boundary conditions.

  17. Minimal model linking two great mysteries: Neutrino mass and dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2009-10-01

    We present an economic model that establishes a link between neutrino masses and properties of the dark matter candidate. The particle content of the model can be divided into two groups: light particles with masses lighter than the electroweak scale and heavy particles. The light particles, which also include the dark matter candidate, are predicted to show up in low energy experiments such as (K → l + missing energy), making the model testable. The heavy sector can show up at the LHC and may give rise to Br(l_i → l_j γ) close to the present bounds. In principle, the new couplings of the model can independently be derived from the data from the LHC and from the information on neutrino masses and lepton flavor violating rare decays, providing the possibility of an intensive cross-check of the model.

  18. A Self-Stabilizing Hybrid-Fault Tolerant Synchronization Protocol

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2014-01-01

    In this report we present a strategy for solving the Byzantine generals problem, for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults of various severities, including any number of arbitrary (Byzantine) faulty nodes. Our solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. Our solution does not rely on assumptions about the initial state of the system, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. We also present a mechanical verification of the proposed protocol. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period. We believe that our proposed solution solves the general case of the clock synchronization problem.

  19. Warm neutral halos around molecular clouds. VI - Physical and chemical modeling

    NASA Technical Reports Server (NTRS)

    Andersson, B.-G.; Wannier, P. G.

    1993-01-01

    A combined physical and chemical modeling of the halos around molecular clouds is presented, with special emphasis on the H-to-H2 transition. On the basis of H I 21 cm observations, it is shown that the halos are extended. A physical model is employed in conjunction with a chemistry code to provide a self-consistent description of the gas. The radiative transfer code provides a check with H I, CO, and OH observations. It is concluded that the warm neutral halos are not gravitationally bound to the underlying molecular clouds and are isobaric. It is inferred from the observed extent of the H I envelopes and the large observed abundance of OH in them that the generally accepted rate for H2 formation on grains is too large by a factor of two to three.

  20. Local conditions for the generalized covariant entropy bound

    NASA Astrophysics Data System (ADS)

    Gao, Sijie; Lemos, José P.

    2005-04-01

    A set of sufficient conditions for the generalized covariant entropy bound given by Strominger and Thompson is as follows: Suppose that the entropy of matter can be described by an entropy current s^a. Let k^a be any null vector along L and s ≡ −k_a s^a. Then the generalized bound can be derived from the following conditions: (i) s′ ≤ 2π T_{ab} k^a k^b, where s′ = k^a ∇_a s and T_{ab} is the stress-energy tensor; (ii) on the initial 2-surface B, s(0) ≤ −(1/4)θ(0), where θ is the expansion of k^a. We prove that condition (ii) alone can be used to divide a spacetime into two regions: the generalized entropy bound holds for all light sheets residing in the region where s < −θ/4 and fails for those in the region where s > −θ/4. We check the validity of these conditions in the FRW flat universe and a scalar field spacetime. Some apparent violations of the entropy bounds in the two spacetimes are discussed. These holographic bounds are important in the formulation of the holographic principle.

  1. Formal verification of a set of memory management units

    NASA Technical Reports Server (NTRS)

    Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.

    1992-01-01

    This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.

  2. Maximum drag reduction asymptotes and the cross-over to the Newtonian plug

    NASA Astrophysics Data System (ADS)

    Benzi, R.; de Angelis, E.; L'Vov, V. S.; Procaccia, I.; Tiberkevich, V.

    2006-03-01

    We employ the full FENE-P model of the hydrodynamics of a dilute polymer solution to derive a theoretical approach to drag reduction in wall-bounded turbulence. We recapture the results of a recent simplified theory which derived the universal maximum drag reduction (MDR) asymptote, and complement that theory with a discussion of the cross-over from the MDR to the Newtonian plug when the drag reduction saturates. The FENE-P model gives rise to a rather complex theory due to the interaction of the velocity field with the polymeric conformation tensor, making analytic estimates quite taxing. To overcome this we develop the theory in a computer-assisted manner, checking at each point the analytic estimates by direct numerical simulations (DNS) of viscoelastic turbulence in a channel.

  3. Interactive collision detection for deformable models using streaming AABBs.

    PubMed

    Zhang, Xinyu; Kim, Young J

    2007-01-01

    We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform primitive-level (e.g., triangle) intersection checking on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the NVIDIA GeForce 7800 GTX, for streaming computations, and Intel dual-core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a threefold performance improvement over the earlier approach. We also made comparisons with a SW-based AABB culling algorithm [2] and observed about a twofold improvement.
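
    The primitive at the heart of the pipeline is the AABB overlap test: two boxes overlap iff their intervals overlap on every axis. The sketch below shows the serial form of the test; the paper's GPU version performs such tests massively in parallel over the AABB streams.

    ```python
    # Axis-aligned bounding box (AABB) overlap: interval overlap on each axis.
    import numpy as np

    def aabbs_overlap(lo_a, hi_a, lo_b, hi_b):
        return bool(np.all(lo_a <= hi_b) and np.all(lo_b <= hi_a))

    box_a = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))
    box_b = (np.array([0.5, 0.5, 0.5]), np.array([2.0, 2.0, 2.0]))
    print(aabbs_overlap(*box_a, *box_b))  # True: candidate pair for exact tests
    ```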

  4. Validation Test Report for LAGER 1.0

    DTIC Science & Technology

    2010-04-13

    [Garbled checklist and table-of-contents excerpt; the recoverable quality-control checks include: flagging any temperature or salinity observation more than 5 GDEM standard deviations from the GDEM monthly mean; a check applied only to Slocum glider data; a global bounds check on temperature (section 5.1.2); and comparisons of temperature and salinity to GDEM (sections 5.1.3 and 5.1.10).]

  5. Arms Control and National Security.

    ERIC Educational Resources Information Center

    Graham, Daniel O.

    1985-01-01

    From the Soviet perspective arms control agreements merely hold the United States in check while the Soviets, who don't feel bound by such agreements, obtain military advantages. The United States must move quickly to redress the strategic military balance that now favors the Soviets. We must emphasize areas like space. (RM)

  6. Metacognition and Passing: Strategic Interactions in the Lives of Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Rueda, Robert; Mehan, Hugh

    1986-01-01

    Students with learning disabilities work to avoid difficult tasks while trying to appear competent ("passing"). They also check, monitor, and evaluate their actions, a form of "metacognition." These are flip sides of the same coin of strategic interaction and are context-bound, not context-free activities. (Author/LHW)

  7. Using Artificial Intelligence To Teach English to Deaf People. Final Report.

    ERIC Educational Resources Information Center

    Loritz, Donald; Zambrano, Robert

    This report describes a project to develop an English grammar-checking word processor intended for use by college students with hearing impairments. The project succeeded in its first objective, achievement of 92 percent parsing accuracy across the freely written compositions of college-bound deaf students. The second objective, ability to use the…

  8. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
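
    As background, the hypergraph-product construction these codes generalize combines two classical parity-check matrices H_1 (of size r_1 × n_1) and H_2 (of size r_2 × n_2) into CSS stabilizer matrices; the standard form (quoted from the general literature, not from this paper) is

        H_X = \left( H_1 \otimes I_{n_2} \;\middle|\; I_{r_1} \otimes H_2^{T} \right),
        \qquad
        H_Z = \left( I_{n_1} \otimes H_2 \;\middle|\; H_1^{T} \otimes I_{r_2} \right),

    which satisfies H_X H_Z^{T} = H_1 \otimes H_2^{T} + H_1 \otimes H_2^{T} = 0 over GF(2); taking H_1 = H_2 of size O(n) gives block length O(n^2) with distance O(n), i.e. the square-root scaling quoted above.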

  9. The development of a sub-daily gridded rainfall product to improve hydrological predictions in Great Britain

    NASA Astrophysics Data System (ADS)

    Quinn, Niall; Freer, Jim; Coxon, Gemma; O'Loughlin, Fiachra; Woods, Ross; Liguori, Sara

    2015-04-01

    In Great Britain and many other regions of the world, flooding resulting from short duration, high intensity rainfall events can lead to significant economic losses and fatalities. At present, such extreme events are often poorly evaluated using hydrological models due, in part, to their rarity and relatively short duration and a lack of appropriate data. Such storm characteristics are not well represented by the daily rainfall records currently available from volumetric gauges and/or derived gridded products. This research aims to address this important data gap by developing a sub-daily gridded precipitation product for Great Britain. Our focus is to better understand these storm events and some of the challenges and uncertainties in quantifying such data across catchment scales. Our goal is both to improve such rainfall characterisation and to derive an input to drive hydrological model simulations. Our methodology involves the collation, error checking, and spatial interpolation of data from approximately 2000 tipping-bucket rain (TBR) gauges located across Great Britain, provided by the Scottish Environment Protection Agency (SEPA) and the Environment Agency (EA). Error checking was conducted over the entirety of the TBR data available, utilising a two stage approach. First, rain gauge data at each site were examined independently, with data exceeding reasonable thresholds marked as suspect. Second, potentially erroneous data were marked using a neighbourhood analysis approach whereby measurements at a given gauge were deemed suspect if they did not fall within defined bounds of measurements at neighbouring gauges. A total of eight error checks were conducted. To provide the user with the greatest flexibility possible, the error markers associated with each check have been recorded at every site. This approach aims to enable the user to choose which checks they deem most suitable for a particular application. The quality assured TBR dataset was then spatially interpolated to produce a national scale gridded rainfall product. Finally, radar rainfall data provided by the UK Met Office were assimilated, where available, to provide an optimal hourly estimate of rainfall, given the error variance associated with both datasets. This research introduces a sub-daily rainfall product that will be of particular value to hydrological modellers requiring rainfall inputs at higher temporal resolutions than those currently available nationally. Further research will aim to quantify the uncertainties in the rainfall product in order to improve our ability to diagnose and identify structural errors in hydrological modelling of extreme events. Here we present our initial findings.
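
    A minimal sketch of the two-stage error checking described above, a physical threshold test followed by a neighbourhood comparison (the threshold and tolerance values here are hypothetical placeholders, not the study's):

        import numpy as np

        def qc_flags(gauge, neighbours, max_mm_per_hr=50.0, tol_factor=3.0):
            """Return the two stages' flag arrays for one gauge's hourly series.

            gauge:      (t,) hourly totals at the gauge being checked
            neighbours: (k, t) hourly totals at k nearby gauges
            """
            # Stage 1: values exceeding a plausible physical threshold.
            flag_threshold = gauge > max_mm_per_hr
            # Stage 2: values outside bounds defined by neighbouring gauges.
            nbr_median = np.median(neighbours, axis=0)
            nbr_spread = neighbours.std(axis=0) + 1e-6   # avoid zero spread
            flag_neighbour = np.abs(gauge - nbr_median) > tol_factor * nbr_spread
            return flag_threshold, flag_neighbour

    Returning the flags separately, rather than a single pass/fail verdict, mirrors the paper's design choice of recording every error marker so users can decide which checks suit their application.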

  10. Fatigue damage prognosis using affine arithmetic

    NASA Astrophysics Data System (ADS)

    Gbaguidi, Audrey; Kim, Daewon

    2014-02-01

    Among the essential steps to be taken in structural health monitoring systems, damage prognosis is the least investigated field due to the complexity of the uncertainties involved. This paper presents the possibility of using Affine Arithmetic for uncertainty propagation of crack damage in damage prognosis. The structures examined are thin rectangular plates made of titanium alloys with central mode I cracks and a composite plate with an internal delamination caused by mixed mode I and II fracture modes, under a harmonic uniaxial loading condition. Model-based crack growth rates are considered, using the Paris-Erdogan law model for the isotropic plates and the delamination growth law model proposed by Kardomateas for the composite plate. The parameters of both models are treated as uncertain, with their uncertainties defined by intervals instead of probability distributions. A Monte Carlo method is also applied to check whether Affine Arithmetic (AA) leads to tight bounds on the lifetime of the structure.
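
    The comparison being made can be illustrated with plain interval propagation through the Paris-Erdogan law da/dN = C (ΔK)^m (a toy sketch with made-up parameter ranges, using intervals rather than full affine forms):

        import numpy as np

        def cycles_to_failure(C, m, a0=1e-3, af=1e-2, dsigma=100.0, n=2000):
            """Integrate da/dN = C * (dK)^m with dK = dsigma * sqrt(pi * a)."""
            a = np.linspace(a0, af, n)
            dN_da = 1.0 / (C * (dsigma * np.sqrt(np.pi * a)) ** m)
            return np.trapz(dN_da, a)

        # Interval endpoints: with dK > 1 here, lifetime N is monotone
        # decreasing in both C and m, so the bounds sit at the corners.
        C_lo, C_hi = 1e-12, 2e-12
        m_lo, m_hi = 2.8, 3.2
        N_hi = cycles_to_failure(C_lo, m_lo)   # longest lifetime
        N_lo = cycles_to_failure(C_hi, m_hi)   # shortest lifetime

        # Monte Carlo check that sampled lifetimes stay inside the bounds.
        rng = np.random.default_rng(1)
        samples = [cycles_to_failure(rng.uniform(C_lo, C_hi),
                                     rng.uniform(m_lo, m_hi))
                   for _ in range(200)]
        print(N_lo <= min(samples), max(samples) <= N_hi)

    Affine arithmetic refines this idea by tracking correlations between uncertain quantities, which tightens the bounds that naive interval arithmetic would overestimate.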

  11. A Self-Stabilizing Hybrid Fault-Tolerant Synchronization Protocol

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    This paper presents a strategy for solving the Byzantine general problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. The strategy consists of two parts: first, converting Byzantine faults into symmetric faults, and second, using a proven symmetric-fault tolerant algorithm to solve the general case of the problem. A protocol (algorithm) that tolerates symmetric faults, provided that there are more good nodes than faulty ones, is also presented. The solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that the interactions with other nodes are restricted to defined links and interfaces. The solution does not rely on assumptions about the initial state of the system, and no central clock nor centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. A mechanical verification of a proposed protocol is also presented. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period.

  12. A fresh look at linear cosmological constraints on a decaying Dark Matter component

    NASA Astrophysics Data System (ADS)

    Poulin, Vivian; Serpico, Pasquale D.; Lesgourgues, Julien

    2016-08-01

    We consider a cosmological model in which a fraction fdcdm of the Dark Matter (DM) is allowed to decay into an invisible relativistic component, and compute the resulting constraints on both the decay width (or inverse lifetime) Γdcdm and fdcdm from purely gravitational arguments. We report a full derivation of the Boltzmann hierarchy, correcting a mistake in previous literature, and compute the impact of the decay—as a function of the lifetime—on the CMB and matter power spectra. From CMB only, we obtain that no more than 3.8% of the DM could have decayed in the time between recombination and today (all bounds quoted at 95% CL). We also comment on the important application of this bound to the case where primordial black holes constitute DM, a scenario notoriously difficult to constrain. For lifetimes longer than the age of the Universe, the bounds can be cast as fdcdmΓdcdm < 6.3×10-3 Gyr-1. For the first time, we also checked that degeneracies with massive neutrinos are broken when information from the large scale structure is used. Even secondary effects like CMB lensing suffice to this purpose. Decaying DM models have been invoked to solve a possible tension between low redshift astronomical measurements of σ8 and Ωm and the ones inferred by Planck. We reassess this claim finding that with the most recent BAO, HST and σ8 data extracted from the CFHT survey, the tension is only slightly reduced despite the two additional free parameters. Nonetheless, the existing tension explains why the bound on fdcdmΓdcdm loosens to fdcdmΓdcdm < 15.9×10-3 Gyr-1 when including such additional data. The bound however improves to fdcdmΓdcdm < 5.9 ×10-3 Gyr-1 if only data consistent with the CMB are included. This highlights the importance of establishing whether the tension is due to real physical effects or unaccounted systematics, for settling the reach of achievable constraints on decaying DM.
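
    At the background level, such decaying-DM models are conventionally described by an energy-exchange pair of equations (the generic textbook form in conformal time, with a the scale factor and \mathcal{H} the conformal Hubble rate; not the paper's exact conventions):

        \dot{\rho}_{\rm dcdm} = -3\mathcal{H}\,\rho_{\rm dcdm} - a\,\Gamma_{\rm dcdm}\,\rho_{\rm dcdm},
        \qquad
        \dot{\rho}_{\rm dr} = -4\mathcal{H}\,\rho_{\rm dr} + a\,\Gamma_{\rm dcdm}\,\rho_{\rm dcdm},

    so the decay steadily drains the pressureless component into dark radiation, which is what alters the CMB and matter power spectra as a function of the lifetime.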

  13. Macrostructure from Microstructure: Generating Whole Systems from Ego Networks

    PubMed Central

    Smith, Jeffrey A.

    2014-01-01

    This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990s. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783

  14. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
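
    The memory-saving idea, assigning each site's frozen/unfrozen status only when the growing region first reaches it, can be sketched as follows (a simplified lazy-generation skeleton on the square lattice, not the actual kinetic constraints of the KA or FA models):

        import random
        from collections import deque

        def grow_region(p_frozen, max_sites=100_000, seed=0):
            """Grow an unfrozen region from a single site on Z^2, generating
            each neighbouring site's status lazily on first visit."""
            rng = random.Random(seed)
            status = {(0, 0): False}        # only visited sites are stored
            frontier = deque([(0, 0)])
            unfrozen = 1
            while frontier and unfrozen < max_sites:
                x, y = frontier.popleft()
                for nbr in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if nbr not in status:   # generate this site only now
                        frozen = rng.random() < p_frozen
                        status[nbr] = frozen
                        if not frozen:
                            unfrozen += 1
                            frontier.append(nbr)
            return unfrozen

        print(grow_region(p_frozen=0.45))

    Because memory grows only with the sites actually visited, the same idea scales to the enormous effective system sizes quoted in the abstract.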

  15. Bounds on stochastic chemical kinetic systems at steady state

    NASA Astrophysics Data System (ADS)

    Dowdy, Garrett R.; Barton, Paul I.

    2018-02-01

    The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.
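
    Schematically, the bounding approach replaces a moment closure with a pair of optimizations over the stationary moment vector μ (a generic sketch of such moment SDPs, not the paper's exact formulation):

        \min_{\mu} \ \text{or} \ \max_{\mu} \quad \mu_1 \quad \text{(e.g., the mean molecular count)}
        \quad \text{s.t.} \quad A\mu = 0, \qquad M(\mu) \succeq 0,

    where A\mu = 0 encodes stationarity of the moment dynamics implied by the CME and M(\mu) \succeq 0 requires the moment matrices to be positive semidefinite. Because the true stationary distribution is feasible for both programs, their optimal values are rigorous lower and upper bounds on the quantity of interest.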

  16. A switched systems approach to image-based estimation

    NASA Astrophysics Data System (ADS)

    Parikh, Anup

    With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound.
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
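
    The dwell-time trade-off at the heart of Chapters 3 and 4 has the familiar switched-systems shape (quoted generically, not the dissertation's exact conditions): if a Lyapunov function of the estimation error satisfies \dot{V} \le -\lambda_s V while measurements are available and \dot{V} \le \lambda_u V during loss, then across one visible interval T_s and one occluded interval T_u,

        V(t + T_s + T_u) \le e^{\lambda_u T_u - \lambda_s T_s}\, V(t),

    so the error decays overall precisely when the dwell times satisfy \lambda_s T_s > \lambda_u T_u.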

  17. Addressing Dynamic Issues of Program Model Checking

    NASA Technical Reports Server (NTRS)

    Lerda, Flavio; Visser, Willem

    2001-01-01

    Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.

  18. Homology modeling and docking studies of a Δ9-fatty acid desaturase from a Cold-tolerant Pseudomonas sp. AMS8

    PubMed Central

    Garba, Lawal; Mohamad Yussoff, Mohamad Ariff; Abd Halim, Khairul Bariyyah; Ishak, Siti Nor Hasmah; Mohamad Ali, Mohd Shukuri; Oslan, Siti Nurbaya

    2018-01-01

    Membrane-bound fatty acid desaturases perform oxygenated desaturation reactions to insert double bonds within fatty acyl chains in regioselective and stereoselective manners. The Δ9-fatty acid desaturase strictly creates the first double bond between the C9 and C10 positions of most saturated substrates. As the three-dimensional structures of the bacterial membrane fatty acid desaturases are not available, relevant information about the enzymes is derived from their amino acid sequences, site-directed mutagenesis and domain swapping in similar membrane-bound desaturases. The cold-tolerant Pseudomonas sp. AMS8 was found to produce a high amount of monounsaturated fatty acids at low temperature. Subsequently, an active Δ9-fatty acid desaturase was isolated and functionally expressed in Escherichia coli. In this paper we report homology modeling and docking studies of a Δ9-fatty acid desaturase from the cold-tolerant Pseudomonas sp. AMS8, for the first time to the best of our knowledge. The three-dimensional structure of the enzyme was built with MODELLER version 9.18 using a suitable template. The protein model contained the three conserved-histidine residues typical for all membrane-bound desaturase catalytic activity. The structure was subjected to energy minimization and checked for correctness using Ramachandran plots and ERRAT, which gave scores of 91.6% and 65.0%, respectively, indicating a good quality model. The protein model was used to perform MD simulation and docking of palmitic acid using the CHARMM36 force field in GROMACS Version 5 and the Autodock tool Version 4.2, respectively. The docking simulation with the lowest binding energy, −6.8 kcal/mol, had a number of residues in close contact with the docked palmitic acid, namely Ile26, Tyr95, Val179, Gly180, Pro64, Glu203, His34, His206, His71, Arg182, Thr85, Lys98 and His177. Interestingly, among the binding residues are His34, His71 and His206 from the first, second, and third conserved histidine motif, respectively, which constitute the active site of the enzyme. The results obtained are in compliance with the in vivo activity of the Δ9-fatty acid desaturase on the membrane phospholipids. PMID:29576935

  19. Program Model Checking: A Practitioner's Guide

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.

    2008-01-01

    Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.

  20. Verifying Multi-Agent Systems via Unbounded Model Checking

    NASA Technical Reports Server (NTRS)

    Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.

    2004-01-01

    We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in multi-agent systems literature. We give details of the technique and show how it can be applied to the well known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems

  1. Toward improved design of check dam systems: A case study in the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Pal, Debasish; Galelli, Stefano; Tang, Honglei; Ran, Qihua

    2018-04-01

    Check dams are one of the most common strategies for controlling sediment transport in erosion prone areas, along with soil and water conservation measures. However, existing mathematical models that simulate sediment production and delivery are often unable to simulate how the storage capacity of check dams varies with time. To explicitly account for this process, and to support the design of check dam systems, we developed a modelling framework consisting of two components, namely (1) the spatially distributed Soil Erosion and Sediment Delivery Model (WaTEM/SEDEM), and (2) a network-based model of check dam storage dynamics. The two models are run sequentially, with the second model receiving the initial sediment input to check dams from WaTEM/SEDEM. The framework is first applied to Shejiagou catchment, a 4.26 km2 area located in the Loess Plateau, China, where we study the effect of the existing check dam system on sediment dynamics. Results show that the deployment of check dams significantly altered the sediment delivery ratio of the catchment. Furthermore, the network-based model reveals a large variability in the life expectancy of check dams and abrupt changes in their filling rates. The application of the framework to six alternative check dam deployment scenarios is then used to illustrate its usefulness for planning purposes, and to derive some insights on the effect of key decision variables, such as the number, size, and site location of check dams. Simulation results suggest that better performance, in terms of life expectancy and sediment delivery ratio, could have been achieved with an alternative deployment strategy.
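
    The storage-dynamics component can be pictured as a cascade of finite sediment stores that trap incoming load until their remaining capacity is exhausted (a minimal sketch under assumed units, not WaTEM/SEDEM or the authors' network model):

        def route_sediment(capacities, inflows, years):
            """Route annual sediment loads through a chain of check dams.

            capacities: remaining trap capacity per dam, upstream to downstream
            inflows:    annual load entering each dam from its own subcatchment
            Returns each dam's year of filling (None if still open) and the
            total load leaving the catchment.
            """
            remaining = list(capacities)
            filled_year = [None] * len(remaining)
            outflow = 0.0
            for year in range(1, years + 1):
                passed = 0.0                  # load passed down from upstream
                for i, load in enumerate(inflows):
                    total = load + passed
                    trapped = min(total, remaining[i])
                    remaining[i] -= trapped
                    if remaining[i] == 0 and filled_year[i] is None:
                        filled_year[i] = year
                    passed = total - trapped  # excess continues downstream
                outflow += passed
            return filled_year, outflow

        print(route_sediment([5.0, 2.0, 8.0], [1.5, 0.5, 1.0], years=6))

    Even this toy version reproduces the qualitative behaviours the abstract reports: dams fill at very different times, and a dam's filling rate jumps abruptly once its upstream neighbour fills.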

  2. Implementing Model-Check for Employee and Management Satisfaction

    NASA Technical Reports Server (NTRS)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.

  3. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering, and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
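
    The reduction mentioned above can be stated in one line: letting M_u be the universal model whose traces include every possible behavior over the atomic propositions,

        \varphi \ \text{is satisfiable} \iff M_u \not\models \neg\varphi,

    so a counterexample trace returned by the model checker for \neg\varphi is exactly a witness model of \varphi.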

  4. Finding Feasible Abstract Counter-Examples

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking, most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted, the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking, it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.

  5. Stability analysis of spectral methods for hyperbolic initial-boundary value systems

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Lustman, L.; Tadmor, E.

    1986-01-01

    A constant coefficient hyperbolic system in one space variable, with zero initial data, is discussed. Dissipative boundary conditions are imposed at the two points x = ±1. This problem is discretized by a spectral approximation in space. Sufficient conditions under which the spectral numerical solution is stable are demonstrated; moreover, these conditions have to be checked only for scalar equations. The stability theorems take the form of explicit bounds for the norm of the solution in terms of the boundary data. The dependence of these bounds on N, the number of points in the domain (or equivalently the degree of the polynomials involved), is investigated for a class of standard spectral methods, including Chebyshev and Legendre collocations.

  6. A new delay-independent condition for global robust stability of neural networks with time delays.

    PubMed

    Samli, Ruya

    2015-06-01

    This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results.
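
    The system family in question is typically written in the standard delayed neural network form (a generic statement of the model class, with the paper's uncertainty and slope bounds as side conditions):

        \dot{x}(t) = -C\,x(t) + A\,f(x(t)) + B\,f(x(t-\tau)) + u,

    where C, A, B are the uncertain, norm-bounded network matrices, f is the slope-bounded activation vector, \tau is the discrete delay, and u is a constant input; a delay-independent condition is one that guarantees stability using only bounds on these matrices, for every value of \tau.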

  7. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... in different states or check processing regions)]. If you make the deposit in person to one of our...] Substitute Checks and Your Rights What Is a Substitute Check? To make check processing faster, federal law...

  8. Model-independent reconstruction of f( T) teleparallel cosmology

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2017-11-01

    We propose a model-independent formalism to numerically solve the modified Friedmann equations in the framework of f(T) teleparallel cosmology. Our strategy is to expand the Hubble parameter around the redshift z = 0 up to a given order and to adopt cosmographic bounds as initial settings to determine the corresponding f(z) ≡ f(T(H(z))) function. In this perspective, we distinguish two cases: the first expansion is up to the jerk parameter, the second expansion is up to the snap parameter. We show that inside the observed redshift domain z ≤ 1, only the net strength of f(z) is modified passing from jerk to snap, whereas its functional behavior and shape turn out to be identical. As a first step, we set the cosmographic parameters by means of the most recent observations. Afterwards, we calibrate our numerical solutions with the concordance ΛCDM model. In both cases, there is good agreement with the cosmological standard model around z ≤ 1, with severe discrepancies outside of this limit. We demonstrate that the effective dark energy term evolves following the test-function f(z) = A + B z^2 e^{Cz}. Bounds on the set A, B, C are also fixed by statistical considerations, comparing discrepancies between f(z) and data. The approach opens the possibility to get a wide class of test-functions able to frame the dynamics of f(T) without postulating any model a priori. We thus re-obtain the f(T) function through a back-scattering procedure once f(z) is known. We figure out the properties of our f(T) function at the level of background cosmology, to check the goodness of our numerical results. Finally, a comparison with previous cosmographic approaches is carried out, giving results compatible with theoretical expectations.
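
    For reference, the cosmographic expansion being truncated at jerk or snap order has the standard low-redshift form (a textbook expansion, not quoted from the paper):

        H(z) = H_0 \left[ 1 + (1 + q_0)\,z + \tfrac{1}{2}\left(j_0 - q_0^2\right) z^2 + O(z^3) \right],

    with the snap parameter s_0 first entering at order z^3; the fitted coefficients supply the initial settings from which f(z), and hence f(T), is reconstructed numerically.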

  9. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.

  10. Take the Reins on Model Quality with ModelCHECK and Gatekeeper

    NASA Technical Reports Server (NTRS)

    Jones, Corey

    2012-01-01

    Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.

  11. Cockpit automation - In need of a philosophy

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1985-01-01

    Concern has been expressed over the rapid development and deployment of automatic devices in transport aircraft, due mainly to the human interface and particularly the role of automation in inducing human error. The paper discusses the need for coherent philosophies of automation, and proposes several approaches: (1) flight management by exception, which states that as long as a crew stays within the bounds of regulations, air traffic control and flight safety, it may fly as it sees fit; (2) exceptions by forecasting, where the use of forecasting models would predict boundary penetration, rather than waiting for it to happen; (3) goal-sharing, where a computer is informed of overall goals, and subsequently has the capability of checking inputs and aircraft position for consistency with the overall goal or intentions; and (4) artificial intelligence and expert systems, where intelligent machines could mimic human reason.

  12. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
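
    The cross-product construction can be made concrete with a small sketch: pair program states with the states of a property monitor and search the product graph (a toy safety-checking illustration in Python, not the report's implementation):

        from collections import deque

        def check_product(init, transitions, labels, m_init, m_step, bad):
            """BFS of the product of a state graph with a property monitor.

            transitions: dict state -> iterable of successor states
            labels:      dict state -> set of atomic propositions
            m_step(m, props) -> next monitor state; bad(m) -> violation
            Returns a counterexample path, or None if the property holds.
            """
            start = (init, m_step(m_init, labels[init]))
            seen, queue = {start}, deque([(start, [init])])
            while queue:
                (s, m), path = queue.popleft()
                if bad(m):
                    return path
                for s2 in transitions[s]:
                    m2 = m_step(m, labels[s2])
                    if (s2, m2) not in seen:
                        seen.add((s2, m2))
                        queue.append(((s2, m2), path + [s2]))
            return None

        # Toy mutual exclusion check: find a state where both are critical.
        trans = {"idle": ["p1", "p2"], "p1": ["idle"],
                 "p2": ["both"], "both": ["idle"]}
        labels = {"idle": set(), "p1": {"c1"},
                  "p2": {"c2"}, "both": {"c1", "c2"}}
        step = lambda m, props: "bad" if {"c1", "c2"} <= props else m
        print(check_product("idle", trans, labels, "ok", step,
                            lambda m: m == "bad"))

    Liveness under fairness needs a cycle search (e.g., nested depth-first search) rather than this simple reachability check, but the product construction is the same.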

  13. Chinese-English Aviation and Space Dictionary

    DTIC Science & Technology

    1973-04-01

    [OCR-garbled excerpt of dictionary entries; the recoverable glosses include: simplified characters (individual characters with reduced numbers of strokes) eroding the traditional classification of a character; edge effect; boundary, border, bound, frontier; marginal checking; very-long-range missile; long-range bomber, global bomber, superbomber.]

  14. Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks

    DTIC Science & Technology

    2010-08-31

    Goddard I. SUMMARY OF CONTRIBUTIONS We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted ... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We ... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6

  15. Analysis of an all-digital maximum likelihood carrier phase and clock timing synchronizer for eight phase-shift keying modulation

    NASA Astrophysics Data System (ADS)

    Degaudenzi, Riccardo; Vanghi, Vieri

    1994-02-01

    An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder, the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample per symbol for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and the numerical results checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramer-Rao bound is also discussed. The mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
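
    A generic decision-directed carrier-phase loop of this family updates the phase estimate once per symbol from the rotated sample z_k and the decided symbol â_k (the standard structure, not the paper's exact discriminator):

        e_k = \operatorname{Im}\left\{ z_k\, e^{-j\hat{\theta}_k}\, \hat{a}_k^{*} \right\},
        \qquad
        \hat{\theta}_{k+1} = \hat{\theta}_k + \gamma\, e_k,

    with loop gain \gamma trading acquisition speed against steady-state jitter, the mean square error that is compared against the Cramer-Rao bound.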

  16. Leak test fixture and method for using same

    DOEpatents

    Hawk, Lawrence S.

    1976-01-01

    A method and apparatus are provided which are especially useful for leak testing seams such as an end closure or joint in an article. The test does not require an enclosed pressurized volume within the article or joint section to be leak checked. A flexible impervious membrane is disposed over an area of the seamed surfaces to be leak checked and sealed around the outer edges. A preselected vacuum is applied through an opening in the membrane to evacuate the area between the membrane and the surface being leak checked to essentially collapse the membrane to conform to the article surface or joined adjacent surfaces. A pressure differential is concentrated at the seam bounded by the membrane and only the seam experiences a pressure differential as air or helium molecules are drawn into the vacuum system through a leak in the seam. A helium detector may be placed in a vacuum exhaust line from the membrane to detect the helium. Alternatively, the vacuum system may be isolated at a preselected pressure and leaks may be detected by a subsequent pressure increase in the vacuum system.

  17. Detecting and (not) dealing with plagiarism in an engineering paper: beyond CrossCheck-a case study.

    PubMed

    Zhang, Xin-xin; Huo, Zhao-lin; Zhang, Yue-hong

    2014-06-01

    In papers in areas such as engineering and the physical sciences, figures, tables and formulae are the basic elements used to communicate the authors' core ideas, workings and results. As a computational text-matching tool, CrossCheck cannot work on these non-textual elements to detect plagiarism. Consequently, when comparing engineering or physical sciences papers, CrossCheck may return a low similarity index even when plagiarism has in fact taken place. A case of demonstrated plagiarism involving engineering papers with a low similarity index is discussed, and the editors' experiences and suggestions are given on how to tackle this problem. The case shows a lack of understanding of plagiarism by some authors and editors, and illustrates the difficulty of getting some editors and publishers to take appropriate action. Consequently, authors, journal editors, and reviewers, as well as research institutions, are all duty-bound not only to recognize the differences between ethical and unethical behavior in order to protect a healthy research environment, but also to maintain consistent ethical publishing standards.

  18. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy Disclosure and Notices C Appendix C to Part 229 Banks and... OF FUNDS AND COLLECTION OF CHECKS (REGULATION CC) Pt. 229, App. C Appendix C to Part 229—Model...

  19. The influence of social anxiety on the body checking behaviors of female college students.

    PubMed

    White, Emily K; Warren, Cortney S

    2014-09-01

    Social anxiety and eating pathology frequently co-occur. However, there is limited research examining the relationship between anxiety and body checking, aside from one study in which social physique anxiety partially mediated the relationship between body checking cognitions and body checking behavior (Haase, Mountford, & Waller, 2007). In an independent sample of 567 college women, we tested the fit of Haase and colleagues' foundational model but did not find evidence of mediation. Thus we tested the fit of an expanded path model that included eating pathology and clinical impairment. In the best-fitting path model (CFI=.991; RMSEA=.083), eating pathology and social physique anxiety positively predicted body checking, and body checking positively predicted clinical impairment. Therefore, women who endorse social physique anxiety may be more likely to engage in body checking behaviors and experience impaired psychosocial functioning.

  20. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  1. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  2. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  3. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in the Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0% to 97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process.
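
    For readers unfamiliar with the pass-rate metric used throughout, a one-dimensional global gamma index can be sketched as follows (a textbook-style illustration with 3%/2 mm criteria, not the commercial software used in the study):

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol=2.0):
            """Global gamma of an evaluated 1-D dose profile vs a reference.

            dose_tol: dose criterion as a fraction of the global maximum dose
            dist_tol: distance-to-agreement criterion in mm
            """
            dd = dose_tol * d_ref.max()          # global dose normalization
            # For each evaluated point, minimize over all reference points.
            dist2 = ((x_eval[:, None] - x_ref[None, :]) / dist_tol) ** 2
            dose2 = ((d_eval[:, None] - d_ref[None, :]) / dd) ** 2
            return np.sqrt((dist2 + dose2).min(axis=1))

        x = np.linspace(0, 100, 201)                 # positions in mm
        ref = np.exp(-((x - 50) / 20) ** 2)          # reference profile
        ev = 1.02 * np.exp(-((x - 50.5) / 20) ** 2)  # shifted, scaled copy
        gamma = gamma_1d(x, ref, x, ev)
        print(f"pass rate: {100 * np.mean(gamma <= 1):.1f}%")

    A point passes when gamma <= 1; the pass rate is the fraction of points passing, which is the percentage reported in the abstract.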

  4. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset is called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, to productize JPF, and to evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel's capabilities on customer applications.

  5. Program Model Checking as a New Trend

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center over the last 5 years. This includes work in software model checking, testing-like technologies, and static analysis.

  6. UTP and Temporal Logic Model Checking

    NASA Astrophysics Data System (ADS)

    Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo

    In this paper we give an additional perspective on the formal verification of programs through temporal logic model checking, which uses Hoare and He's Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective on temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduce the state explosion problem through the use of efficient data structures.

  7. 75 FR 28480 - Airworthiness Directives; Airbus Model A300 Series Airplanes; Model A300 B4-600, B4-600R, F4-600R...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... pressurise the hydraulic reservoirs, due to leakage of the Crissair reservoir air pressurisation check valves. * * * The leakage of the check valves was caused by an incorrect spring material. The affected Crissair check valves * * * were then replaced with improved check valves P/N [part number] 2S2794-1 * * *. More...

  8. Model Checking a Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2011-01-01

    This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period.

  9. Assessment of check-dam groundwater recharge with water-balance calculations

    NASA Astrophysics Data System (ADS)

    Djuma, Hakan; Bruggeman, Adriana; Camera, Corrado; Eliades, Marinos

    2017-04-01

    Studies on the enhancement of groundwater recharge by check-dams in arid and semi-arid environments mainly focus on deriving water infiltration rates from the check-dam ponding areas. This is usually achieved by applying simple water balance models, more advanced models (e.g., two-dimensional groundwater models) and field tests (e.g., infiltrometer tests or soil pit tests). Recharge behind the check-dam can be affected by the build-up of sediment as a result of erosion in the upstream watershed area. This natural process can increase the uncertainty in the estimates of the recharged water volume, especially for water balance calculations. Few water balance field studies of individual check-dams have been presented in the literature, and none of them presented the associated uncertainties of their estimates. The objectives of this study are i) to assess the effect of a check-dam on groundwater recharge from an ephemeral river; and ii) to assess annual sedimentation at the check-dam during a 4-year period. The study was conducted on a check-dam on the semi-arid island of Cyprus. Field campaigns were carried out to measure water flow, water depth and check-dam topography in order to establish check-dam water height, volume, evaporation, outflow and recharge relations. Topographic surveys were repeated at the end of consecutive hydrological years to estimate the sediment built up in the reservoir area of the check dam. Also, sediment samples were collected from the check-dam reservoir area for bulk-density analyses. To quantify the groundwater recharge, a water balance model was applied at two locations: at the check-dam and corresponding reservoir area, and at a 4-km stretch of the river bed without check-dam. Results showed that a check-dam with a storage capacity of 25,000 m3 was able to recharge the aquifer, over four years, with a total of 12 million m3 of the 42 million m3 of measured (or modelled) streamflow. Recharge from the analyzed 4-km long river section without check-dam was estimated to be 1 million m3. Upper and lower limits of prediction intervals were computed to assess the uncertainties of the results. The model was rerun with these values and resulted in recharge values of 0.4 m3 as a lower and 38 million m3 as an upper limit. The sediment survey in the check-dam reservoir area showed that the reservoir area was filled with 2,000 to 3,000 tons of sediment after one rainfall season. This amount of sediment corresponds to a sediment yield of 0.2 to 2 t ha-1 y-1 at the watershed level and reduces the check-dam storage capacity by approximately 10%. Results indicate that check-dams are valuable structures for increasing groundwater resources, but special attention should be given to soil erosion occurring in the upstream area and the resulting sediment build-up in the check-dam reservoir area. This study has received funding from the EU FP7 RECARE Project (GA 603498).
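
    The bookkeeping behind such water-balance estimates can be sketched per time step (a minimal sketch with constant, hypothetical infiltration and evaporation terms; the study's actual stage-volume and infiltration relations came from field measurements):

        def checkdam_balance(inflows, capacity, infil_rate, evap_rate):
            """Daily water balance of a check-dam pond.

            inflows:    daily streamflow volumes entering the pond (m3)
            capacity:   maximum pond storage (m3)
            infil_rate: recharge volume per day while water is ponded (m3)
            evap_rate:  evaporation volume per day while water is ponded (m3)
            Returns totals of recharge, evaporation, and spill over the dam.
            """
            storage = recharge = evap = spill = 0.0
            for q in inflows:
                storage += q
                if storage > capacity:        # excess overtops the dam
                    spill += storage - capacity
                    storage = capacity
                r = min(infil_rate, storage)  # recharge limited by storage
                storage -= r
                recharge += r
                e = min(evap_rate, storage)
                storage -= e
                evap += e
            return recharge, evap, spill

        print(checkdam_balance([0, 30000, 5000, 0, 0], 25000, 2000, 300))

    Sediment build-up effectively shrinks the capacity term over time, which is why the sedimentation survey matters for the recharge estimate and its uncertainty.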

  10. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocols. Combined with the model checker SPIN, the method makes it convenient to verify protocol properties. By applying several model-simplification strategies, protocols can be modeled efficiently and the state space of the model is reduced. Compared with previous literature, this approach achieves a higher degree of automation and better verification efficiency. Finally, based on the described method, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and can be applied to other authentication protocols.
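
    To illustrate the kind of exhaustive state-space search that a model checker such as SPIN automates, the sketch below performs explicit-state reachability checking for a violating state in a toy transition system; it is neither SPIN's algorithm nor the paper's PKM model.

        from collections import deque

        def find_violation(initial, successors, is_bad):
            """BFS over the state space; returns a counterexample path or None."""
            seen = {initial}
            queue = deque([(initial, [initial])])
            while queue:
                state, path = queue.popleft()
                if is_bad(state):
                    return path                          # counterexample trace
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [nxt]))
            return None                                  # the property holds

        # Toy system: three protocol flags that steps may set; "bad" if all are set.
        def successors(s):
            return [s[:i] + (1,) + s[i + 1:] for i in range(3) if s[i] == 0]

        print(find_violation((0, 0, 0), successors, lambda s: s == (1, 1, 1)))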

  11. An Integrated Environment for Efficient Formal Design and Verification

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.

  12. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually together with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time-step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain parts of the simulation in order to achieve stability and/or accuracy in the solution. Together, the fixed time-step constraint imposed by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
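
    A sketch of a single interior-node MOC update under the fixed time-step constraint dt = dx/a discussed above (frictionless form; parameter values are illustrative, not the case-study system).

        import numpy as np

        # One interior-node MOC update for water-hammer (frictionless, illustrative).
        # B = a/(g*A) relates head to flow along the characteristics; the fixed
        # time step dt = dx/a is the MOC constraint discussed in the abstract.
        a, g, A, dx = 1200.0, 9.81, 0.05, 10.0      # wave speed, gravity, area, reach
        B = a / (g * A)
        dt = dx / a                                  # fixed MOC time step

        def interior_update(H, Q, i):
            """New head/flow at node i from its neighbours one time level back."""
            cp = H[i - 1] + B * Q[i - 1]             # C+ characteristic from the left
            cm = H[i + 1] - B * Q[i + 1]             # C- characteristic from the right
            H_new = 0.5 * (cp + cm)
            Q_new = (cp - cm) / (2.0 * B)
            return H_new, Q_new

        H = np.full(5, 100.0); Q = np.full(5, 0.01); Q[2] = 0.0   # small disturbance
        print(interior_update(H, Q, 2))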

  13. A voice-actuated wind tunnel model leak checking system

    NASA Technical Reports Server (NTRS)

    Larson, William E.

    1989-01-01

    A computer program has been developed that improves the efficiency of wind tunnel model leak checking. The program uses a voice recognition unit to relay a technician's commands to the computer. The computer, after receiving a command, can respond to the technician via a voice response unit. Information about the model pressure orifice being checked is displayed on a gas-plasma terminal. On command, the program records up to 30 seconds of pressure data. After the recording is complete, the raw data and a straight line fit of the data are plotted on the terminal. This allows the technician to make a decision on the integrity of the orifice being checked. All results of the leak check program are stored in a database file that can be listed on the line printer for record keeping purposes or displayed on the terminal to help the technician find unchecked orifices. This program allows one technician to check a model for leaks instead of the two or three previously required.
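
    The decision step the technician makes from the recorded data can be sketched as a straight-line fit to the pressure trace; the decay threshold here is a made-up value, not the one used in the NASA system.

        import numpy as np

        # Illustrative leak decision from 30 s of recorded orifice pressure data:
        # fit a straight line and flag the orifice if the decay slope is too steep.
        def leak_check(times_s, pressures_psi, max_decay_psi_per_s=0.05):
            slope, intercept = np.polyfit(times_s, pressures_psi, 1)
            return {"slope": slope, "leaking": slope < -max_decay_psi_per_s}

        t = np.linspace(0.0, 30.0, 61)
        p = 14.7 - 0.08 * t + np.random.normal(0.0, 0.01, t.size)  # leaky orifice
        print(leak_check(t, p))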

  14. Use of posterior predictive checks as an inferential tool for investigating individual heterogeneity in animal population vital rates

    PubMed Central

    Chambert, Thierry; Rotella, Jay J; Higgs, Megan D

    2014-01-01

    The investigation of individual heterogeneity in vital rates has recently received growing attention among population ecologists. Individual heterogeneity in wild animal populations has been accounted for and quantified by including individually varying effects in models for mark–recapture data, but the real need for underlying individual effects to account for observed levels of individual variation has recently been questioned by the work of Tuljapurkar et al. (Ecology Letters, 12, 93, 2009) on dynamic heterogeneity. Model-selection approaches based on information criteria or Bayes factors have been used to address this question. Here, we suggest that, in addition to model-selection, model-checking methods can provide additional important insights to tackle this issue, as they allow one to evaluate a model's misfit in terms of ecologically meaningful measures. Specifically, we propose the use of posterior predictive checks to explicitly assess discrepancies between a model and the data, and we explain how to incorporate model checking into the inferential process used to assess the practical implications of ignoring individual heterogeneity. Posterior predictive checking is a straightforward and flexible approach for performing model checks in a Bayesian framework that is based on comparisons of observed data to model-generated replications of the data, where parameter uncertainty is incorporated through use of the posterior distribution. If discrepancy measures are chosen carefully and are relevant to the scientific context, posterior predictive checks can provide important information allowing for more efficient model refinement. We illustrate this approach using analyses of vital rates with long-term mark–recapture data for Weddell seals and emphasize its utility for identifying shortfalls or successes of a model at representing a biological process or pattern of interest. We show how posterior predictive checks can be used to strengthen inferences in ecological studies. We demonstrate the application of this method on analyses dealing with the question of individual reproductive heterogeneity in a population of Antarctic pinnipeds. PMID:24834335
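
    A minimal posterior predictive check, under the assumption of a simple binomial survival model with conjugate posterior draws; the discrepancy measure and data are illustrative, not the Weddell seal analysis.

        import numpy as np

        # For each posterior draw, simulate a replicated data set and compare a
        # discrepancy measure on replicated vs. observed data (Bayesian p-value).
        rng = np.random.default_rng(1)
        observed = rng.binomial(1, 0.6, size=200)        # stand-in for vital-rate data

        # Stand-in posterior for a survival probability (e.g. MCMC draws).
        posterior_p = rng.beta(1 + observed.sum(), 1 + (1 - observed).sum(), size=2000)

        def discrepancy(data):
            return data.mean()                           # choose something ecologically meaningful

        more_extreme = 0
        for p in posterior_p:
            replicated = rng.binomial(1, p, size=observed.size)
            if discrepancy(replicated) >= discrepancy(observed):
                more_extreme += 1
        print("posterior predictive p-value:", more_extreme / posterior_p.size)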

  15. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
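
    A toy version of checking a spatio-temporal property against time-series data, in the spirit described above; Mule's specification language and algorithms are far more general than this sketch.

        # Property over a time series of an emergent spatial measure (illustrative):
        # "the multicellular population area eventually exceeds 100 units and
        # afterwards never drops below 80".
        def eventually_then_always(series, rise=100.0, floor=80.0):
            for i, value in enumerate(series):
                if value > rise:
                    return all(v >= floor for v in series[i:])
            return False

        area_over_time = [40.0, 75.0, 110.0, 95.0, 120.0, 130.0]
        print(eventually_then_always(area_over_time))   # True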

  16. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.

  17. Model Checking Temporal Logic Formulas Using Sticker Automata

    PubMed Central

    Feng, Changwei; Wu, Huanmei

    2017-01-01

    The temporal logic model checking problem is an important and complex problem that is still far from fully resolved in the setting of DNA computing, especially for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because approaches for DNA-based model checking are still lacking. To address this challenge, a model checking method is proposed for checking the basic formulas of the above three temporal logics with DNA molecules. First, single-stranded DNA molecules of one type are employed to encode the Finite State Automaton (FSA) model of the given basic formula, yielding a sticker automaton. Single-stranded DNA molecules of a second type are employed to encode the given system model, yielding the input strings of the sticker automaton. Next, a series of biochemical reactions is conducted between the above two types of single-stranded DNA molecules, after which it can be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114
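
    The sticker automaton encodes an ordinary finite state automaton; the sketch below checks acceptance of an input string in software, purely as an illustration of the automaton that the DNA strands implement through hybridization reactions.

        # Software acceptance check for the FSA a sticker automaton encodes
        # (illustrative only; the paper performs this step with DNA reactions).
        def fsa_accepts(transitions, start, accepting, word):
            state = start
            for symbol in word:
                state = transitions.get((state, symbol))
                if state is None:
                    return False
            return state in accepting

        # Toy FSA for the obligation "a request is eventually acknowledged":
        # q0 = waiting, q1 = acknowledged.
        trans = {("q0", "req"): "q0", ("q0", "ack"): "q1", ("q1", "ack"): "q1"}
        print(fsa_accepts(trans, "q0", {"q1"}, ["req", "req", "ack"]))  # True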

  18. Foundations of the Bandera Abstraction Tools

    NASA Technical Reports Server (NTRS)

    Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby

    2003-01-01

    Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
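
    A hand-written version of the classical sign abstraction from abstract interpretation, of the kind Bandera attaches to integer data automatically; the encoding below is illustrative, not Bandera's.

        # Sign abstraction: shrink an integer domain to {neg, zero, pos} so the
        # model checker explores a much smaller state space (illustrative).
        NEG, ZERO, POS = "neg", "zero", "pos"

        def alpha(n):
            """Abstraction function: concrete int -> abstract token."""
            return ZERO if n == 0 else (POS if n > 0 else NEG)

        def abstract_add(a, b):
            """Abstract '+' returns the set of possible abstract results."""
            if ZERO in (a, b):
                return {a if b == ZERO else b}
            if a == b:
                return {a}                 # pos+pos=pos, neg+neg=neg
            return {NEG, ZERO, POS}        # pos+neg could be anything

        print(alpha(-7), abstract_add(alpha(3), alpha(-5)))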

  19. Full implementation of a distributed hydrological model based on check dam trapped sediment volumes

    NASA Astrophysics Data System (ADS)

    Bussi, Gianbattista; Francés, Félix

    2014-05-01

    Lack of hydrometeorological data is one of the most compelling limitations to the implementation of distributed environmental models. Mediterranean catchments, in particular, are characterised by high spatial variability of meteorological phenomena and soil characteristics, which may prevent the transfer of model calibrations from a fully gauged catchment to a totally or partially ungauged one. For this reason, new sources of data are required in order to extend the use of distributed models to non-monitored or poorly monitored areas. An important source of information regarding the hydrological and sediment cycle is represented by sediment deposits accumulated at the bottom of reservoirs. Since the 1960s, reservoir sedimentation volumes have been used as proxy data for the estimation of inter-annual total sediment yield rates or, in more recent years, as a reference measure of sediment transport for sediment model calibration and validation. Nevertheless, the possibility of using such data to constrain the calibration of a hydrological model has not been exhaustively investigated so far. In this study, the use of nine check-dam reservoir sedimentation volumes for hydrological and sedimentological model calibration and spatio-temporal validation was examined. Check dams are common structures in Mediterranean areas and are a potential source of spatially distributed information on both the hydrological and the sediment cycle. In this case study, the TETIS hydrological and sediment model was implemented in a medium-size Mediterranean catchment (Rambla del Poyo, Spain) by taking advantage of sediment deposits accumulated behind the check dams located in the catchment headwaters. Reservoir trap efficiency was taken into account by coupling the TETIS model with a pond trap-efficiency model. The model was calibrated by adjusting some of its parameters in order to reproduce the total sediment volume accumulated behind one check dam. Then, the model was spatially validated by obtaining the simulated sedimentation volume at the other eight check dams and comparing it to the observed sedimentation volumes. Lastly, the simulated water discharge at the catchment outlet was compared with observed water discharge records in order to check the behaviour of the hydrological sub-model. Model results provided highly valuable information concerning the spatial distribution of soil erosion and sediment transport. Spatial validation of the sediment sub-model provided very good results at seven of the nine check dams. This study shows that check dams can also be a useful tool for constraining hydrological model calibration, as model results agree with water discharge observations. In fact, the hydrological model validation at a downstream water flow gauge obtained a Nash-Sutcliffe efficiency of 0.8. This technique is applicable to all catchments with check dams, and only requires rainfall and temperature data and soil characteristics maps.

  20. Guidelines for Acute Toxicological Tests

    DTIC Science & Technology

    1979-11-01

    with a group of individuals being exposed is independence (usually assured by randomization) of the respondent. In the case of acute studies on plants...reasonable choice. If estimates of the EC50 are available and if the purpose of the proposed study is to check the median response rate Finney on page...appropriate test. Figure 2 shows the probit equation and the results on scale 1/X. In the transformed scale, EC50 = 59.83 with bounds of 54.53 and 63.92

  1. Memoized Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz

    2012-01-01

    This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
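
    A minimal trie keyed by branch decisions, in the spirit of Memoise's reuse of results across successive runs; this is an illustrative data structure, not the tool's implementation.

        # Trie keyed by branch decisions along an execution path (illustrative).
        class TrieNode:
            def __init__(self):
                self.children = {}      # branch outcome -> TrieNode
                self.result = None      # cached analysis result for this path

        class PathTrie:
            def __init__(self):
                self.root = TrieNode()

            def store(self, path, result):
                node = self.root
                for decision in path:               # e.g. ("x>0", True)
                    node = node.children.setdefault(decision, TrieNode())
                node.result = result

            def lookup(self, path):
                node = self.root
                for decision in path:
                    node = node.children.get(decision)
                    if node is None:
                        return None                 # not explored before: re-run
                return node.result                  # reuse the previous result

        trie = PathTrie()
        trie.store([("x>0", True), ("y<5", False)], "no bug on this path")
        print(trie.lookup([("x>0", True), ("y<5", False)]))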

  2. The Solid Rocket Motor Slag Population: Results of a Radar-based Regressive Statistical Evaluation

    NASA Technical Reports Server (NTRS)

    Horstman, Matthew F.; Xu, Yu-Lin

    2008-01-01

    Solid rocket motor (SRM) slag has been identified as a significant source of man-made orbital debris. The propensity of SRMs to generate particles of 100 μm and larger has caused concern regarding their contribution to the debris environment. Radar observation, rather than in-situ gathered evidence, is currently the only measurable source for the NASA/ODPO model of the on-orbit slag population. This simulated model includes the time evolution of the resultant orbital populations using a historical database of SRM launches, propellant masses, and estimated locations and times of tail-off. However, due to the small amount of observational evidence, there can be no direct comparison to check the validity of this model. Rather than using the assumed population developed from purely historical and physical assumptions, a regression-based approach was used, utilizing the populations observed by the Haystack radar from 1996 to the present. The estimated trajectories from the historical model of slag sources, and the corresponding plausible detections by the Haystack radar, were identified. Comparisons with observational data from the ensuing years were made, and the SRM model was altered with respect to the size and mass production of slag particles to reflect the historical data obtained. The result is a model SRM population that fits within the bounds of the observed environment.

  3. Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Lomuscio, Alessio

    2004-01-01

    We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.

  4. Anticaries Potential of Low Fluoride Dentifrices Found in The Brazilian Market.

    PubMed

    Ortiz, Adriana de Cássia; Tenuta, Livia Maria Andaló; Tabchoury, Cínthia Pereira Machado; Cury, Jaime Aparecido

    2016-01-01

    Low-fluoride (F) dentifrices (<600 µg F/g) are widely available worldwide, but evidence to recommend the use of such dentifrices, with either regular or improved formulations, is still lacking. Therefore, the aim of this study was to evaluate the anticaries potential of low-F dentifrices found in the Brazilian market, using a validated and tested pH-cycling model. Enamel blocks were selected by surface hardness (SH) and randomized into four treatment groups (n=12): non-F dentifrice (negative control), low-F dentifrice (500 μg F/g), low-F acidulated dentifrice (550 μg F/g) and 1,100 μg F/g dentifrice (positive control). The blocks were subjected to pH-cycling regimen for 8 days and were treated 2x/day with dentifrice slurries prepared in water (1:3, w/v). The pH of the slurries was checked, and only the acidulated one had low pH. After the pH cycling, SH was again determined and the percentage of surface hardness loss was calculated as indicator of demineralization. Loosely- and firmly-bound F concentrations in enamel were also determined. The 1,100 μg F/g dentifrice was more effective than the low-F ones to reduce enamel demineralization and was the only one that differed from the non-F (p<0.05). All F dentifrices formed higher concentration of loosely-bound F on enamel than the non-F (p<0.05), but the 1,100 μg F/g was the only one that differed from the non-F in the ability to form firmly-bound F. The findings suggest that the low-F dentifrices available in the Brazilian market, irrespective of their formulation, do not have anticaries potential.

  5. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction, and common errors. Section 4 describes the WORLD system in detail, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of the WORLD matrix structure. It provides an overview, describes how regional definitions are controlled, and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.

  6. A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2011-01-01

    This paper presents a self-stabilizing distributed clock synchronization protocol in the absence of faults in the system. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. We present an outline of a deductive proof of the correctness of the protocol. A bounded model of the protocol was mechanically verified for a variety of topologies. Results of the mechanical proof of the correctness of the protocol are provided. The model checking results have verified the correctness of the protocol as they apply to the networks with unidirectional and bidirectional links. In addition, the results confirm the claims of determinism and linear convergence. As a result, we conjecture that the protocol solves the general case of this problem. We also present several variations of the protocol and discuss that this synchronization protocol is indeed an emergent system.

  7. Compositional schedulability analysis of real-time actor-based systems.

    PubMed

    Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan

    2017-01-01

    We present an extension of the actor model with real-time, including deadlines associated with messages, and explicit application-level scheduling policies, e.g., "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach, based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.
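
    A toy non-preemptive earliest-deadline-first simulation that checks whether every task meets its deadline; the paper's actual analysis uses timed automata and the Uppaal model checker, not a simulation like this.

        import heapq

        # Non-preemptive EDF simulation: dispatch the ready task with the
        # earliest deadline and report any deadline miss (illustrative).
        def edf_schedulable(tasks):
            """tasks: list of (arrival_time, duration, deadline)."""
            ready, now, i = [], 0.0, 0
            tasks = sorted(tasks)
            while ready or i < len(tasks):
                if not ready:
                    now = max(now, tasks[i][0])      # idle until the next arrival
                while i < len(tasks) and tasks[i][0] <= now:
                    arrival, duration, deadline = tasks[i]
                    heapq.heappush(ready, (deadline, duration))
                    i += 1
                deadline, duration = heapq.heappop(ready)
                now += duration                      # run earliest-deadline task
                if now > deadline:
                    return False                     # deadline miss
            return True

        print(edf_schedulable([(0.0, 2.0, 5.0), (1.0, 2.0, 4.0), (2.0, 1.0, 9.0)]))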

  8. Advanced Numerical Model for Irradiated Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giorla, Alain B.

    In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future work, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches the one found in experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some are unknown, a sensitivity analysis must be carried out to provide lower and upper bounds of the material behaviour. Finally, the model can be used as a basis to formulate a macroscopic material model for concrete subject to irradiation, which later can be used in structural analyses to estimate the structural impact of irradiation on nuclear power plants.

  9. ANSYS duplicate finite-element checker routine

    NASA Technical Reports Server (NTRS)

    Ortega, R.

    1995-01-01

    An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine developed is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A space shuttle main engine alternate turbopump development high pressure oxidizer turbopump finite-element model check using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.
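
    The core of identical-duplicate detection can be sketched as a node-set comparison; this illustration ignores the floating-element and face-intersection checks the routine also performs.

        # Two elements are identical duplicates if they reference the same set
        # of node IDs, regardless of node ordering (illustrative sketch).
        def find_duplicate_elements(elements):
            """elements: dict of element_id -> tuple of node IDs."""
            seen, duplicates = {}, []
            for eid, nodes in elements.items():
                key = frozenset(nodes)              # node order does not matter
                if key in seen:
                    duplicates.append((seen[key], eid))
                else:
                    seen[key] = eid
            return duplicates

        mesh = {1: (10, 11, 12, 13), 2: (20, 21, 22, 23), 3: (13, 12, 11, 10)}
        print(find_duplicate_elements(mesh))        # [(1, 3)]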

  10. Pharmacist and Technician Perceptions of Tech-Check-Tech in Community Pharmacy Practice Settings.

    PubMed

    Frost, Timothy P; Adams, Alex J

    2018-04-01

    Tech-check-tech (TCT) is a practice model in which pharmacy technicians with advanced training can perform final verification of prescriptions that have been previously reviewed for appropriateness by a pharmacist. Few states have adopted TCT, in part because of the common view that this model is controversial among members of the profession. This article aims to summarize the existing research on pharmacist and technician perceptions of community pharmacy-based TCT. A literature review was conducted using MEDLINE (January 1990 to August 2016) and Google Scholar (January 1990 to August 2016) using the terms "tech* and check," "tech-check-tech," "checking technician," and "accuracy checking tech*." Of the 7 studies identified, we found general agreement among both pharmacists and technicians that TCT in community pharmacy settings can be safely performed. This agreement persisted in studies of theoretical TCT models and in studies assessing participants in actual community-based TCT models. Pharmacists who had previously worked with a checking technician were generally more favorable toward TCT. Both pharmacists and technicians in community pharmacy settings generally perceived TCT to be safe, in both theoretical surveys and in surveys following actual TCT demonstration projects. These perceptions of safety align well with the actual outcomes achieved in community pharmacy TCT studies.

  11. Constraints on the ωπ form factor from analyticity and unitarity.

    PubMed

    Ananthanarayan, B; Caprini, I; Kubis, B

    Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic ωπ form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the ωπ form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around [Formula: see text].

  12. Boudot's Range-Bounded Commitment Scheme Revisited

    NASA Astrophysics Data System (ADS)

    Cao, Zhengjun; Liu, Lihua

    Checking whether a committed integer lies in a specific interval has many cryptographic applications. In Eurocrypt'98, Chan et al. proposed an instantiation (the CFT Proof). Based on CFT, Boudot presented a popular range-bounded commitment scheme in Eurocrypt 2000. Both the CFT Proof and the Boudot Proof are based on the encryption E(x, r) = g^x h^r mod n, where n is an RSA modulus whose factorization is unknown by the prover. They did not use a single base, as is usual, and thus incur an increase in cost. In this paper, we show that it suffices to adopt a single base; the cost of the modified Boudot Proof is about half that of the original scheme. Moreover, the key restriction in the original scheme, i.e., that both the discrete logarithm of g in base h and the discrete logarithm of h in base g are unknown by the prover, which is a potential threat to the Boudot Proof, is completely removed.
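
    A toy version of a commitment of the form E(x, r) = g^x h^r mod n, with deliberately tiny parameters; a real instantiation requires a large RSA modulus whose factorization, and the mutual discrete logarithms of g and h, are unknown to the prover.

        # Toy two-base commitment (illustrative; parameters far too small
        # for any security, chosen only to show the algebraic form).
        def commit(x, r, g, h, n):
            return (pow(g, x, n) * pow(h, r, n)) % n

        n, g, h = 3337, 5, 7            # toy modulus and bases
        x, r = 42, 1234                 # committed value and blinding randomness
        c = commit(x, r, g, h, n)
        print(c, c == commit(x, r, g, h, n))   # commitment is reproducible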

  13. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
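
    The clamping step itself is simple to sketch: bound each environmental predictor of a new location to the minimum and maximum seen in the training data; the predictor values below are illustrative.

        import numpy as np

        # Minimal "clamping" of model extrapolations: bound each predictor to
        # the training-data envelope before prediction (illustrative; CART and
        # Maxent each have their own internal handling).
        def clamp_predictors(X_new, X_train):
            lo, hi = X_train.min(axis=0), X_train.max(axis=0)
            return np.clip(X_new, lo, hi)

        X_train = np.array([[10.0, 200.0], [25.0, 800.0], [18.0, 500.0]])  # temp, rain
        X_new = np.array([[30.0, 100.0]])          # outside the training envelope
        print(clamp_predictors(X_new, X_train))    # [[25. 200.]]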

  14. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  15. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
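
    The paper frames both 1-norm minimizations as non-negative least-squares (NNLS) problems; the sketch below shows only the NNLS building block on a toy linear system, not the MT formulation.

        import numpy as np
        from scipy.optimize import nnls

        # NNLS building block: fit a non-negative "model" to toy linear data.
        rng = np.random.default_rng(0)
        G = rng.normal(size=(20, 5))                    # toy forward operator
        m_true = np.array([0.0, 2.0, 0.0, 0.5, 0.0])    # sparse, non-negative model
        d = G @ m_true + rng.normal(scale=0.01, size=20)

        m_est, residual = nnls(G, d)
        print(np.round(m_est, 3), round(residual, 4))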

  16. A Note on a Sampling Theorem for Functions over GF(q)^n Domain

    NASA Astrophysics Data System (ADS)

    Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain, in which case the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has previously been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code, yet the relation between the parity check matrix of a linear code and distinct error vectors has not been established, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem for the relation between the parity check matrix of a linear code and any distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
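
    For reference, the classical statement being generalized can be written as follows (the standard Nyquist-Shannon form, not the paper's GF(q)^n version):

        % Classical sampling theorem: a function bandlimited to [-W, W] is
        % reconstructed exactly from samples taken at rate 2W, where
        % sinc(x) = sin(pi x)/(pi x).
        \[
          f(t) \;=\; \sum_{k=-\infty}^{\infty} f\!\left(\frac{k}{2W}\right)
          \operatorname{sinc}\bigl(2Wt - k\bigr),
          \qquad \operatorname{supp}\hat{f} \subseteq [-W,\, W].
        \]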

  17. Spherically symmetric vacuum in covariant F(T) = T + (α/2)T^2 + O(T^γ) gravity theory

    NASA Astrophysics Data System (ADS)

    DeBenedictis, Andrew; Ilijić, Saša

    2016-12-01

    Recently, a fully covariant version of the theory of F(T) torsion gravity has been introduced by M. Krššák and E. Saridakis [Classical Quantum Gravity 33, 115009 (2016)]. In covariant F(T) gravity, the Schwarzschild solution is not a vacuum solution for F(T) ≠ T, and therefore determining the spherically symmetric vacuum is an important open problem. Within the covariant framework, we perturbatively solve the spherically symmetric vacuum gravitational equations around the Schwarzschild solution for the scenario with F(T) = T + (α/2)T^2, representing the dominant terms in theories governed by Lagrangians analytic in the torsion scalar. From this, we compute the perihelion shift correction to solar system planetary orbits as well as perturbative gravitational effects near neutron stars. This allows us to set an upper bound on the magnitude of the coupling constant, α, which governs deviations from general relativity. We find the bound on this nonlinear torsion coupling constant by specifically considering the uncertainty in the perihelion shift of Mercury. We also analyze a bound from a similar comparison with the periastron orbit of the binary pulsar PSR J0045-7319 as an independent check for consistency. Setting bounds on the dominant nonlinear coupling is important in determining if other effects in the Solar System or greater universe could be attributable to nonlinear torsion.

  18. Model Checking the Remote Agent Planner

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Muscettola, Nicola; Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This work tackles the problem of using model checking to verify the HSTS (Heuristic Scheduling Testbed System) planning system. HSTS is the planner and scheduler of the Remote Agent autonomous control system deployed on Deep Space One (DS1). Model checking allows for the verification of domain models as well as planning entries. We have chosen the real-time model checker UPPAAL for this work. We start by motivating our work in the introduction. Then we give a brief description of HSTS and UPPAAL. After that, we give a sketch for the mapping of HSTS models into UPPAAL and we present samples of plan model properties one may want to verify.

  19. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
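
    The comparison can be pictured as one harness driven by two strategies: exhaustive enumeration of all nondeterministic choices versus random sampling of them; the toy system below is illustrative, not the Mars rover module or the PROMELA harness.

        import itertools, random

        CHOICES = [(0, 1), (0, 1), (0, 1, 2)]        # per-step nondeterministic options

        def run_system(inputs):
            return sum(inputs) == 4                  # stand-in for "assertion violated"

        def exhaustive():
            """Model-checking style: enumerate every input combination."""
            return [c for c in itertools.product(*CHOICES) if run_system(c)]

        def random_testing(trials=100, seed=7):
            """Random-testing style: sample input combinations."""
            rng = random.Random(seed)
            tried = (tuple(rng.choice(opts) for opts in CHOICES) for _ in range(trials))
            return {c for c in tried if run_system(c)}

        print(exhaustive())          # finds every violating input, e.g. (1, 1, 2)
        print(random_testing())      # may find some, with no exhaustiveness guarantee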

  20. CheckMATE 2: From the model to the limit

    NASA Astrophysics Data System (ADS)

    Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten

    2017-12-01

    We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.

  1. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  2. Posterior Predictive Model Checking in Bayesian Networks

    ERIC Educational Resources Information Center

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  3. Analyzing the cost of screening selectee and non-selectee baggage.

    PubMed

    Virta, Julie L; Jacobson, Sheldon H; Kobza, John E

    2003-10-01

    Determining how to effectively operate security devices is as important to overall system performance as developing more sensitive security devices. In light of recent federal mandates for 100% screening of all checked baggage, this research studies the trade-offs between screening only selectee checked baggage and screening both selectee and non-selectee checked baggage for a single baggage screening security device deployed at an airport. This trade-off is represented using a cost model that incorporates the cost of the baggage screening security device, the volume of checked baggage processed through the device, and the outcomes that occur when the device is used. The cost model captures the cost of deploying, maintaining, and operating a single baggage screening security device over a one-year period. The study concludes that as excess baggage screening capacity is used to screen non-selectee checked bags, the expected annual cost increases, the expected annual cost per checked bag screened decreases, and the expected annual cost per expected number of threats detected in the checked bags screened increases. These results indicate that the marginal increase in security per dollar spent is significantly lower when non-selectee checked bags are screened than when only selectee checked bags are screened.
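
    The structure of such an expected-annual-cost calculation can be sketched as follows; all parameter values are made up for illustration and are not those of the published model.

        # Expected annual cost of one screening device: deployment plus
        # per-bag operation plus the expected cost of undetected threats.
        # Every number below is hypothetical.
        def expected_annual_cost(device_cost, bags_per_year, cost_per_bag,
                                 p_threat, p_detect, cost_missed_threat):
            screening = bags_per_year * cost_per_bag
            missed = bags_per_year * p_threat * (1.0 - p_detect) * cost_missed_threat
            return device_cost + screening + missed

        selectees_only = expected_annual_cost(1e6, 5e5, 1.0, 1e-6, 0.9, 1e7)
        everyone = expected_annual_cost(1e6, 5e6, 1.0, 1e-7, 0.9, 1e7)
        print(selectees_only, everyone, everyone / 5e6)   # incl. cost per bag screened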

  4. COVERT: A Framework for Finding Buffer Overflows in C Programs via Software Verification

    DTIC Science & Technology

    2010-08-01

    is greater than the allocated size of B. In the case of a type-safe language or a language with runtime bounds checking (such as Java), an overflow...leads either to a (compile-time) type error or a (runtime) exception. In such languages , a buffer overflow can lead to a denial of service attack (i.e...of current and legacy software is written in unsafe languages (such as C or C++) that allow buffers to be overflowed with impunity. For reasons such as

  5. KSC-04pd1453

    NASA Image and Video Library

    2004-07-06

    KENNEDY SPACE CENTER, FLA. - Workers in the mobile service tower on Pad 17-B, Cape Canaveral Air Force Station, check the progress of the Boeing Delta II Heavy second-stage engine as it descends toward the first stage. The Delta is the launch vehicle for the MESSENGER (Mercury Surface, Space Environment, Geochemistry and Ranging) spacecraft, scheduled to lift off Aug. 2. Bound for Mercury, the spacecraft is expected to reach orbit around the planet in March 2011. MESSENGER was built for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md.

  6. 40 CFR 86.327-79 - Quench checks; NOX analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Quench checks; NOX analyzer. (a) Perform the reaction chamber quench check for each model of high vacuum reaction chamber analyzer prior to initial use. (b) Perform the reaction chamber quench check for each new analyzer that has an ambient pressure or “soft vacuum” reaction chamber prior to initial use. Additionally...

  7. 40 CFR 86.327-79 - Quench checks; NOX analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Quench checks; NOX analyzer. (a) Perform the reaction chamber quench check for each model of high vacuum reaction chamber analyzer prior to initial use. (b) Perform the reaction chamber quench check for each new analyzer that has an ambient pressure or “soft vacuum” reaction chamber prior to initial use. Additionally...

  8. Exact solution for the quench dynamics of a nested integrable system

    NASA Astrophysics Data System (ADS)

    Mestyán, Márton; Bertini, Bruno; Piroli, Lorenzo; Calabrese, Pasquale

    2017-08-01

    Integrable models provide an exact description for a wide variety of physical phenomena. For example, nested integrable systems contain different species of interacting particles with a rich phenomenology in their collective behavior, which is the origin of the unconventional phenomenon of spin-charge separation. So far, however, most of the theoretical work in the study of non-equilibrium dynamics of integrable systems has focused on models with an elementary (i.e. not nested) Bethe ansatz. In this work we explicitly investigate quantum quenches in nested integrable systems, by generalizing the application of the quench action approach. Specifically, we consider the spin-1 Lai-Sutherland model, described, in the thermodynamic limit, by the theory of two different species of Bethe-ansatz particles, each one forming an infinite number of bound states. We focus on the situation where the quench dynamics starts from a simple matrix product state for which the overlaps with the eigenstates of the Hamiltonian are known. We fully characterize the post-quench steady state and perform several consistency checks for the validity of our results. Finally, we provide predictions for the propagation of entanglement and mutual information after the quench, which can be used as a signature of the quasi-particle content of the model.

  9. Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking

    NASA Technical Reports Server (NTRS)

    Turgeon, Gregory; Price, Petra

    2010-01-01

    A feasibility study was performed on a representative aerospace system to determine the following: (1) the benefits and limitations of using SCADE, a commercially available tool for model checking, in comparison to using a proprietary tool that was studied previously [1]; and (2) metrics for performing the model checking and for assessing the findings. This study was performed independently of the development task by a group unfamiliar with the system, providing a fresh, external perspective free from development bias.

  10. MPST Software: grl_pef_check

    NASA Technical Reports Server (NTRS)

    Call, Jared A.; Kwok, John H.; Fisher, Forest W.

    2013-01-01

    This innovation is a tool used to verify and validate spacecraft sequences at the predicted events file (PEF) level for the GRAIL (Gravity Recovery and Interior Laboratory, see http://www.nasa.gov/mission_pages/grail/main/index.html) mission as part of the Multi-Mission Planning and Sequencing Team (MPST) operations process, to reduce the possibility of errors. This tool is used to catch any sequence-related errors or issues immediately after the seqgen modeling, in order to streamline downstream processes. This script verifies and validates the seqgen modeling for the GRAIL MPST process. A PEF is provided as input, and dozens of checks are performed on it to verify and validate the command products, including command content, command ordering, flight-rule violations, modeling boundary consistency, resource limits, and ground commanding consistency. By performing as many checks as early in the process as possible, grl_pef_check streamlines the MPST task of generating GRAIL command and modeled products on an aggressive schedule. By enumerating each check being performed, and clearly stating the criteria and assumptions made at each step, grl_pef_check can be used as a manual checklist as well as an automated tool. This helper script was written with a focus on giving users the information they need to evaluate a sequence quickly and efficiently, while keeping them informed and active in the overall sequencing process. grl_pef_check verifies and validates the modeling and sequence content before any more effort is invested in the build. There are dozens of items in a modeling run that need to be checked, which is a time-consuming and error-prone task. Currently, no other software exists that provides this functionality. Compared to a manual process, this script reduces human error and saves considerable man-hours by automating and streamlining the mission planning and sequencing task for the GRAIL mission.
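
    The general shape of such a PEF validation pass can be sketched as a list of named checks run over parsed events; the check names and event fields below are hypothetical, not grl_pef_check's actual rules.

        # Hypothetical PEF-style validation pass: run named checks over parsed
        # events and report each violation (illustrative only).
        def check_command_ordering(events):
            times = [e["time"] for e in events]
            return [] if times == sorted(times) else ["commands out of time order"]

        def check_resource_limit(events, limit=100.0):
            level = 0.0
            for e in events:
                level += e.get("resource_delta", 0.0)
                if level > limit:
                    return [f"resource limit exceeded at t={e['time']}"]
            return []

        def run_all_checks(events):
            problems = []
            for check in (check_command_ordering, check_resource_limit):
                problems.extend(check(events))
            return problems or ["all checks passed"]

        pef = [{"time": 0, "resource_delta": 40.0}, {"time": 5, "resource_delta": 70.0}]
        print(run_all_checks(pef))   # resource limit exceeded at t=5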

  11. Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking

    NASA Technical Reports Server (NTRS)

    Cavada, Roberto; Pecheur, Charles

    2003-01-01

    This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of the major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned in order to obtain good performance when using Model Checking tools. Diagnosability is performed on large and structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied to those test cases in order to produce comparison tables. Furthermore, a comparison of several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and the results have been highlighted. Finally, section 6 draws some conclusions and outlines future lines of research.
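
    The reduction mentioned here is commonly phrased via a "twin plant": a fault is diagnosable exactly when two copies of the plant, synchronized on observable outputs, admit no pair of runs with identical observations but different fault status. The report applies symbolic (BDD-based) model checking to this problem; the following is only a toy explicit-state sketch of the idea, on an invented three-state plant, with a bounded search horizon standing in for the full fixpoint computation:

        # Toy explicit-state sketch of the twin-plant reduction of diagnosability.
        from collections import deque

        # transitions: state -> list of (observable_label, is_fault_step, next_state)
        plant = {
            "s0": [("a", False, "s1"), ("a", True, "s2")],
            "s1": [("b", False, "s1")],
            "s2": [("b", False, "s2")],
        }

        def diagnosable(plant, init="s0", horizon=20):
            # Pair states of two plant copies, synchronized on observable labels.
            start = (init, False, init, False)    # (state1, fault1, state2, fault2)
            seen, queue = {start}, deque([(start, 0)])
            while queue:
                (p, f1, q, f2), depth = queue.popleft()
                if f1 != f2:
                    # Critical pair: identical observations, different fault status.
                    # (A full check requires the ambiguity to persist indefinitely;
                    # flagging the first such pair is a simplification here.)
                    return False
                if depth >= horizon:
                    continue
                for (o1, e1, p2) in plant[p]:
                    for (o2, e2, q2) in plant[q]:
                        if o1 == o2:              # synchronize on observations
                            nxt = (p2, f1 or e1, q2, f2 or e2)
                            if nxt not in seen:
                                seen.add(nxt)
                                queue.append((nxt, depth + 1))
            return True

        print(diagnosable(plant))   # False: this toy fault is not diagnosable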

  12. Non-equilibrium dog-flea model

    NASA Astrophysics Data System (ADS)

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check of proposed non-equilibrium theories of statistical mechanics. The model is developed in detail and then applied to four recent models for non-equilibrium statistical mechanics. Comparison of the dog-flea solution with these different models allows us to check their claims and provides a concrete example against which the theoretical models can be tested.
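
    For readers unfamiliar with the model: in the classic (closed) dog-flea model of the Ehrenfests, N fleas sit on two dogs, and at each step a randomly chosen flea jumps to the other dog; the open variant studied here additionally exchanges fleas with the environment. A minimal sketch of the closed model, showing relaxation toward equal occupation, might read:

        # Minimal sketch of the closed dog-flea (Ehrenfest urn) model; the paper's
        # open variant, which exchanges fleas with a reservoir, is not shown.
        import random

        def dog_flea(n_fleas=50, n_steps=2000, seed=0):
            rng = random.Random(seed)
            on_dog_a = n_fleas          # start with every flea on dog A
            history = []
            for _ in range(n_steps):
                # the jumping flea sits on dog A with probability on_dog_a / n_fleas
                if rng.random() < on_dog_a / n_fleas:
                    on_dog_a -= 1
                else:
                    on_dog_a += 1
                history.append(on_dog_a)
            return history

        traj = dog_flea()
        print(traj[:10], "-> long-time mean:", sum(traj[-500:]) / 500)  # relaxes toward N/2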

  13. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework, and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
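
    For context, the simplest non-circular assume-guarantee rule used in this line of work has the shape below, where the assumption A is an automaton that the learning algorithm constructs incrementally (the article's framework also admits other rules):

        \[
        \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
              \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
             {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
        \]

    Read: if component M_1 satisfies property P under assumption A, and M_2 unconditionally satisfies A, then the composed system M_1 || M_2 satisfies P.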

  14. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts' diagnoses, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.

  15. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  16. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to check. An experimental evaluation of the approach is presented through an application developed in the ArgoUML IDE.

  17. Model Checking - My 27-Year Quest to Overcome the State Explosion Problem

    NASA Technical Reports Server (NTRS)

    Clarke, Ed

    2009-01-01

    Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine if the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state explosion problem.

  18. Robust Bounded Influence Tests in Linear Models

    DTIC Science & Technology

    1988-11-01

    Sensitivity analysis and bounded influence estimation. In: Evaluation of Econometric Models, J. Kmenta and J.B. Ramsey (eds.), Academic Press, New York. November 1988. Robust Bounded Influence Tests in Linear Models. Marianthi Markatou, The University of Iowa, and Thomas P. Hettmansperger, The Pennsylvania State University.

  19. Synthetic velocity gradient map of the San Francisco Bay region, California, supports use of average block velocities to estimate fault slip rate where effective locking depth is small relative to inter-fault distance

    NASA Astrophysics Data System (ADS)

    Graymer, R. W.; Simpson, R. W.

    2014-12-01

    Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.

  20. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
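
    As a generic illustration of the posterior predictive check machinery used here — with a stand-in Bernoulli model rather than the paper's constrained latent class model — one draws replicated data sets from the posterior predictive distribution and compares a discrepancy statistic on them against the observed data:

        # Generic posterior predictive check sketch; the Bernoulli "model" and all
        # numbers are stand-ins, not the paper's constrained latent class model.
        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.binomial(1, 0.35, size=200)      # pretend item scores

        # Pretend posterior draws for the success probability
        # (e.g., output of a Gibbs sampler; here a conjugate Beta posterior).
        posterior_theta = rng.beta(1 + observed.sum(),
                                   1 + (1 - observed).sum(), size=1000)

        def statistic(x):
            return x.mean()                             # discrepancy statistic

        t_obs = statistic(observed)
        t_rep = np.array([statistic(rng.binomial(1, th, size=observed.size))
                          for th in posterior_theta])
        print("posterior predictive p-value:", (t_rep >= t_obs).mean())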

  1. Construct validity and reliability of the Single Checking Administration of Medications Scale.

    PubMed

    O'Connell, Beverly; Hawkins, Mary; Ockerby, Cherene

    2013-06-01

    Research indicates that single checking of medications is as safe as double checking; however, many nurses are averse to independently checking medications. To assist with the introduction and use of single checking, a measure of nurses' attitudes, the thirteen-item Single Checking Administration of Medications Scale (SCAMS) was developed. We examined the psychometric properties of the SCAMS. Secondary analyses were conducted on data collected from 503 nurses across a large Australian health-care service. Analyses using exploratory and confirmatory factor analyses supported by structural equation modelling resulted in a valid twelve-item SCAMS containing two reliable subscales, the nine-item Attitudes towards single checking and three-item Advantages of single checking subscales. The SCAMS is recommended as a valid and reliable measure for monitoring nurses' attitudes to single checking prior to introducing single checking medications and after its implementation. © 2013 Wiley Publishing Asia Pty Ltd.

  2. Reionization in sterile neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Frenk, Carlos S.; Hou, Jun; Lacey, Cedric G.; Lovell, Mark R.

    2016-12-01

    We investigate the process of reionization in a model in which the dark matter is a warm elementary particle such as a sterile neutrino. We focus on models that are consistent with the dark matter decay interpretation of the recently detected line at 3.5 keV in the X-ray spectra of galaxies and clusters. In warm dark matter models, the primordial spectrum of density perturbations has a cut-off on the scale of dwarf galaxies. Structure formation therefore begins later than in the standard cold dark matter (CDM) model and very few objects form below the cut-off mass scale. To calculate the number of ionizing photons, we use the Durham semi-analytic model of galaxy formation, GALFORM. We find that even the most extreme 7 keV sterile neutrino we consider is able to reionize the Universe early enough to be compatible with the bounds on the epoch of reionization from Planck. This, perhaps surprising, result arises from the rapid build-up of high redshift galaxies in the sterile neutrino models, which is also reflected in a faster evolution of their far-UV luminosity function between z = 10 and z = 7 than in CDM. The dominant sources of ionizing photons are systematically more massive in the sterile neutrino models than in CDM. As a consistency check on the models, we calculate the present-day luminosity function of satellites of Milky Way-like galaxies. When the satellites recently discovered in the Dark Energy Survey are taken into account, strong constraints are placed on viable sterile neutrino models.

  3. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension.

    PubMed

    Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru

    2009-04-27

    Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. proved the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the use of the model checking approach. A novel method of modeling and simulating biological systems with the model checking approach is proposed based on hybrid functional Petri net with extension (HFPNe) as a framework dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules - Rule I and Rule II - to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully carried out by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Such hybrid lineages are hard to capture with a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to the high coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). More insights are also suggested. The quantitative simulation-based model checking approach is a useful means of providing valuable biological insights and better understandings of biological systems and observation data that may be hard to capture with the qualitative one.

  4. Design of Installing Check Dam Using RAMMS Model in Seorak National Park of South Korea

    NASA Astrophysics Data System (ADS)

    Jun, K.; Tak, W.; JUN, B. H.; Lee, H. J.; KIM, S. D.

    2016-12-01

    As more than 64% of the land in South Korea is mountainous, many regions are exposed to the danger of landslides and debris flow. Because it is important to understand the behavior of debris flow in mountainous terrain, various methods and models based on mathematical concepts are being presented and developed. The purpose of this study is to investigate regions that experienced debris flow due to the typhoon Ewiniar and to perform numerical modeling for the design and layout of check dams to reduce the damage caused by debris flow. For the numerical modeling, on-site measurement of the research area was conducted, including topographic investigation, research on the bridges in the downstream, and precision LiDAR 3D scanning to compose the basic data for the numerical model. The numerical simulation of this study was performed using the RAMMS (Rapid Mass Movements Simulation) model for the analysis of the debris flow. The model was applied to conditions with the check dam installed in the upstream, midstream, and downstream sections. Considering the reduction effect on the debris flow, the expansion of the debris flow, and the influence on the bridges in the downstream, a proper location for the check dam was designated. The results of the present numerical model showed that when the check dam was installed in the downstream section, 50 m above the bridge, the reduction effect on the debris flow was higher than when the check dam was installed in other sections. Key words: Debris flow, LiDAR, Check dam, RAMMS. Acknowledgements: This research was supported by a grant [MPSS-NH-2014-74] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.

  5. Model building strategy for logistic regression: purposeful selection.

    PubMed

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
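
    As a sketch of the likelihood ratio test step (the article works in R; this hypothetical example uses Python's statsmodels on simulated data), one fits the model with and without the candidate variable and refers twice the log-likelihood difference to a chi-squared distribution:

        # Likelihood ratio test for dropping one covariate; data are simulated.
        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(1)
        n = 500
        x1 = rng.normal(size=n)
        x2 = rng.normal(size=n)                  # candidate for deletion
        y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x1 + 0.0 * x2))))

        full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
        reduced = sm.Logit(y, sm.add_constant(x1)).fit(disp=0)

        lr = 2 * (full.llf - reduced.llf)        # likelihood ratio statistic
        p = stats.chi2.sf(lr, df=1)              # one parameter dropped
        print(f"LR = {lr:.3f}, p = {p:.3f}")     # large p: deleting x2 barely hurts fit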

  6. HiVy automated translation of stateflow designs for model checking verification

    NASA Technical Reports Server (NTRS)

    Pingree, Paula

    2003-01-01

    The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.

  7. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation

    PubMed Central

    Razzaq, Misbah; Ahmad, Jamil

    2015-01-01

    Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why in this study we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, fully supporting the proposed framework. PMID:26713449
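
    To illustrate the continuous-time Markov chain semantics underlying the analysis — with invented rates, and only the plain SEIR model rather than the paper's SEIDQR(S/I) extension — a Gillespie-style simulation can be sketched as follows:

        # Gillespie (CTMC) sketch of a plain SEIR compartmental model; all rates
        # are illustrative, not taken from the paper.
        import random

        def gillespie_seir(S=990, E=0, I=10, R=0, beta=0.3, sigma=0.2, gamma=0.1,
                           t_end=200.0, seed=0):
            rng = random.Random(seed)
            N, t, out = S + E + I + R, 0.0, []
            while t < t_end and (E or I):
                rates = [beta * S * I / N,   # S -> E : infection
                         sigma * E,          # E -> I : end of latency
                         gamma * I]          # I -> R : recovery
                total = sum(rates)
                if total == 0:
                    break
                t += rng.expovariate(total)  # exponential waiting time
                r = rng.random() * total     # pick the next reaction
                if r < rates[0]:
                    S, E = S - 1, E + 1
                elif r < rates[0] + rates[1]:
                    E, I = E - 1, I + 1
                else:
                    I, R = I - 1, R + 1
                out.append((t, S, E, I, R))
            return out

        print(gillespie_seir()[-1])   # final time and compartment sizes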

  8. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation.

    PubMed

    Razzaq, Misbah; Ahmad, Jamil

    2015-01-01

    Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why in this study we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, fully supporting the proposed framework.

  9. Immediate Effects of Body Checking Behaviour on Negative and Positive Emotions in Women with Eating Disorders: An Ecological Momentary Assessment Approach.

    PubMed

    Kraus, Nicole; Lindenberg, Julia; Zeeck, Almut; Kosfelder, Joachim; Vocks, Silja

    2015-09-01

    Cognitive-behavioural models of eating disorders state that body checking arises in response to negative emotions in order to reduce the aversive emotional state and is therefore negatively reinforced. This study empirically tests this assumption. For a seven-day period, women with eating disorders (n = 26) and healthy controls (n = 29) were provided with a handheld computer for assessing occurring body checking strategies as well as negative and positive emotions. Serving as control condition, randomized computer-emitted acoustic signals prompted reports on body checking and emotions. There was no difference in the intensity of negative emotions before body checking and in control situations across groups. However, from pre- to post-body checking, an increase in negative emotions was found. This effect was more pronounced in women with eating disorders compared with healthy controls. Results are contradictory to the assumptions of the cognitive-behavioural model, as body checking does not seem to reduce negative emotions. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  10. Impacts of the driver's bounded rationality on the traffic running cost under the car-following model

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai

    2016-09-01

    The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that incorporate the driver's bounded rationality. However, little effort has been made to explore the effects of the driver's bounded rationality on the trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on each driver's running cost and the system's total cost under three definitions of traffic running cost. The numerical results show that considering the driver's bounded rationality increases each driver's running cost and the system's total cost under all three definitions of traffic running cost.

  11. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…

  12. Limits on Log Cross-Product Ratios for Item Response Models. Research Report. ETS RR-06-10

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip

    2006-01-01

    Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…
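
    In the standard notation, the quantity being bounded for a pair of binary items X_j, X_k is the log odds ratio of their 2x2 cross-classification:

        \[
        \lambda_{jk} \;=\; \log
        \frac{\Pr(X_j = 1, X_k = 1)\,\Pr(X_j = 0, X_k = 0)}
             {\Pr(X_j = 1, X_k = 0)\,\Pr(X_j = 0, X_k = 1)} .
        \]

    The report derives explicit limits on this quantity under the Rasch and 2PL models; the exact form of those limits is not reproduced in this abstract.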

  13. The dopamine D2/D3 receptor agonist quinpirole increases checking-like behaviour in an operant observing response task with uncertain reinforcement: A novel possible model of OCD?

    PubMed Central

    Eagle, Dawn M.; Noschang, Cristie; d’Angelo, Laure-Sophie Camilla; Noble, Christie A.; Day, Jacob O.; Dongelmans, Marie Louise; Theobald, David E.; Mar, Adam C.; Urcelay, Gonzalo P.; Morein-Zamir, Sharon; Robbins, Trevor W.

    2014-01-01

    Excessive checking is a common, debilitating symptom of obsessive-compulsive disorder (OCD). In an established rodent model of OCD checking behaviour, quinpirole (dopamine D2/3-receptor agonist) increased checking in open-field tests, indicating dopaminergic modulation of checking-like behaviours. We designed a novel operant paradigm for rats (observing response task (ORT)) to further examine cognitive processes underpinning checking behaviour and clarify how and why checking develops. We investigated i) how quinpirole increases checking, ii) dependence of these effects on D2/3 receptor function (following treatment with D2/3 receptor antagonist sulpiride) and iii) effects of reward uncertainty. In the ORT, rats pressed an ‘observing’ lever for information about the location of an ‘active’ lever that provided food reinforcement. High- and low-checkers (defined from baseline observing) received quinpirole (0.5 mg/kg, 10 treatments) or vehicle. Parametric task manipulations assessed observing/checking under increasing task demands relating to reinforcement uncertainty (variable response requirement and active-lever location switching). Treatment with sulpiride further probed the pharmacological basis of long-term behavioural changes. Quinpirole selectively increased checking, both functional observing lever presses (OLPs) and non-functional extra OLPs (EOLPs). The increase in OLPs and EOLPs was long-lasting, without further quinpirole administration. Quinpirole did not affect the immediate ability to use information from checking. Vehicle and quinpirole-treated rats (VEH and QNP respectively) were selectively sensitive to different forms of uncertainty. Sulpiride reduced non-functional EOLPs in QNP rats but had no effect on functional OLPs. These data have implications for treatment of compulsive checking in OCD, particularly for serotonin-reuptake-inhibitor treatment-refractory cases, where supplementation with dopamine receptor antagonists may be beneficial. PMID:24406720

  14. Perpetual Model Validation

    DTIC Science & Technology

    2017-03-01

    The research considered using indirect models of software execution, for example memory access patterns, to check for security intrusions. Additional research was performed to tackle the problem of a system that, through deterioration for example, no longer corresponds to the model used during verification time. Finally, the research looked at ways to combine hybrid systems

  15. Finite temperature corrections and embedded strings in noncommutative geometry and the standard model with neutrino mixing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins, R. A.

    The recent extension of the standard model to include massive neutrinos in the framework of noncommutative geometry and the spectral action principle involves new scalar fields and their interactions with the usual complex scalar doublet. After ensuring that they bring no unphysical consequences, we address the question of how these fields affect the physics predicted in the Weinberg-Salam theory, particularly in the context of the electroweak phase transition. Applying the Dolan-Jackiw procedure, we calculate the finite temperature corrections, and find that the phase transition is first order. The new scalar interactions significantly improve the stability of the electroweak Z string, through the 'bag' phenomenon described by Vachaspati and Watkins ['Bound states can stabilize electroweak strings', Phys. Lett. B 318, 163-168 (1993)]. (Recently, cosmic strings have climbed back into interest due to new evidence.) Sourced by static embedded strings, an internal space analogy of Cartan's torsion is drawn, and a possible Higgs-force-like 'gravitational' effect of this nonpropagating torsion on the fermion masses is described. We also check that the field generating the Majorana mass for the ν_R is nonzero in the physical vacuum.

  16. Evolution of dispersal in spatially and temporally variable environments: The importance of life cycles.

    PubMed

    Massol, François; Débarre, Florence

    2015-07-01

    Spatiotemporal variability of the environment is bound to affect the evolution of dispersal, and yet model predictions strongly differ on this particular effect. Recent studies on the evolution of local adaptation have shown that the life cycle chosen to model the selective effects of spatiotemporal variability of the environment is a critical factor determining evolutionary outcomes. Here, we investigate the effect of the order of events in the life cycle on the evolution of unconditional dispersal in a spatially heterogeneous, temporally varying landscape. Our results show that the occurrence of intermediate singular strategies and disruptive selection are conditioned by the temporal autocorrelation of the environment and by the life cycle. Life cycles with dispersal of adults versus dispersal of juveniles, local versus global density regulation, give radically different evolutionary outcomes that include selection for total philopatry, evolutionary bistability, selection for intermediate stable states, and evolutionary branching points. Our results highlight the importance of accounting for life-cycle specifics when predicting the effects of the environment on evolutionarily selected trait values, such as dispersal, as well as the need to check the robustness of model conclusions against modifications of the life cycle. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  17. Adaptive Control via Neural Output Feedback for a Class of Nonlinear Discrete-Time Systems in a Nested Interconnected Form.

    PubMed

    Li, Dong-Juan; Li, Da-Peng

    2017-09-14

    In this paper, an adaptive output feedback control scheme is developed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multi-output nonaffine nonlinear systems in the nested lower triangular form. Furthermore, unknown dead-zone inputs are nonlinearly embedded into the systems. These properties make it very difficult and challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is finally obtained. Based on the transformed model, the implicit function theorem is used to establish the existence of the ideal controllers, and approximators are employed to approximate these ideal controllers. By using the mean value theorem, the nonaffine functions of the systems can be given an affine structure, although nonaffine terms still exist. Adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system can be ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked by a simulation study.

  18. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  19. Precise and Efficient Static Array Bound Checking for Large Embedded C Programs

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
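
    The numerical half of such an analysis can be caricatured with an interval domain: each index variable carries a range [lo, hi], ranges are propagated through arithmetic, and an access is flagged when its range is not contained in the array bounds. The sketch below is hypothetical and omits everything that makes CGS scale (the pointer analysis, the mutual refinement, and the distribution over a cluster):

        # Hypothetical interval-domain sketch of an array-bound check; not CGS.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: int
            hi: int
            def add(self, other: "Interval") -> "Interval":
                # interval arithmetic for addition of two ranges
                return Interval(self.lo + other.lo, self.hi + other.hi)

        def check_access(idx: Interval, length: int) -> str:
            if 0 <= idx.lo and idx.hi < length:
                return "safe"
            if idx.hi < 0 or idx.lo >= length:
                return "definite out-of-bounds"
            return "possible out-of-bounds"    # imprecision: report for review

        # i in [0, 9], offset in [0, 1]  =>  a[i + offset] with len(a) == 10
        i, offset = Interval(0, 9), Interval(0, 1)
        print(check_access(i.add(offset), length=10))   # possible out-of-bounds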

  20. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with Turbo codes or low-density parity check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the RA code with puncturing.
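
    The baseline for such analyses is the union bound over the code's weight distribution A_d; the paper uses substantially tighter bounds, but they refine this same starting point. For a rate-R binary code with BPSK signaling on the AWGN channel it reads:

        \[
        P_e \;\le\; \sum_{d = d_{\min}}^{n} A_d \,
        Q\!\left(\sqrt{\,2 d R\, E_b / N_0\,}\right),
        \]

    where Q is the Gaussian tail function and A_d counts codewords of Hamming weight d.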

  1. Development Context Driven Change Awareness and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Sarma, Anita; Branchaud, Josh; Dwyer, Matthew B.; Person, Suzette; Rungta, Neha

    2014-01-01

    Recent work on workspace monitoring allows conflict prediction early in the development process; however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. There are various client applications, such as task prioritization, early conflict detection, and providing advice on testing, that can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.

  2. Development Context Driven Change Awareness and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Sarma, Anita; Branchaud, Josh; Dwyer, Matthew B.; Person, Suzette; Rungta, Neha; Wang, Yurong; Elbaum, Sebastian

    2014-01-01

    Recent work on workspace monitoring allows conflict prediction early in the development process; however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. There are various client applications, such as task prioritization, early conflict detection, and providing advice on testing, that can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.

  3. Bounds on low scale gravity from RICE data and cosmogenic neutrino flux models

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid; McKay, Douglas W.

    2006-03-01

    We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M, the fundamental scale of gravity, depends upon the cosmogenic flux model, the black hole formation and decay treatments, the inclusion of graviton-mediated elastic neutrino processes, and the number of large extra dimensions, d. Assuming proton-based cosmogenic flux models that cover a broad range of flux possibilities, we find bounds in the interval 0.9 TeV

  4. Parallel Implementation of Numerical Solution of Few-Body Problem Using Feynman's Continual Integrals

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Samarin, Viacheslav

    2018-02-01

    A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method implemented in the C++ programming language using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with the exactly solvable 4-body oscillatory system and with experimental data.

  5. An error bound for a discrete reduced order model of a linear multivariable system

    NASA Technical Reports Server (NTRS)

    Al-Saggaf, Ubaid M.; Franklin, Gene F.

    1987-01-01

    The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for a design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note, an L-infinity bound is derived for the model error of a method of order reduction of discrete linear multivariable systems based on balancing.
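
    For reference, the continuous-time balanced-truncation bound alluded to takes the familiar twice-the-tail form in the Hankel singular values σ_i; the note establishes an L-infinity bound of this type for the discrete-time case:

        \[
        \| G - G_r \|_{\infty} \;\le\; 2 \sum_{i = r+1}^{n} \sigma_i ,
        \]

    where G_r is the reduced model of order r and σ_{r+1}, ..., σ_n are the neglected Hankel singular values.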

  6. Black hole meiosis

    NASA Astrophysics Data System (ADS)

    van Herck, Walter; Wyder, Thomas

    2010-04-01

    The enumeration of BPS bound states in string theory needs refinement. Studying partition functions of particles made from D-branes wrapped on algebraic Calabi-Yau 3-folds, and classifying states using split attractor flow trees, we extend the method for computing a refined BPS index, [1]. For certain D-particles, a finite number of microstates, namely polar states, exclusively realized as bound states, determine an entire partition function (elliptic genus). This underlines their crucial importance: one might call them the ‘chromosomes’ of a D-particle or a black hole. As polar states also can be affected by our refinement, previous predictions on elliptic genera are modified. This can be metaphorically interpreted as ‘crossing-over in the meiosis of a D-particle’. Our results improve on [2], provide non-trivial evidence for a strong split attractor flow tree conjecture, and thus suggest that we indeed exhaust the BPS spectrum. In the D-brane description of a bound state, the necessity for refinement results from the fact that tachyonic strings split up constituent states into ‘generic’ and ‘special’ states. These are enumerated separately by topological invariants, which turn out to be partitions of Donaldson-Thomas invariants. As modular predictions provide a check on many of our results, we have compelling evidence that our computations are correct.

  7. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  8. Model analysis of check dam impacts on long-term sediment and water budgets in southeast Arizona, USA

    USGS Publications Warehouse

    Norman, Laura M.; Niraula, Rewati

    2016-01-01

    The objective of this study was to evaluate the effect of check dam infrastructure on soil and water conservation at the catchment scale using the Soil and Water Assessment Tool (SWAT). This paired watershed study includes a watershed treated with over 2000 check dams and a Control watershed, which has none, in the West Turkey Creek watershed, Southeast Arizona, USA. SWAT was calibrated for streamflow using discharge documented during the summer of 2013 at the Control site. The model results show the necessity of eliminating lateral flow from SWAT models of aridland environments, the urgency of standardizing geospatial soils data, and the care with which modelers must document altered parameters when presenting findings. Performance was assessed using the percent bias (PBIAS), with values of ±2.34%. The calibrated model was then used to examine the impacts of check dams at the Treated watershed. Approximately 630 tons of sediment is estimated to be stored behind check dams in the Treated watershed over the 3-year simulation, improving water quality for fish habitat. A minimum precipitation event of 15 mm was necessary to instigate the detachment of soil, sediments, or rock from the study area, which occurred 2% of the time. The resulting watershed model is useful as a predictive framework and decision-support tool to consider long-term impacts of restoration and the potential for future restoration.
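
    The quoted performance figure is percent bias, which in its usual formulation (sign conventions vary across the hydrology literature) is:

        \[
        \mathrm{PBIAS} \;=\; 100 \times
        \frac{\sum_{i=1}^{n} (O_i - S_i)}{\sum_{i=1}^{n} O_i},
        \]

    with O_i the observed and S_i the simulated streamflow values; values near zero indicate little systematic over- or under-prediction.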

  9. Kinematics and dynamics of the MKW/AWM poor clusters

    NASA Technical Reports Server (NTRS)

    Beers, Timothy C.; Kriessler, Jeffrey R.; Bird, Christina M.; Huchra, John P.

    1995-01-01

    We report 472 new redshifts for 416 galaxies in the regions of the 23 poor clusters of galaxies originally identified by Morgan, Kayser, and White (MKW), and Albert, White, and Morgan (AWM). Eighteen of the poor clusters now have 10 or more available redshifts within 1.5/h Mpc of the central galaxy; 11 clusters have at least 20 available redshifts. Based on the 21 clusters for which we have sufficient velocity information, the median velocity scale is 336 km/s, a factor of 2 smaller than found for rich clusters. Several of the poor clusters exhibit complex velocity distributions due to the presence of nearby clumps of galaxies. We check the velocity of the dominant galaxy in each poor cluster relative to the remaining cluster members. Significantly high relative velocities of the dominant galaxy are found in only 4 of 21 poor clusters, 3 of which we suspect are due to contamination of the parent velocity distribution. Several statistical tests indicate that the D/cD galaxies are at the kinematic centers of the parent poor cluster velocity distributions. Mass-to-light ratios for 13 of the 15 poor clusters for which we have the required data are in the range 50 ≤ M/L_B(0) ≤ 200 solar mass/solar luminosity. The complex nature of the regions surrounding many of the poor clusters suggests that these groupings may represent an early epoch of cluster formation. For example, the poor clusters MKW7 and MKW8 are shown to be gravitationally bound and likely to merge to form a richer cluster within the next several Gyr. Eight of the nine other poor clusters for which simple two-body dynamical models can be carried out are consistent with being bound to other clumps in their vicinity. Additional complex systems with more than two gravitationally bound clumps are observed among the poor clusters.

  10. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time $O(3^{k^2} \cdot (h+k-1)! \cdot m)$, decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [JCTB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time $O(2^{(2k+1) \log k} \cdot h^{2k} \cdot 2^{2h^2} \cdot m)$. Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time $2^{O(k)} \cdot h^{2k} \cdot 2^{O(h)} \cdot n$, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.

  11. New Results in Software Model Checking and Analysis

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2010-01-01

    This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.

  12. Outward Bound Outcome Model Validation and Multilevel Modeling

    ERIC Educational Resources Information Center

    Luo, Yuan-Chun

    2011-01-01

    This study was intended to measure construct validity for the Outward Bound Outcomes Instrument (OBOI) and to predict outcome achievement from individual characteristics and course attributes using multilevel modeling. A sample of 2,340 participants was collected by Outward Bound USA between May and September 2009 using the OBOI. Two phases of…

  13. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  14. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
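
    The core idea — location queries evaluated as model checking over a tree-structured symbolic space model — can be sketched with an invented "somewhere below" modality; the paper's hybrid-logic query language is considerably richer:

        # Toy spatial model checking over a tree-structured location model.
        # The Node structure and the modality below are invented for illustration.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            name: str
            children: List["Node"] = field(default_factory=list)

        def somewhere_below(node: Node, prop) -> bool:
            """EF-like spatial modality: prop holds at node or at some descendant."""
            return prop(node) or any(somewhere_below(c, prop) for c in node.children)

        # floor1 contains room101 and room102; a tag is located in room102
        space = Node("floor1", [Node("room101"), Node("room102", [Node("tag42")])])
        print(somewhere_below(space, lambda n: n.name == "tag42"))   # True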

  15. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ... model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite

  16. RADIATIVE AND MOMENTUM-BASED MECHANICAL ACTIVE GALACTIC NUCLEUS FEEDBACK IN A THREE-DIMENSIONAL GALAXY EVOLUTION CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Ena; Ostriker, Jeremiah P.; Naab, Thorsten

    2012-08-01

    We study the growth of black holes (BHs) in galaxies using three-dimensional smoothed particle hydrodynamic simulations with new implementations of momentum mechanical feedback, and restriction of accreted elements to those that are gravitationally bound to the BH. We also include the feedback from the X-ray radiation emitted by the BH, which heats the surrounding gas in the host galaxies and adds radial momentum to the fluid. We perform simulations of isolated galaxies and merging galaxies and test various feedback models with the new treatment of the Bondi radius criterion. We find that overall the BH growth is similar to what has been obtained by earlier works using the Springel, Di Matteo, and Hernquist algorithms. However, the outflowing wind velocities and mechanical energy emitted by winds are considerably higher (v_w ≈ 1000-3000 km s^-1) compared to the standard thermal feedback model (v_w ≈ 50-100 km s^-1). While the thermal feedback model emits only 0.1% of BH released energy in winds, the momentum feedback model emits more than 30% of the total energy released by the BH in winds. In the momentum feedback model, the degree of fluctuation in both radiant and wind output is considerably larger than in standard treatments. We check that the new model of BH mass accretion agrees with analytic results for the standard Bondi problem.

  17. The Automation of Nowcast Model Assessment Processes

    DTIC Science & Technology

    2016-09-01

    that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data

  18. A combined ligand-based and target-based drug design approach for G-protein coupled receptors: application to salvinorin A, a selective kappa opioid receptor agonist

    NASA Astrophysics Data System (ADS)

    Singh, Nidhi; Chevé, Gwénaël; Ferguson, David M.; McCurdy, Christopher R.

    2006-08-01

    Combined ligand-based and target-based drug design approaches provide a synergistic advantage over either method individually. Therefore, we set out to develop a powerful virtual screening model to identify novel molecular scaffolds as potential leads for the human KOP (hKOP) receptor employing a combined approach. Utilizing a set of recently reported derivatives of salvinorin A, a structurally unique KOP receptor agonist, a pharmacophore model was developed that consisted of two hydrogen bond acceptor and three hydrophobic features. The model was cross-validated by randomizing the data using the CatScramble technique. Further validation was carried out using a test set, on which the model performed well in classifying active and inactive molecules correctly. Simultaneously, a bovine rhodopsin-based "agonist-bound" hKOP receptor model was also generated. The model provided more accurate information about the putative binding site of salvinorin A-based ligands. Several protein structure-checking programs were used to validate the model. In addition, this model was in agreement with the mutation experiments carried out on the KOP receptor. The predictive ability of the model was evaluated by docking a set of known KOP receptor agonists into the active site of this model. The docked scores correlated reasonably well with experimental pKi values. It is hypothesized that the integration of these two independently generated models would enable swift and reliable identification of new lead compounds, reducing the time and cost of hit finding within the drug discovery and development process, particularly in the case of GPCRs.

  19. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ^(2)-interaction based quantum computation in multimode Fock bases: the χ^(2) parity-check code, the χ^(2) embedded error-correcting code, and the χ^(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ^(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ^(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ^(2) parity-check code and the χ^(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ^(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
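
    The parity measurements underlying these codes admit a simple classical illustration: losing a single photon changes the photon number n by one and therefore flips the sign of the parity expectation <(-1)^n>, which is what makes photon-loss errors detectable. The sketch below is a hedged numerical illustration of that one fact; the states are arbitrary Fock-basis amplitude vectors, not a construction from the paper.

      import numpy as np

      def photon_parity(amps):
          # Expectation of the photon-number parity operator P = (-1)^n
          # for a pure state with Fock-basis amplitudes `amps`.
          probs = np.abs(np.asarray(amps, dtype=complex)) ** 2
          return float(np.sum((-1.0) ** np.arange(len(amps)) * probs))

      print(photon_parity([0.0, 0.0, 1.0]))   # |2>: parity +1
      print(photon_parity([0.0, 1.0, 0.0]))   # |1>, i.e. |2> after one loss: parity -1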

  20. The method of a joint intraday security check system based on cloud computing

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation only uses the dispatch center's local model and data as the functional margin. This paper introduces the design of an all-grid intraday joint security check system based on cloud computing and its implementation. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment with the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.

  1. 76 FR 35344 - Airworthiness Directives; Costruzioni Aeronautiche Tecnam srl Model P2006T Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-17

    ... retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on the nose landing... specified products. The MCAI states: During Landing Gear retraction/extension ground checks performed on the... airworthiness information (MCAI) states: During Landing Gear retraction/extension ground checks performed on the...

  2. Adsorption and Reduction of Hexavalent Chromium on the Surface of Vivianite at Acidic Environment

    NASA Astrophysics Data System (ADS)

    HA, S.; Hyun, S. P.; Lee, W.

    2016-12-01

    Due to the rapid increase of chemical use in industrial activities, acid spills have frequently occurred in Korea. An acid spill causes soil and water acidification and additional problems such as heavy metal leaching from the soil. Hexavalent chromium (Cr(VI)) is relatively mobile in the environment, and it is toxic and mutagenic. Monoclinic octa-hydrated ferrous phosphate, vivianite, is one of the most commonly found iron-bearing soil minerals in phosphorus-enriched reducing environments. We investigated reductive sorption of Cr(VI) on vivianite surfaces using batch experimental tests under diverse groundwater conditions. Cr(VI) (5 mg/L) was added to a 6.5 g/L vivianite suspension buffered at pH 5, 7, and 9, using 0.05 M HEPES or Tris buffer solution, to check the effect of pH on the reductive sorption of Cr(VI) on the vivianite surface. Aqueous Cr(VI) removal was fastest at pH 5, followed by pH 7 and pH 9. The effect of ionic strength on the removal kinetics of Cr(VI) was negligible. Cr(VI) could subsequently be removed via sorption and reduction on the vivianite surface, where the reactive chemical species could be aqueous Fe(II), iron oxides, and metavivianite. An adsorption test was conducted using the same amount of Cr(III) to check the selectivity of chromium species on the vivianite surface for reductive adsorption. Cr extraction tests showed that the amount of strongly bound Cr on vivianite is similar for Cr(III) and Cr(VI) injection, but the amount of weakly bound Cr is larger for Cr(VI) injection. Reaction mechanisms for the sorption and reductive transformation of Cr(VI) to Cr(III) species at reactive sites of the vivianite surface are discussed based on surface complexation modeling and Fe K-edge X-ray absorption near edge structure (XANES) results. After vivianite reacted with Cr(VI), the two smooth absorption-edge peaks changed to a single sharp peak. The pre-edge, which contains 1s-3d transition information, tends to show a higher peak as reaction time increases and pH decreases. This indicates that Fe(II) is oxidized to Fe(III) at the vivianite surface, and that this process is most pronounced at pH 5 and longer elapsed times.

  3. Sediment trapping efficiency of adjustable check dam in laboratory and field experiment

    NASA Astrophysics Data System (ADS)

    Wang, Chiang; Chen, Su-Chin; Lu, Sheng-Jui

    2014-05-01

    Check dams are constructed in mountain areas to block debris flows, but they fill up after several events and lose their trapping function. For this reason, the main facility in our research is an adjustable steel slit check dam, which has the advantages of fast construction and being easy to remove or adjust: the transverse beams can be removed to drain sediments off and keep the channel continuous. We constructed an adjustable steel slit check dam on the Landow torrent, Huisun Experimental Forest Station, as the prototype to compare with a model in the laboratory. In the laboratory experiments, Froude number similarity was used to design the dam model. The main comparisons focused on the types of sediment trapping and removal, sediment discharge, and the trapping rate of the slit check dam. Different ways of removing the transverse beams produced different kinds of sediment removal and differences in the rate of sediment removal and the particle size distribution. The sediment discharge of the check dam with beams is about 40%~80% of that of the check dam without beams. Furthermore, the spacing of the beams is a considerable factor in the sediment discharge. In the field experiment, this research used time-lapse photography to record the adjustable steel slit check dam on the Landow torrent. Typhoon Soulik brought rainfall of 600 mm in eight hours and induced a debris flow in the Landow torrent. The time-lapse images demonstrated that after several sediment transport events the adjustable steel slit check dam was buried by debris flow. The results of the laboratory and field experiments are: (1) the adjustable check dam could trap boulders, stop woody debris flows, and flush out fine sediment to supply the needs of the downstream river; (2) the efficiency of sediment trapping in the adjustable check dam with transverse beams was significantly improved; (3) the check dam without transverse beams can remove the sediment and maintain ecosystem continuity.

  4. The dopamine D2/D3 receptor agonist quinpirole increases checking-like behaviour in an operant observing response task with uncertain reinforcement: a novel possible model of OCD.

    PubMed

    Eagle, Dawn M; Noschang, Cristie; d'Angelo, Laure-Sophie Camilla; Noble, Christie A; Day, Jacob O; Dongelmans, Marie Louise; Theobald, David E; Mar, Adam C; Urcelay, Gonzalo P; Morein-Zamir, Sharon; Robbins, Trevor W

    2014-05-01

    Excessive checking is a common, debilitating symptom of obsessive-compulsive disorder (OCD). In an established rodent model of OCD checking behaviour, quinpirole (a dopamine D2/3-receptor agonist) increased checking in open-field tests, indicating dopaminergic modulation of checking-like behaviours. We designed a novel operant paradigm for rats, the observing response task (ORT), to further examine the cognitive processes underpinning checking behaviour and clarify how and why checking develops. We investigated i) how quinpirole increases checking, ii) the dependence of these effects on D2/3 receptor function (following treatment with the D2/3 receptor antagonist sulpiride) and iii) the effects of reward uncertainty. In the ORT, rats pressed an 'observing' lever for information about the location of an 'active' lever that provided food reinforcement. High- and low-checkers (defined from baseline observing) received quinpirole (0.5 mg/kg, 10 treatments) or vehicle. Parametric task manipulations assessed observing/checking under increasing task demands relating to reinforcement uncertainty (variable response requirement and active-lever location switching). Treatment with sulpiride further probed the pharmacological basis of the long-term behavioural changes. Quinpirole selectively increased checking, both functional observing lever presses (OLPs) and non-functional extra OLPs (EOLPs). The increase in OLPs and EOLPs was long-lasting, without further quinpirole administration. Quinpirole did not affect the immediate ability to use information from checking. Vehicle- and quinpirole-treated rats (VEH and QNP, respectively) were selectively sensitive to different forms of uncertainty. Sulpiride reduced non-functional EOLPs in QNP rats but had no effect on functional OLPs. These data have implications for the treatment of compulsive checking in OCD, particularly for serotonin-reuptake-inhibitor treatment-refractory cases, where supplementation with dopamine receptor antagonists may be beneficial. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Lieb-Robinson bounds for spin-boson lattice models and trapped ions.

    PubMed

    Jünemann, J; Cadarso, A; Pérez-García, D; Bermudez, A; García-Ripoll, J J

    2013-12-06

    We derive a Lieb-Robinson bound for the propagation of spin correlations in a model of spins interacting through a bosonic lattice field, which satisfies a Lieb-Robinson bound in the absence of spin-boson couplings. We apply these bounds to a system of trapped ions and find that the propagation of spin correlations, as mediated by the phonons of the ion crystal, can be faster than the regimes currently explored in experiments. We propose a scheme to test the bounds by measuring retarded correlation functions via the crystal fluorescence.
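
    For orientation, the schematic textbook form of such a bound (not the specific constants derived in this paper) is \|[A_X(t), B_Y]\| \le C \|A\| \|B\| e^{-(d(X,Y) - v|t|)/\xi}: the commutator of observables supported on regions X and Y a distance d(X,Y) apart is exponentially small outside an effective light cone expanding at the Lieb-Robinson velocity v.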

  6. Determination of total sulfur in fertilizers by high temperature combustion: single-laboratory validation.

    PubMed

    Bernius, Jean; Kraus, Sabine; Hughes, Sandra; Margraf, Dominik; Bartos, James; Newlon, Natalie; Sieper, Hans-Peter

    2014-01-01

    A single-laboratory validation study was conducted for the determination of total sulfur (S) in a variety of common, inorganic fertilizers by combustion. The procedure involves conversion of S species into SO2 through combustion at 1150°C, absorption then desorption from a purge and trap column, followed by measurement by a thermal conductivity detector. Eleven different validation materials were selected for study, which included four commercial fertilizer products, five fertilizers from the Magruder Check Sample Program, one reagent grade product, and one certified organic reference material. S content ranged between 1.47 and 91% as sulfate, thiosulfate, and elemental and organically bound S. Determinations of check samples were performed on 3 different days with four replicates/day. Determinations for non-Magruder samples were performed on 2 different days. Recoveries ranged from 94.3 to 125.9%. Absolute SDs among runs ranged from 0.038 to 0.487%. Based on the accuracy and precision demonstrated here, it is recommended that this method be collaboratively studied for the determination of total S in fertilizers.

  7. Finite state projection based bounds to compare chemical master equation models using single-cell data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
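
    The finite state projection mechanism these bounds are built on can be sketched in a few lines: truncate the master equation to a finite set of states, propagate the truncated system, and read off the probability mass that leaks out of the projection as a certified error bound. The Python sketch below does this for a simple birth-death process (rates and truncation size are illustrative; this shows the underlying FSP idea, not the paper's likelihood-bound construction).

      import numpy as np
      from scipy.linalg import expm

      def fsp_birth_death(N, kb, kd, t):
          # Truncated CME generator for states 0..N-1 (columns = source states).
          A = np.zeros((N, N))
          for i in range(N):
              A[i, i] -= kb + kd * i        # total outflow from state i
              if i + 1 < N:
                  A[i + 1, i] += kb         # birth i -> i+1, kept inside the projection
              if i > 0:
                  A[i - 1, i] += kd * i     # death i -> i-1
          p0 = np.zeros(N)
          p0[0] = 1.0
          p = expm(A * t) @ p0
          return p, 1.0 - p.sum()           # leaked mass certifies the truncation error

      p, eps = fsp_birth_death(N=40, kb=10.0, kd=1.0, t=2.0)
      print(eps)   # tiny: the 40-state projection captures nearly all the probability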

  8. An experimental method to verify soil conservation by check dams on the Loess Plateau, China.

    PubMed

    Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q

    2009-12-01

    A successful experiment with a physical model requires that the necessary similarity conditions be met. This study presents an experimental method using a semi-scale physical model, which is used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, the experimental data can be used to verify soil erosion processes in the field and to predict soil loss in a model watershed with check dams. The study proposes four criteria: similarity of watershed geometry, of grain size and bare land, of the Froude number (Fr) for rainfall events, and of soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but are the same size; they simulate hydraulic processes in the B-Model. The experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Semi-scale physical model experiments are therefore suitable for verifying and predicting soil loss in a small watershed with a check dam system on the Loess Plateau, China.

  9. 75 FR 27406 - Airworthiness Directives; Bombardier, Inc. Model BD-100-1A10 (Challenger 300) Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... BD- 100 Time Limits/Maintenance Checks. The actions described in this service information are... Challenger 300 BD-100 Time Limits/Maintenance Checks. (1) For the new tasks identified in Bombardier TR 5-2... Requirements,'' in Part 2 of Chapter 5 of Bombardier Challenger 300 BD-100 Time Limits/ Maintenance Checks...

  10. 75 FR 66655 - Airworthiness Directives; PILATUS Aircraft Ltd. Model PC-7 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... December 3, 2010 (the effective date of this AD), check the airplane maintenance records to determine if... of the airplane. Do this check following paragraph 3.A. of Pilatus Aircraft Ltd. PC-7 Service... maintenance records check required in paragraph (f)(1) of this AD or it is unclear whether or not the left and...

  11. 77 FR 20520 - Airworthiness Directives; Bombardier, Inc. Model BD-100-1A10 (Challenger 300) Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-05

    ... Bombardier Challenger 300 BD-100 Time Limits/Maintenance Checks Manual. For this task, the initial compliance..., of Part 2, of the Bombardier Challenger 300 BD-100 Time Limits/Maintenance Checks Manual, the general.../Maintenance Checks Manual, provided that the relevant information in the general revision is identical to that...

  12. Extending the Diffuse Layer Model of Surface Acidity Behavior: III. Estimating Bound Site Activity Coefficients

    EPA Science Inventory

    Although detailed thermodynamic analyses of the 2-pK diffuse layer surface complexation model generally specify bound site activity coefficients for the purpose of accounting for those non-ideal excess free energies contributing to bound site electrochemical potentials, in applic...

  13. The screening Horndeski cosmologies

    NASA Astrophysics Data System (ADS)

    Starobinsky, Alexei A.; Sushkov, Sergey V.; Volkov, Mikhail S.

    2016-06-01

    We present a systematic analysis of homogeneous and isotropic cosmologies in a particular Horndeski model with Galileon shift symmetry, containing also a Λ-term and matter. The model, sometimes called Fab Five, admits a rich spectrum of solutions. Some of them describe the standard late-time cosmological dynamics dominated by the Λ-term and matter, while at early times the universe expands with a constant Hubble rate determined by the value of the scalar kinetic coupling. For other solutions the Λ-term and matter are screened at all times, but there are nevertheless early and late accelerating phases. The model also admits bounces, as well as peculiar solutions describing ``the emergence of time''. Most of these solutions contain ghosts in the scalar and tensor sectors. However, a careful analysis reveals three different branches of ghost-free solutions, all showing a late-time acceleration phase. We analyse the dynamical stability of these solutions and find that all of them are stable in the future, since all their perturbations stay bounded at late times. However, they all turn out to be unstable in the past, as their perturbations grow violently when one approaches the initial spacetime singularity. We therefore conclude that the model has no viable solutions describing the whole of the cosmological history, although it may describe the current acceleration phase. We also check that the flat-space solution is ghost-free in the model, but it may acquire ghosts in more general versions of the Horndeski theory.

  14. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230

  15. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.
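
    The nested optimization both records describe can be made concrete with a toy version of the inner problem: for a fixed design, the T-criterion value is the residual sum of squares that remains after the rival model has been fitted as well as possible to the "true" one. The sketch below invents two simple models, a three-point design, and equal weights purely for illustration; the outer maximization over designs and the SIP exchange of lower and upper bounds are omitted.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Fixed "true" model eta0(x) = x + x^2; rival model eta1(x; a) = a*x.
      def t_criterion(xs, ws):
          # Inner problem: best fit of the rival model on the design;
          # the outer problem would maximize this value over (xs, ws).
          rss = lambda a: float(np.sum(ws * ((xs + xs**2) - a * xs) ** 2))
          return minimize_scalar(rss).fun

      xs = np.array([0.1, 0.5, 1.0])
      ws = np.array([1 / 3, 1 / 3, 1 / 3])
      print(t_criterion(xs, ws))   # > 0: this design can discriminate the two models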

  16. "I share, therefore I am": personality traits, life satisfaction, and Facebook check-ins.

    PubMed

    Wang, Shaojung Sharon

    2013-12-01

    This study explored whether agreeableness, extraversion, and openness function to influence self-disclosure behavior, which in turn impacts the intensity of checking in on Facebook. A complete path from extraversion to Facebook check-in through self-disclosure and sharing was found. The indirect effect from sharing to check-in intensity through life satisfaction was particularly salient. The central component of check-in is for users to disclose a specific location selectively that has implications on demonstrating their social lives, lifestyles, and tastes, enabling a selective and optimized self-image. Implications on the hyperpersonal model and warranting principle are discussed.

  17. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.
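
    In the simplest fully Markovian special case, the death-state probability that SURE bounds has an exact matrix-exponential solution, which is the kind of reference value such a validation compares against. The three-state model and rates below are invented for illustration; SURE itself handles semi-Markov models whose recovery times need not be exponentially distributed.

      import numpy as np
      from scipy.linalg import expm

      # States: 0 = fault-free, 1 = recovering from a fault, 2 = dead (absorbing).
      lam, delta, rho = 1e-4, 1e3, 10.0     # fault, recovery, death rates (illustrative)
      Q = np.array([[-lam,  delta,          0.0],   # dp/dt = Q p, columns = source states
                    [ lam, -(delta + rho),  0.0],
                    [ 0.0,  rho,            0.0]])

      p0 = np.array([1.0, 0.0, 0.0])
      T = 10.0                              # mission time
      print((expm(Q * T) @ p0)[2])          # exact death-state probability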

  18. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
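
    The matrix measure used in the estimate is straightforward to compute for the 2-norm, and its role comes from Coppel's inequality: solutions of dx/dt = A x satisfy ||x(t)|| <= e^{mu(A) t} ||x(0)||, so mu bounds local growth in a way eigenvalues alone do not. A minimal sketch (the example matrix is arbitrary; the paper's full delay-margin computation is not reproduced):

      import numpy as np

      def mu2(A):
          # Matrix measure (logarithmic norm) induced by the 2-norm:
          # mu_2(A) = largest eigenvalue of the symmetric part (A + A^T)/2.
          return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

      A = np.array([[-1.0, 5.0],
                    [ 0.0, -3.0]])
      print(mu2(A))   # ~0.69: positive although A is Hurwitz (eigenvalues -1, -3)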

  19. Classical verification of quantum circuits containing few basis changes

    NASA Astrophysics Data System (ADS)

    Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.

    2018-04-01

    We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.

  20. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
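
    The three-valued semantics that makes such abstractions sound can be sketched with Kleene's strong connectives, where "unknown" survives exactly when the definite values cannot decide the outcome. This is a generic sketch of three-valued evaluation in Python, not the authors' game construction.

      T, F, U = "T", "F", "U"       # true, false, unknown

      def k_not(a):
          return {T: F, F: T, U: U}[a]

      def k_and(a, b):
          if F in (a, b):
              return F              # one definite false decides the conjunction
          if (a, b) == (T, T):
              return T
          return U                  # otherwise the abstraction cannot tell

      def k_or(a, b):
          return k_not(k_and(k_not(a), k_not(b)))

      print(k_and(T, U), k_or(F, U))   # U U: both results call for refinement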

  1. Efficient Translation of LTL Formulae into Buchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.
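
    A standard preprocessing step in tableau-based translators of this kind is rewriting the input formula into negation normal form, using the dualities not X f = X not f and not(f U g) = (not f) R (not g). The sketch below uses nested tuples as an invented formula representation; JPF's actual translator is considerably more elaborate.

      def nnf(f):
          # Push negations down to atomic propositions ("ap").
          op = f[0]
          if op == "ap":
              return f
          if op == "X":
              return ("X", nnf(f[1]))
          if op in ("and", "or", "U", "R"):
              return (op, nnf(f[1]), nnf(f[2]))
          # op == "not": apply the dualities.
          g = f[1]
          if g[0] == "ap":
              return f                      # a literal is already in NNF
          if g[0] == "not":
              return nnf(g[1])              # double negation
          if g[0] == "X":
              return ("X", nnf(("not", g[1])))
          dual = {"and": "or", "or": "and", "U": "R", "R": "U"}
          return (dual[g[0]], nnf(("not", g[1])), nnf(("not", g[2])))

      print(nnf(("not", ("U", ("ap", "p"), ("ap", "q")))))
      # ('R', ('not', ('ap', 'p')), ('not', ('ap', 'q')))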

  2. Innovative Adolescent Chemical Dependency Treatment and Its Outcome: A Model Based on Outward Bound Programming.

    ERIC Educational Resources Information Center

    McPeake, John D.; And Others

    1991-01-01

    Describes adolescent chemical dependency treatment model developed at Beech Hill Hospital (New Hampshire) which integrated Twelve Step-oriented alcohol and drug rehabilitation program with experiential education school, Hurricane Island Outward Bound School. Describes Beech Hill Hurricane Island Outward Bound School Adolescent Chemical Dependency…

  3. Development of the functional simulator for the Galileo attitude and articulation control system

    NASA Technical Reports Server (NTRS)

    Namiri, M. K.

    1983-01-01

    A simulation program for verifying and checking the performance of the Galileo spacecraft's Attitude and Articulation Control Subsystem (AACS) flight software is discussed. The program, called the Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models, coded in FORTRAN, that describe spacecraft dynamics, sensors, and actuators with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. The input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable-stepsize numerical integration algorithm, complete with individual error-bound control on the state variables, to solve the equations of motion. The program is designed to provide both line-printer and matrix dot plotting of the variables requested in the run section, and to provide error diagnostics.

  4. Search for a Radio Pulsar in the Remnant of Supernova 1987A

    NASA Astrophysics Data System (ADS)

    Zhang, S.-B.; Dai, S.; Hobbs, G.; Staveley-Smith, L.; Manchester, R. N.; Russell, C. J.; Zanardo, G.; Wu, X.-F.

    2018-06-01

    We have observed the remnant of supernova SN 1987A (SNR 1987A), located in the Large Magellanic Cloud (LMC), to search for periodic and/or transient radio emission with the Parkes 64 m-diameter radio telescope. We found no evidence of a radio pulsar in our periodicity search and derived 8σ upper bounds on the flux density of any such source of 31 μJy at 1.4 GHz and 21 μJy at 3 GHz. Four candidate transient events were detected with greater than 7σ significance, with dispersion measures (DMs) in the range 150 to 840 cm-3 pc. For two of them, we found a second pulse at slightly lower significance. However, we cannot at present conclude that any of these are associated with a pulsar in SNR 1987A. As a check on the system, we also observed PSR B0540-69, a young pulsar which also lies in the LMC. We found eight giant pulses at the DM of this pulsar. We discuss the implications of these results for models of the supernova remnant, neutron star formation and pulsar evolution.

  5. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
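
    The core of the method, running concrete transitions while matching states only up to their abstract versions, fits in a few lines. The sketch below assumes a user-supplied successor function and predicate set, both invented for illustration; the refinement loop that adds predicates when the theorem-prover check detects a loss of precision is omitted.

      from collections import deque

      def explore(init, successors, predicates):
          # Explicit-state search over concrete states; a state counts as
          # visited if its predicate valuation has been seen before.
          alpha = lambda s: tuple(p(s) for p in predicates)
          seen = {alpha(init)}
          frontier = deque([init])
          while frontier:
              s = frontier.popleft()
              yield s                        # e.g. check a safety property here
              for t in successors(s):
                  if alpha(t) not in seen:
                      seen.add(alpha(t))
                      frontier.append(t)

      # Toy system: a counter modulo 100, abstracted by two predicates.
      succ = lambda x: [(x + 1) % 100]
      preds = [lambda x: x < 50, lambda x: x % 2 == 0]
      print(len(list(explore(0, succ, preds))))   # 2 of 100 concrete states explored:
                                                  # an under-approximation, as described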

  6. LISA pathfinder appreciably constrains collapse models

    NASA Astrophysics Data System (ADS)

    Helou, Bassam; Slagmolen, B. J. J.; McClelland, David E.; Chen, Yanbei

    2017-04-01

    Spontaneous collapse models are phenomenological theories formulated to address major difficulties in macroscopic quantum mechanics. We place significant bounds on the parameters of the leading collapse models, the continuous spontaneous localization (CSL) model, and the Diosi-Penrose (DP) model, by using LISA Pathfinder's measurement, at a record accuracy, of the relative acceleration noise between two free-falling macroscopic test masses. In particular, we bound the CSL collapse rate to be at most (2.96 ±0.12 ) ×10-8 s-1 . This competitive bound explores a new frequency regime, 0.7 to 20 mHz, and overlaps with the lower bound 10-8 ±2 s-1 proposed by Adler in order for the CSL collapse noise to be substantial enough to explain the phenomenology of quantum measurement. Moreover, we bound the regularization cutoff scale used in the DP model to prevent divergences to be at least 40.1 ±0.5 fm , which is larger than the size of any nucleus. Thus, we rule out the DP model if the cutoff is the size of a fundamental particle.

  7. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
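
    The flavour of "provable error bounds" can be conveyed with ordinary interval arithmetic, in which every operation returns an interval guaranteed to enclose the true value. This toy Python analogue only illustrates guaranteed enclosure; iGen's symbolic analysis of model source code is far more sophisticated.

      class Interval:
          # A closed interval [lo, hi]; every operation preserves enclosure.
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi

          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def __mul__(self, other):
              ps = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
              return Interval(min(ps), max(ps))

          def __repr__(self):
              return f"[{self.lo}, {self.hi}]"

      x = Interval(0.9, 1.1)            # an input known only to within 10%
      print(x * x + Interval(2, 2))     # [2.81, 3.21]: guaranteed to contain x*x + 2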

  8. Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles

    NASA Technical Reports Server (NTRS)

    Gamble, Ed

    2012-01-01

    Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.

  9. Logic Model Checking of Unintended Acceleration Claims in the 2005 Toyota Camry Electronic Throttle Control System

    NASA Technical Reports Server (NTRS)

    Gamble, Ed; Holzmann, Gerard

    2011-01-01

    Part of the US DOT investigation of Toyota SUA involved analysis of the throttle control software. JPL LaRS applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses

  10. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  11. Twisting, supercoiling and stretching in protein bound DNA

    NASA Astrophysics Data System (ADS)

    Lam, Pui-Man; Zhen, Yi

    2018-04-01

    We have calculated theoretical results for the torque and slope of twisted DNA with various proteins bound to it, using the Neukirch-Marko model, in the regime where plectonemes exist. We found that the torque in protein-bound DNA decreases compared to that in bare DNA. This is caused by the decrease in the free energy g(f), and hence the smaller persistence lengths, in the case of protein-bound DNA. We hope our results will encourage experimental investigations of supercoiling in protein-bound DNA, which can provide further tests of the Neukirch-Marko model.

  12. 76 FR 53348 - Airworthiness Directives; BAE SYSTEMS (Operations) Limited Model BAe 146 Airplanes and Model Avro...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... Maintenance Manual (AMM) includes chapters 05-10 ``Time Limits'', 05-15 ``Critical Design Configuration... 05, ``Time Limits/Maintenance Checks,'' of BAe 146 Series/AVRO 146-RJ Series Aircraft Maintenance... Chapter 05, ``Time Limits/ Maintenance Checks,'' of the BAE SYSTEMS (Operations) Limited BAe 146 Series...

  13. 78 FR 40063 - Airworthiness Directives; Erickson Air-Crane Incorporated Helicopters (Type Certificate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    ... Sikorsky Model S-64E helicopters. The AD requires repetitive checks of the Blade Inspection Method (BIM... and check procedures for BIM blades installed on the Model S-64F helicopters. Several blade spars with a crack emanating from corrosion pits and other damage have been found because of BIM pressure...

  14. Slicing AADL Specifications for Model Checking

    NASA Technical Reports Server (NTRS)

    Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas

    2010-01-01

    To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.
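
    At its core, slicing of this kind is a backward reachability computation over a dependence relation: keep exactly the specification elements on which the property transitively depends. The generic sketch below uses an invented dependence map; constructing the dependences from AADL process, port, and connection structure is the real work and is not shown.

      def backward_slice(deps, criterion):
          # deps: element -> set of elements it depends on.
          # criterion: elements mentioned by the property to be verified.
          keep, stack = set(criterion), list(criterion)
          while stack:
              x = stack.pop()
              for y in deps.get(x, ()):
                  if y not in keep:
                      keep.add(y)
                      stack.append(y)
          return keep

      deps = {"alarm": {"sensor", "mode"}, "mode": {"switch"},
              "log": {"alarm", "clock"}}
      print(backward_slice(deps, {"alarm"}))
      # {'alarm', 'sensor', 'mode', 'switch'}: 'log' and 'clock' are sliced away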

  15. 75 FR 42585 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... (Low Stage Bleed Check Valve) specified in Section 1 of the EMBRAER 170 Maintenance Review Board Report...-11-02-002 (Low Stage Bleed Check Valve), specified in Section 1 of the EMBRAER 170 Maintenance Review... Task 36-11-02-002 (Low Stage Bleed Check Valve) specified in Section 1 of the EMBRAER 170 Maintenance...

  16. 75 FR 9816 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... maintenance plan to include repetitive functional tests of the low-stage check valve. For certain other... program to include maintenance Task Number 36-11-02- 002 (Low Stage Bleed Check Valve), specified in... Check Valve) in Section 1 of the EMBRAER 170 Maintenance Review Board Report MRB-1621. Issued in Renton...

  17. 75 FR 39811 - Airworthiness Directives; The Boeing Company Model 777 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... Service Bulletin 777-57A0064, dated March 26, 2009, it is not necessary to perform the torque check on the... instructions in Boeing Alert Service Bulletin 777-57A0064, dated March 26, 2009, a torque check is redundant... are less than those for the torque check. Boeing notes that it plans to issue a new revision to this...

  18. String tensions in deformed Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Poppitz, Erich; Shalchian T., M. Erfan

    2018-01-01

    We study k-strings in deformed Yang-Mills (dYM) with SU(N) gauge group in the semiclassically calculable regime on R^3 × S^1. Their tensions T_k are computed in two ways: numerically, for 2 ≤ N ≤ 10, and via an analytic approach using a re-summed perturbative expansion. The latter serves both as a consistency check on the numerical results and as a tool to analytically study the large-N limit. We find that dYM k-string ratios T_k/T_1 do not obey the well-known sine- or Casimir-scaling laws. Instead, we show that the ratios T_k/T_1 are bounded above by a square root of Casimir scaling, previously found to hold for stringlike solutions of the MIT Bag Model. The reason behind this similarity is that dYM dynamically realizes, in a theoretically controlled setting, the main model assumptions of the Bag Model. We also compare confining strings in dYM and in other four-dimensional theories with abelian confinement, notably Seiberg-Witten theory, and show that the unbroken Z_N center symmetry in dYM leads to different properties of k-strings in the two theories; for example, a "baryon vertex" exists in dYM but not in softly-broken Seiberg-Witten theory. Our results also indicate that, at large values of N, k-strings in dYM do not become free.

  19. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…

  20. Divergences and estimating tight bounds on Bayes error with applications to multivariate Gaussian copula and latent Gaussian copula

    NASA Astrophysics Data System (ADS)

    Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.

    2017-04-01

    In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as the Chernoff, Bhattacharyya, and J-divergence bounds. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models, which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on the Bayes error. We present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means of computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
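
    The Monte Carlo computation mentioned in the closing sentence can be sketched directly from the definition of the triangular discrimination, D(P,Q) = \int (p-q)^2/(p+q) dx, by sampling from the equal-weight mixture of P and Q. The distributions below are illustrative only; the paper's Gaussian copula modeling and its Bayes-error bound formulas are not reproduced here.

      import numpy as np
      from scipy.stats import norm

      def triangle_divergence_mc(p, q, n=200_000, seed=0):
          # With x ~ (P+Q)/2, the integrand divided by the mixture density
          # is 2*(p-q)^2/(p+q)^2, whose sample mean estimates D(P, Q).
          rng = np.random.default_rng(seed)
          xs = np.concatenate([p.rvs(n // 2, random_state=rng),
                               q.rvs(n // 2, random_state=rng)])
          fp, fq = p.pdf(xs), q.pdf(xs)
          return float(np.mean(2.0 * (fp - fq) ** 2 / (fp + fq) ** 2))

      print(triangle_divergence_mc(norm(0, 1), norm(1, 1)))   # two Gaussians, one sigma apart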

  1. Bounded rationality alters the dynamics of paediatric immunization acceptance.

    PubMed

    Oraby, Tamer; Bauch, Chris T

    2015-06-02

    Interactions between disease dynamics and vaccinating behavior have been explored in many coupled behavior-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of the vaccinating behaviour, and represent departures from the pure "rational" decision model that are often described as "bounded rationality". However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behavior model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In contrast, in other cases, a low cost, highly efficacious vaccine can be refused, even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal, if vaccine acceptance is sufficiently high in the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted.

  2. Bounded rationality alters the dynamics of paediatric immunization acceptance

    PubMed Central

    Oraby, Tamer; Bauch, Chris T.

    2015-01-01

    Interactions between disease dynamics and vaccinating behavior have been explored in many coupled behavior-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of the vaccinating behaviour, and represent departures from the pure “rational” decision model that are often described as “bounded rationality”. However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behavior model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In contrast, in other cases, a low cost, highly efficacious vaccine can be refused, even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal, if vaccine acceptance is sufficiently high in the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted. PMID:26035413

  3. Planck limits on non-canonical generalizations of large-field inflation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nina K.; Kinney, William H., E-mail: ninastei@buffalo.edu, E-mail: whkinney@buffalo.edu

    2017-04-01

    In this paper, we consider two case examples of Dirac-Born-Infeld (DBI) generalizations of canonical large-field inflation models, characterized by a reduced sound speed, c_S < 1. The reduced speed of sound lowers the tensor-scalar ratio, improving the fit of the models to the data, but increases the equilateral-mode non-Gaussianity, f_NL^equil, which the latest results from the Planck satellite constrain by a new upper bound. We examine constraints on these models in light of the most recent Planck and BICEP/Keck results, and find that they have a greatly decreased window of viability. The upper bound on f_NL^equil corresponds to a lower bound on the sound speed and a corresponding lower bound on the tensor-scalar ratio of r ∼ 0.01, so that near-future Cosmic Microwave Background observations may be capable of ruling out entire classes of DBI inflation models. The result is, however, not universal: infrared-type DBI inflation models, where the speed of sound increases with time, are not subject to the bound.

  4. Checking Dimensionality in Item Response Models with Principal Component Analysis on Standardized Residuals

    ERIC Educational Resources Information Center

    Chou, Yeh-Tai; Wang, Wen-Chung

    2010-01-01

    Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
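
    The rule of thumb described above translates directly into a short computation: standardize the item residuals, take the eigenvalues of their correlation matrix, and flag possible multidimensionality when the first eigenvalue exceeds 1.5. The residual matrix below is a random placeholder; in real use it would come from a fitted Rasch model.

      import numpy as np

      def first_eigenvalue_check(residuals, threshold=1.5):
          # residuals: persons x items matrix of model residuals.
          z = (residuals - residuals.mean(0)) / residuals.std(0, ddof=1)
          eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))
          first = float(eigvals[-1])       # eigvalsh returns ascending order
          return first, first > threshold

      rng = np.random.default_rng(1)
      print(first_eigenvalue_check(rng.normal(size=(500, 20))))
      # ~1.4 for pure noise, typically below the 1.5 flag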

  5. Stress analysis of 27% scale model of AH-64 main rotor hub

    NASA Technical Reports Server (NTRS)

    Hodges, R. V.

    1985-01-01

    Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress checked for future application.

  6. Updated RICE Bounds on Ultrahigh Energy Neutrino fluxes and interactions

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid; McKay, Douglas

    2006-04-01

    We explore limits on low scale gravity models set by results from the Radio Ice Cherenkov Experiment's (RICE) ongoing search for cosmic ray neutrinos in the cosmogenic, or GZK, energy range. The bound on M_D, the fundamental scale of gravity, depends upon the cosmogenic flux model, the black hole formation and decay treatments, the inclusion of graviton-mediated elastic neutrino processes, and the number of large extra dimensions, d. We find bounds in the interval 0.9 TeV < M_D < 10 TeV. Values d = 5, 6 and 7, for which laboratory and astrophysical bounds on LSG models are less restrictive, lead to essentially the same limits on M_D.

  7. Do alcohol compliance checks decrease underage sales at neighboring establishments?

    PubMed

    Erickson, Darin J; Smolenski, Derek J; Toomey, Traci L; Carlin, Bradley P; Wagenaar, Alexander C

    2013-11-01

    Underage alcohol compliance checks conducted by law enforcement agencies can reduce the likelihood of illegal alcohol sales at checked alcohol establishments, and theory suggests that an alcohol establishment that is checked may warn nearby establishments that compliance checks are being conducted in the area. In this study, we examined whether the effects of compliance checks diffuse to neighboring establishments. We used data from the Complying with the Minimum Drinking Age trial, which included more than 2,000 compliance checks conducted at more than 900 alcohol establishments. The primary outcome was the sale of alcohol to a pseudo-underage buyer without the need for age identification. A multilevel logistic regression was used to model the effect of a compliance check at each establishment as well as the effect of compliance checks at neighboring establishments within 500 m (stratified into four equal-radius concentric rings), after buyer, license, establishment, and community-level variables were controlled for. We observed a decrease in the likelihood of establishments selling alcohol to underage youth after they had been checked by law enforcement, but these effects quickly decayed over time. Establishments that had a close neighbor (within 125 m) checked in the past 90 days were also less likely to sell alcohol to young-appearing buyers. The spatial effect of compliance checks on other establishments decayed rapidly with increasing distance. Results confirm the hypothesis that the effects of police compliance checks do spill over to neighboring establishments. These findings have implications for the development of an optimal schedule of police compliance checks.
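
    As a rough illustration of the regression structure described above, here is a deliberately simplified single-level stand-in for the study's multilevel model; the column names and synthetic data are hypothetical:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "sale":         rng.binomial(1, 0.3, n),   # sale to pseudo-underage buyer
        "checked_90d":  rng.binomial(1, 0.2, n),   # checked in past 90 days
        "nbr_0_125m":   rng.binomial(1, 0.1, n),   # neighbor within 125 m checked
        "nbr_125_250m": rng.binomial(1, 0.1, n),   # neighbor in next ring checked
    })
    fit = smf.logit("sale ~ checked_90d + nbr_0_125m + nbr_125_250m",
                    df).fit(disp=False)
    print(np.exp(fit.params))  # odds ratios; values < 1 indicate fewer sales
    ```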

  8. Observational analysis on inflammatory reaction to talc pleurodesis: Small and large animal model series review

    PubMed Central

    Vannucci, Jacopo; Bellezza, Guido; Matricardi, Alberto; Moretti, Giulia; Bufalari, Antonello; Cagini, Lucio; Puma, Francesco; Daddi, Niccolò

    2018-01-01

    Talc pleurodesis has been associated with pleuropulmonary damage, particularly long-term damage due to its inert nature. The present model series review aimed to assess the safety of this procedure by examining inflammatory stimulus, biocompatibility and tissue reaction following talc pleurodesis. Talc slurry was performed in rabbits: 200 mg/kg checked at postoperative day 14 (five models), 200 mg/kg checked at postoperative day 28 (five models), 40 mg/kg checked at postoperative day 14 (five models), and 40 mg/kg checked at postoperative day 28 (five models). Talc poudrage was performed in pigs: 55 mg/kg checked at postoperative day 60 (18 models). Tissue inspection and data collection followed the surgical pathology approach currently used in clinical practice. As this was an observational study, no statistical analysis was performed. Regarding the rabbit model (Oryctolagus cuniculus), the extent of adhesions ranged between 0 and 30%, and between 0 and 10%, following 14 and 28 days, respectively. No intraparenchymal granuloma was observed, whereas pleural granulomas were extensively encountered following both talc dosages, with more evidence of visceral pleura granulomas following 200 mg/kg compared with 40 mg/kg. Severe florid inflammation was observed in 2/10 cases following 40 mg/kg. Parathymic and pericardium granulomas and mediastinal lymphadenopathy were evidenced at 28 days. At 60 days, findings ranging from rare adhesions to extended pleurodesis were observed in the pig model (Sus scrofa domesticus). Pleural granulomas were ubiquitous on visceral and parietal pleurae. Severe spotted inflammation among the adhesions was recorded in 15/18 pigs. Intraparenchymal granulomas were observed in 9/18 lungs. Talc produced unpredictable pleurodesis in both animal models with enduring pleural inflammation, whether it was performed via slurry or poudrage. Furthermore, talc appeared to have triggered extended pleural damage, intraparenchymal nodules (porcine poudrage) and mediastinal migration (rabbit slurry). PMID:29403549

  9. Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Jun, E-mail: junsuzuki@uec.ac.jp

    The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed-states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD) Fisher information, the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for the following two important classes of quantum statistical models: those for which the Holevo bound coincides with the SLD Cramér-Rao bound, and those for which it coincides with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly when the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.
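
    For orientation, the two scalar Cramér-Rao bounds the abstract compares against can be written in standard textbook form (not transcribed from the paper); here W is the weight matrix and J_S, J_R are the SLD and RLD quantum Fisher information matrices:

    ```latex
    % Standard scalar quantum Cramér-Rao bounds (textbook notation).
    \begin{align}
      C_{\mathrm{SLD}}(W) &= \operatorname{Tr}\!\left[ W J_S^{-1} \right], \\
      C_{\mathrm{RLD}}(W) &= \operatorname{Tr}\!\left[ W \operatorname{Re} J_R^{-1} \right]
        + \operatorname{Tr}\!\left| \sqrt{W}\, \operatorname{Im} J_R^{-1} \sqrt{W} \right|, \\
      C_H(W) &\ge \max\bigl\{ C_{\mathrm{SLD}}(W),\, C_{\mathrm{RLD}}(W) \bigr\},
    \end{align}
    % where C_H is the Holevo bound, which the paper evaluates explicitly
    % for two-parameter qubit models.
    ```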

  10. Analyzing Phylogenetic Trees with Timed and Probabilistic Model Checking: The Lactose Persistence Case Study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-12-01

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.
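
    The timed and probabilistic specifications mentioned above bottom out in PCTL-style bounded-until queries, which can be evaluated on a discrete-time Markov chain by simple value iteration. The sketch below is a generic evaluator under that assumption, not the authors' tooling:

    ```python
    import numpy as np

    def bounded_until(P, a, b, k):
        """Probability of the PCTL path formula 'a U<=k b' from each state
        of a DTMC with row-stochastic transition matrix P."""
        p = b.astype(float)                    # b-states satisfy it in 0 steps
        for _ in range(k):
            p = np.where(b, 1.0, np.where(a, P @ p, 0.0))
        return p

    # Toy 3-state chain: states 0 and 1 satisfy a, state 2 satisfies b.
    P = np.array([[0.5, 0.4, 0.1],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
    a = np.array([True, True, False])
    b = np.array([False, False, True])
    print(bounded_until(P, a, b, 10))
    ```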

  11. Analyzing phylogenetic trees with timed and probabilistic model checking: the lactose persistence case study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-10-23

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.

  12. Model Checking for Verification of Interactive Health IT Systems

    PubMed Central

    Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui

    2015-01-01

    Rigorous methods for design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166

  13. Interpreting the spatio-temporal patterns of sea turtle strandings: Going with the flow

    USGS Publications Warehouse

    Hart, K.M.; Mooreside, P.; Crowder, L.B.

    2006-01-01

    Knowledge of the spatial and temporal distribution of specific mortality sources is crucial for management of species that are vulnerable to human interactions. Beachcast carcasses represent an unknown fraction of at-sea mortalities. While a variety of physical (e.g., water temperature) and biological (e.g., decomposition) factors as well as the distribution of animals and their mortality sources likely affect the probability of carcass stranding, physical oceanography plays a major role in where and when carcasses strand. Here, we evaluate the influence of nearshore physical oceanographic and wind regimes on sea turtle strandings to decipher seasonal trends and make qualitative predictions about stranding patterns along oceanfront beaches. We use results from oceanic drift-bottle experiments to check our predictions and provide an upper limit on stranding proportions. We compare predicted current regimes from a 3D physical oceanographic model to spatial and temporal locations of both sea turtle carcass strandings and drift bottle landfalls. Drift bottle return rates suggest an upper limit for the proportion of sea turtle carcasses that strand (about 20%). In the South Atlantic Bight, seasonal development of along-shelf flow coincides with increased numbers of strandings of both turtles and drift bottles in late spring and early summer. The model also predicts net offshore flow of surface waters during winter - the season with the fewest relative strandings. The drift bottle data provide a reasonable upper bound on how likely carcasses are to reach land from points offshore and bound the general timeframe for stranding post-mortem (< two weeks). Our findings suggest that marine turtle strandings follow a seasonal regime predictable from physical oceanography and mimicked by drift bottle experiments. Managers can use these findings to reevaluate incidental strandings limits and fishery takes for both nearshore and offshore mortality sources. © 2005 Elsevier Ltd. All rights reserved.

  14. The screening Horndeski cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starobinsky, Alexei A.; Department of General Relativity and Gravitation, Institute of Physics, Kazan Federal University, Kremlevskaya street 18, 420008 Kazan; Sushkov, Sergey V.

    2016-06-06

    We present a systematic analysis of homogeneous and isotropic cosmologies in a particular Horndeski model with Galileon shift symmetry, containing also a Λ-term and matter. The model, sometimes called Fab Five, admits a rich spectrum of solutions. Some of them describe the standard late time cosmological dynamics dominated by the Λ-term and matter, while at early times the universe expands with a constant Hubble rate determined by the value of the scalar kinetic coupling. For other solutions the Λ-term and matter are screened at all times, but there are nevertheless early and late accelerating phases. The model also admits bounces, as well as peculiar solutions describing “the emergence of time”. Most of these solutions contain ghosts in the scalar and tensor sectors. However, a careful analysis reveals three different branches of ghost-free solutions, all showing a late time acceleration phase. We analyse the dynamical stability of these solutions and find that all of them are stable in the future, since all their perturbations stay bounded at late times. However, they all turn out to be unstable in the past, as their perturbations grow violently when one approaches the initial spacetime singularity. We therefore conclude that the model has no viable solutions describing the whole of the cosmological history, although it may describe the current acceleration phase. We also check that the flat space solution is ghost-free in the model, but it may acquire ghosts in more general versions of the Horndeski theory.

  15. Comparison of ultrafiltration and solid phase extraction for the separation of free and protein-bound serum copper for the Wilson's disease diagnosis.

    PubMed

    Bohrer, Denise; Do Nascimento, Paulo Cícero; Ramirez, Adrian G; Mendonça, Jean Karlo A; De Carvalho, Leandro M; Pomblum, Solange Cristina G

    2004-07-01

    The determination of the ratio of free to protein-bound serum copper, along with urinary copper, can be used as a preliminary test for the diagnosis of Wilson's disease. In this work, the determination of these copper fractions in serum samples was carried out in two different ways: after separation of the copper bound to proteins from the free fraction by a column for protein adsorption, and by ultrafiltration. As proteins can be adsorbed onto plastic polymeric surfaces, polyethylene (PE) with different molecular weights in powder form was investigated for protein adsorption. A small column was adapted in a flow system to carry out solid-phase extraction (SPE) on-line. Preliminary experiments defined conditions for protein retention and elution and column saturation. Good performance was achieved using Mg(NO3)2 solution as carrier and methanol as eluent. The presence of proteins in both fractions (column effluent and eluate) was checked by the Coomassie Brilliant Blue test. Copper was measured by graphite furnace atomic absorption spectrometry. The measurement in the column effluent furnished the free fraction of copper, while that in the eluate furnished the bound fraction. The method was compared with ultrafiltration (20 kDa), measuring the free copper in the ultrafiltrate. For the determination of protein-bound copper, the copper found in the ultrafiltrate was subtracted from the total copper measured in the sample. Serum samples of 10 individuals were analyzed by both methods with good agreement of the results. The regression plots, obtained by analysing the samples by both methods, presented r² and slope values of 0.97 and 0.96 for free copper and 1.00 and 1.00 for bound copper, respectively. Protein-bound copper (PB) concentrations ranged from 74 to 2074 microg/l and free copper (F) from 22 to 54 microg/l. The ratio F/PB, calculated from SPE data, was 29.7% for one individual with well-characterized Wilson's disease, and ranged from 1.2% to 5.2% for the others. The SPE method performed well in terms of accuracy and precision, and showed good agreement with ultrafiltration. Advantages of SPE are the small sample volume (50 microl), separation carried out in 10 min, and the use of the same column for several analyses. Copyright 2004 Elsevier B.V.

  16. LHC phenomenology of SO(10) models with Yukawa unification

    NASA Astrophysics Data System (ADS)

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart; Wingerter, Akın

    2013-10-01

    In this paper we study an SO(10) SUSY GUT with Yukawa unification for the third generation. We perform a global χ² analysis to obtain the GUT boundary conditions consistent with 11 low-energy observables, including the top, bottom and tau masses. We assume a universal mass, m16, for squarks and sleptons and a universal gaugino mass, M1/2. We then analyze the phenomenological consequences for the LHC for 15 benchmark models with fixed m16 = 20 TeV and with varying values of the gluino mass. The goal of the present work is to (i) evaluate the lower bound on the gluino mass in our model coming from the most recent published data of CMS and (ii) compare this bound with similar bounds obtained by CMS using simplified models. The bottom line is that the bounds coming from the same-sign dilepton analysis are comparable for our model and the simplified model studied assuming B(g̃ → tt̄χ̃₁⁰) = 100%. However, the bounds coming from the purely hadronic analyses for our model are 10%-20% lower than obtained for the simplified models. This is due to the fact that for our models the branching ratio for the decay g̃ → gχ̃₁,₂⁰ is significant. Thus there are significantly fewer b-jets. We find a lower bound on the gluino mass in our models of M_g̃ ≳ 1000 GeV. Finally, there is a theoretical upper bound on the gluino mass which increases with the value of m16. For m16 ≤ 30 TeV, the gluino mass satisfies M_g̃ ≤ 2.8 TeV at 90% C.L. Thus, unless we further increase the amount of fine-tuning, we expect gluinos to be discovered at LHC 14.

  17. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least-fixed-point-based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, and (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
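
    A minimal propositional sketch of the co-SLD success rule described above (a goal succeeds coinductively when it matches an ancestor call); real Co-LP systems additionally handle unification and rational infinite terms, which are omitted here:

    ```python
    def co_sld(goal, rules, ancestors=frozenset()):
        """Propositional co-SLD: succeed if the goal equals an ancestor
        call (coinductive success) or some clause body fully succeeds."""
        if goal in ancestors:
            return True  # greatest-fixed-point success on a circular derivation
        return any(all(co_sld(g, rules, ancestors | {goal}) for g in body)
                   for body in rules.get(goal, []))

    # stream :- bit, stream.   bit.   (an infinite, circular derivation)
    rules = {"stream": [["bit", "stream"]], "bit": [[]]}
    print(co_sld("stream", rules))  # True under co-SLD; plain SLD loops forever
    ```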

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
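
    The simplest instance of such bounds is the Fréchet-Hoeffding inequality, which brackets a joint probability using only the marginals and holds under any dependence structure; this is a standard result shown for illustration, not a method specific to the report:

    ```python
    def frechet_bounds(Fx, Fy):
        """Frechet-Hoeffding bounds on the joint CDF value F(x, y),
        given marginal CDF values Fx = P(X <= x) and Fy = P(Y <= y)."""
        return max(0.0, Fx + Fy - 1.0), min(Fx, Fy)

    # With P(X <= x) = 0.7 and P(Y <= y) = 0.6, any dependence model must
    # give a joint probability between 0.3 and 0.6.
    print(frechet_bounds(0.7, 0.6))  # (0.3, 0.6)
    ```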

  19. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
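
    A minimal Monte Carlo sketch of the posterior predictive p-value compared above, for a toy normal model; the simulator, test statistic, and posterior approximation are illustrative assumptions rather than the paper's setup:

    ```python
    import numpy as np

    def posterior_predictive_pvalue(y_obs, posterior_draws, simulate, stat):
        """Fraction of replicated data sets whose test statistic is at
        least as extreme as that of the observed data."""
        t_obs = stat(y_obs)
        t_rep = np.array([stat(simulate(th)) for th in posterior_draws])
        return float(np.mean(t_rep >= t_obs))

    rng = np.random.default_rng(1)
    y = rng.normal(0.0, 1.0, 50)                             # observed data
    draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 2000)  # approx. posterior
    p = posterior_predictive_pvalue(
        y, draws, lambda th: rng.normal(th, 1.0, 50), np.var)
    ```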

  20. Secure open cloud in data transmission using reference pattern and identity with enhanced remote privacy checking

    NASA Astrophysics Data System (ADS)

    Vijay Singh, Ran; Agilandeeswari, L.

    2017-11-01

    Handling the large amount of client data in the open cloud raises many security issues that need to be addressed. A client's private data should not be known to other group members without the data owner's valid permission. Sometimes clients are also prevented from accessing open cloud servers due to certain restrictions. To overcome these security issues and restrictions related to storage, data sharing in an inter-domain network, and privacy checking, we propose a model based on identity-based cryptography for data transmission, together with an intermediate entity that holds the client's reference and identity and controls data transmission in an open cloud environment, and an extended remote privacy checking technique that works on the admin side. Acting on the data owner's authority, the proposed model offers secure cryptography in data transmission and remote privacy checking in private, public, or instructed mode. The hardness of the Computational Diffie-Hellman problem underlying the key exchange makes the proposed model more secure than existing models used in public cloud environments.
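
    The Computational Diffie-Hellman assumption invoked above can be illustrated with a toy key exchange; the prime and generator below are demonstration values only and are far too small for real deployments, which use standardized groups or elliptic curves:

    ```python
    import secrets

    p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; demo-sized, NOT secure
    g = 5

    a = secrets.randbelow(p - 2) + 1      # client's secret exponent
    b = secrets.randbelow(p - 2) + 1      # server's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)     # public values exchanged in the open

    # Both sides derive the same shared secret; recovering it from
    # (g, A, B) alone is exactly the CDH problem.
    assert pow(B, a, p) == pow(A, b, p)
    ```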

  1. Spot-checks to measure general hygiene practice.

    PubMed

    Sonego, Ina L; Mosler, Hans-Joachim

    2016-01-01

    A variety of hygiene behaviors are fundamental to the prevention of diarrhea. We used spot-checks in a survey of 761 households in Burundi to examine whether something we could call general hygiene practice is responsible for more specific hygiene behaviors, ranging from handwashing to sweeping the floor. Using structural equation modeling, we showed that clusters of hygiene behavior, such as primary caregivers' cleanliness and household cleanliness, explained the spot-check findings well. Within our model, general hygiene practice as overall concept explained the more specific clusters of hygiene behavior well. Furthermore, the higher general hygiene practice, the more likely children were to be categorized healthy (r = 0.46). General hygiene practice was correlated with commitment to hygiene (r = 0.52), indicating a strong association to psychosocial determinants. The results show that different hygiene behaviors co-occur regularly. Using spot-checks, the general hygiene practice of a household can be rated quickly and easily.

  2. Towards the Solution of the Many-Electron Problem in Real Materials: Equation of State of the Hydrogen Chain with State-of-the-Art Many-Body Methods

    DOE PAGES

    Motta, Mario; Ceperley, David M.; Chan, Garnet Kin-Lic; ...

    2017-09-28

    We present numerical results for the equation of state of an infinite chain of hydrogen atoms. A variety of modern many-body methods are employed, with exhaustive cross-checks and validation. Approaches for reaching the continuous space limit and the thermodynamic limit are investigated, proposed, and tested. The detailed comparisons provide a benchmark for assessing the current state of the art in many-body computation, and for the development of new methods. The ground-state energy per atom in the linear chain is accurately determined versus bond length, with a confidence bound given on all uncertainties.

  3. Towards the Solution of the Many-Electron Problem in Real Materials: Equation of State of the Hydrogen Chain with State-of-the-Art Many-Body Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motta, Mario; Ceperley, David M.; Chan, Garnet Kin-Lic

    We present numerical results for the equation of state of an infinite chain of hydrogen atoms. A variety of modern many-body methods are employed, with exhaustive cross-checks and validation. Approaches for reaching the continuous space limit and the thermodynamic limit are investigated, proposed, and tested. The detailed comparisons provide a benchmark for assessing the current state of the art in many-body computation, and for the development of new methods. The ground-state energy per atom in the linear chain is accurately determined versus bond length, with a confidence bound given on all uncertainties.

  4. Numerical Implementation of the Cohesive Soil Bounding Surface Plasticity Model. Volume I.

    DTIC Science & Technology

    1983-02-01

    A study of various numerical means for implementing the bounding surface plasticity model for cohesive soils is presented. A comparison is made of…

  5. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  6. Evaluation of properties over phylogenetic trees using stochastic logics.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2016-06-14

    Model checking has recently been introduced as an integrated framework for extracting information from phylogenetic trees using temporal logic as a querying language, an extension of modal logic that imposes restrictions on a boolean formula along a path of events. The phylogenetic tree is considered a transition system modeling evolution as a sequence of genomic mutations (we understand mutation as the different ways that DNA can be changed), while this kind of logic is suitable for traversing it in a strict and exhaustive way. Given a biological property that we desire to inspect over the phylogeny, the verifier returns true if the specification is satisfied, or a counterexample that falsifies it. However, this approach has only been considered over qualitative aspects of the phylogeny. In this paper, we address the limitations of the previous framework by including and handling quantitative information such as explicit time or probability. To this end, we apply current probabilistic continuous-time extensions of model checking to phylogenetics. We reinterpret a catalog of qualitative properties in a numerical way, and we also present new properties that could not be analyzed before. For instance, we obtain the likelihood of a tree topology according to a mutation model. As a case study, we analyze several phylogenies in order to obtain the maximum likelihood with the model checking tool PRISM. In addition, we have adapted the software to optimize the computation of maximum likelihoods. We have shown that probabilistic model checking is a competitive framework for describing and analyzing quantitative properties over phylogenetic trees. This formalism adds soundness and readability to the definition of models and specifications. Besides, the existence of model checking tools hides the underlying technology, sparing biologists the extension, upgrade, debugging and maintenance of a software tool. A set of benchmarks justifies the feasibility of our approach.

  7. A Comparison of the Cheater Detection and the Unrelated Question Models: A Randomized Response Survey on Physical and Cognitive Doping in Recreational Triathletes

    PubMed Central

    Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles

    2016-01-01

    Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
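
    For reference, the UQM point estimate follows from a one-line identity on the observed 'yes' rate; the design parameters and counts below are hypothetical, not the study's:

    ```python
    def uqm_estimate(n_yes, n, p=0.75, q=0.5):
        """Unrelated Question Model estimator. p: probability the randomizer
        directs a respondent to the sensitive question; q: known 'yes' rate
        of the unrelated question. Observed rate lam = p*pi + (1-p)*q,
        so pi = (lam - (1-p)*q) / p."""
        lam = n_yes / n
        return (lam - (1 - p) * q) / p

    print(round(uqm_estimate(300, 1000), 3))  # 0.233 for these toy numbers
    ```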

  8. Net Weight Issue LLNL DOE-STD-3013 Containers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilk, P

    2008-01-16

    The following position paper will describe DOE-STD-3013 container sets No. L000072 and No. L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above-indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were used to generate an upper limit to the measured weight of containers No. L000072 and No. L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004.

  9. Induction of hyaluronan cables and monocyte adherence in epidermal keratinocytes.

    PubMed

    Jokela, Tiina A; Lindgren, Antti; Rilla, Kirsi; Maytin, Edward; Hascall, Vincent C; Tammi, Raija H; Tammi, Markku I

    2008-01-01

    Hyaluronan attached to the cell surface can form at least two very different structures: a pericellular coat close to the plasma membrane, and hyaluronan chains coalesced into "cables" that can span several cell lengths. The hyaluronan in cables, induced by many inflammatory agents, can bind leukocytes, whereas that in the pericellular coat does not contribute to leukocyte binding. Therefore, this structural change seems to have a major role in inflammation. In the present study we checked whether cells of squamous epithelium, like epidermal keratinocytes, can form hyaluronan cables and bind leukocytes. In addition, we checked whether hyaluronan synthesis is affected during the induction of cables. Control keratinocytes expressed pericellular hyaluronan as small patches on the plasma membrane. But when treated with inflammatory agents or stressful conditions (tunicamycin, interleukin-1beta, tumor necrosis factor-alpha, and high glucose concentration), hyaluronan organization changed into cable-like structures that avidly bound monocytes. Simultaneously, the total amount of secreted hyaluronan was slightly decreased, and the expression levels of hyaluronan synthases (Has1-3) and CD44 were not significantly changed. The results show that epidermal keratinocytes can form cables and bind leukocytes under inflammatory provocation and that these effects are not dependent on stimulation of hyaluronan secretion.

  10. Scattering of quasiparticles in ³He-⁴He mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagchi, A.; Ruvalds, J.

    Considering the elementary excitation spectrum of ³He-⁴He mixtures to be of the form proposed by Landau and Pomeranchuk, the scattering cross section for roton and ³He quasiparticle collisions was calculated taking final-state interactions into account. The theory demonstrates the importance of final-state interactions in renormalizing the roton energy and lifetime. Previous theories based on the Born approximation are shown to give unreliable results for the change of the energy and lifetime of rotons in dilute ³He-⁴He mixtures owing to roton-³He scattering. Upper bounds on the changes in the energy and lifetime of a roton as a function of the roton-³He coupling strength were obtained using a simplified model for the coupling. These bounds give an insignificant change of the roton energy with the ³He concentration and thus explain recent neutron-scattering and Raman data on the mixtures. Effects of level repulsion between rotons and the ³He quasiparticle-hole continuum are calculated, and estimated to be small on the basis of recent Raman data. However, decay of a roton into a ³He quasiparticle-hole pair may give rise to an interesting concentration dependence of the roton linewidth. Further experimental studies of the mixtures are suggested, which may check the detailed predictions of the theory and provide insight into the momentum dependence of the coupling parameters. The present analysis represents an essential link between microscopic theories of the quasiparticle coupling and related experiments on dilute ³He-⁴He mixtures.

  11. Stationary and oscillatory bound states of dissipative solitons created by third-order dispersion

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Skryabin, Dmitry V.; Malomed, Boris A.

    2018-06-01

    We consider the model of fiber-laser cavities near the zero-dispersion point, based on the complex Ginzburg-Landau equation with the cubic-quintic nonlinearity, including the third-order dispersion (TOD) term. It is well known that this model supports stable dissipative solitons. We demonstrate that the same model gives rise to several families of robust bound states of the solitons, which exist only in the presence of the TOD. There are both stationary and dynamical bound states, with oscillating separation between the bound solitons. Stationary states are multistable, corresponding to different values of the separation. With the increase of the TOD coefficient, the bound state with the smallest separation gives rise to the oscillatory state through the Hopf bifurcation. Further growth of the TOD leads to a bifurcation transforming the oscillatory limit cycle into a strange attractor, which represents a chaotically oscillating dynamical bound state. Families of multistable three- and four-soliton complexes are found too, the ones with the smallest separation between the solitons again ending in a transition to oscillatory states through the Hopf bifurcation.

  12. Model Checking Verification and Validation at JPL and the NASA Fairmont IV and V Facility

    NASA Technical Reports Server (NTRS)

    Schneider, Frank; Easterbrook, Steve; Callahan, Jack; Montgomery, Todd

    1999-01-01

    We show how a technology transfer effort was carried out. The successful use of model checking on a pilot JPL flight project demonstrates the usefulness and the efficacy of the approach. The pilot project was used to model a complex spacecraft controller. Software design and implementation validation were carried out successfully. To suggest future applications we also show how the implementation validation step can be automated. The effort was followed by the formal introduction of the modeling technique as a part of the JPL Quality Assurance process.

  13. Seismic Modeling of the Alasehir Graben, Western Turkey

    NASA Astrophysics Data System (ADS)

    Gozde Okut, Nigar; Demirbag, Emin

    2014-05-01

    The purpose of this study is to develop a depth model for making synthetic seismic reflection sections, such as stacked and migrated sections with different velocity models. The study area is the east-west trending Alasehir graben, one of the most prominent structures in western Anatolia, proved to have geothermal energy potential by researchers and exploration companies. Geological formations were taken from the Alasehir-1 borehole drilled by the Turkish Petroleum Corporation (Çiftçi, 2007), and seismic interval velocities were taken from check-shots in the same borehole (Kolenoǧlu-Demircioǧlu, 2009). The most important structure is the master graben bounding fault (MGBF) in the southern margin of the Alasehir graben. Another main structure is the northern bounding fault, the antithetic fault of the MGBF, with high-angle normal fault character. The MGBF is a crucial contact between the sedimentary cover and the metamorphic basement. From basement to surface, five different stratigraphic units constitute the graben fill. All the sedimentary units thin from the southern margin to the northern margin of the Alasehir graben, displaying roll-over geometry. A commercial seismic data software package was used for the modeling. In the first step, a 2D velocity/depth model was defined. Ray tracing was carried out with the diffraction option to produce the reflection travel times. The reflection coefficients were calculated and wavelet shaping was carried out by means of band-pass filtering. Finally, a synthetic stacked section of the Alasehir graben was obtained. Then, migrated sections were generated with different velocity models. From the interval velocities, average and RMS velocities were calculated for the formation entries to test how the general features of the geological model may change under different seismic models after migration. The post-stack time migration method was used. Pseudo-velocity analysis was applied at selected CDP locations. In theory, seismic migration moves events to their correct spatial locations and collapses energy from diffractions back to their scattering points. These features of migration can be distinguished in the migrated sections. When interval velocities were used, all the diffractions were removed and fault planes could be seen clearly. When average velocities were used, the MGBF plane extended to greater depths; additionally, the slope angles and locations of the antithetic faults in the northern margin of the graben changed. When RMS velocities were used, the resulting migrated section was quite hard to interpret, especially for the main structures along the northern margin and the reflections related to the formations.

  14. Philosophy and the practice of Bayesian statistics

    PubMed Central

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2015-01-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. PMID:22364575

  15. Philosophy and the practice of Bayesian statistics.

    PubMed

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2013-02-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. © 2012 The British Psychological Society.

  16. Deciphering the nonlocal entanglement entropy of fracton topological orders

    NASA Astrophysics Data System (ADS)

    Shi, Bowen; Lu, Yuan-Ming

    2018-04-01

    The ground states of topological orders condense extended objects and support topological excitations. This nontrivial property leads to nonzero topological entanglement entropy S_topo for conventional topological orders. Fracton topological order is an exotic class of models which is beyond the description of TQFT. With some assumptions about the condensates and the topological excitations, we derive a lower bound of the nonlocal entanglement entropy S_nonlocal (a generalization of S_topo). The lower bound applies to Abelian stabilizer models, including conventional topological orders as well as type-I and type-II fracton models, and it could be used to distinguish them. For fracton models, the lower bound shows that S_nonlocal can take geometry-dependent values, and S_nonlocal is extensive for certain choices of subsystems, including some choices which always give zero for TQFT. The stability of the lower bound under local perturbations is discussed.

  17. A Model-Driven Approach for Telecommunications Network Services Definition

    NASA Astrophysics Data System (ADS)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.

  18. The dynamics of aloof baby Skyrmions

    DOE PAGES

    Salmi, Petja; Sutcliffe, Paul

    2016-01-25

    The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.

  19. The dynamics of aloof baby Skyrmions

    NASA Astrophysics Data System (ADS)

    Salmi, Petja; Sutcliffe, Paul

    2016-01-01

    The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmi, Petja; Sutcliffe, Paul

    The aloof baby Skyrme model is a (2+1)-dimensional theory with solitons that are lightly bound. It is a low-dimensional analogue of a similar Skyrme model in (3+1)-dimensions, where the lightly bound solitons have binding energies comparable to nuclei. A previous study of static solitons in the aloof baby Skyrme model revealed that multi-soliton bound states have a cluster structure, with constituents that preserve their individual identities due to the short-range repulsion and long-range attraction between solitons. Furthermore, there are many different local energy minima that are all well-described by a simple binary species particle model. In this paper we present the first results on soliton dynamics in the aloof baby Skyrme model. Numerical field theory simulations reveal that the lightly bound cluster structure results in a variety of exotic soliton scattering events that are novel in comparison to standard Skyrmion scattering. A dynamical version of the binary species point particle model is shown to provide a good qualitative description of the dynamics.

  1. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.

  2. Complexity Bounds for Quantum Computation

    DTIC Science & Technology

    2007-06-22

    This project focused on upper and lower bounds for quantum computability using constant… classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second…

  3. Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)

    NASA Astrophysics Data System (ADS)

    Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand

    2018-03-01

    Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.

  4. Parallel Software Model Checking

    DTIC Science & Technology

    2015-01-08

    …checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its focus on formal verification. Generalized Property Directed Reachability (GPDR) is an algorithm for solving HORN-SMT reachability…

  5. Stochastic Game Analysis and Latency Awareness for Self-Adaptation

    DTIC Science & Technology

    2014-01-01

    In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the… The contribution of this paper is twofold: (1) a novel analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to…

  6. Spread of entanglement and causality

    NASA Astrophysics Data System (ADS)

    Casini, Horacio; Liu, Hong; Mezei, Márk

    2016-07-01

    We investigate causality constraints on the time evolution of entanglement entropy after a global quench in relativistic theories. We first provide a general proof that the so-called tsunami velocity is bounded by the speed of light. We then generalize the free particle streaming model of [1] to general dimensions and to an arbitrary entanglement pattern of the initial state. In more than two spacetime dimensions the spread of entanglement in these models is highly sensitive to the initial entanglement pattern, but we are able to prove an upper bound on the normalized rate of growth of entanglement entropy, and hence the tsunami velocity. The bound is smaller than what one gets for quenches in holographic theories, which highlights the importance of interactions in the spread of entanglement in many-body systems. We propose an interacting model which we believe provides an upper bound on the spread of entanglement for interacting relativistic theories. In two spacetime dimensions with multiple intervals, this model and its variations are able to reproduce intricate results exhibited by holographic theories for a significant part of the parameter space. For higher dimensions, the model bounds the tsunami velocity at the speed of light. Finally, we construct a geometric model for entanglement propagation based on a tensor network construction for global quenches.

  7. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique has not been used for controller area networks (CAN). In this paper, we use PRISM to model the mechanism of priority messages for CAN, because this mechanism has allowed CAN to become the leader in serial communication for automobile and industrial control. Modeling CAN makes it easy to analyze the characteristics of CAN and thus to further improve the security and efficiency of automobiles. The Markov chain model helps us to model the behaviour of priority messages.
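
    The arbitration mechanism lends itself to a small discrete-time Markov chain. The Python sketch below uses made-up per-slot arrival probabilities (PRISM itself uses its own modeling language) and captures only the core rule that the frame with the lower identifier always wins the bus:

        import numpy as np

        # Per-slot arrival probabilities (hypothetical values).
        p_hi, p_lo = 0.3, 0.5

        # States: 0 = bus idle, 1 = high-priority frame wins, 2 = low-priority frame wins.
        # A low-priority frame is transmitted only when no high-priority frame competes.
        row = [(1 - p_hi) * (1 - p_lo), p_hi, (1 - p_hi) * p_lo]
        P = np.array([row, row, row])    # arbitration is memoryless in this toy chain

        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi /= pi.sum()
        print(f"idle: {pi[0]:.3f}, high wins: {pi[1]:.3f}, low wins: {pi[2]:.3f}")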

  8. The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models; the new model does not require specification of wall distance and has similar or reduced computational effort compared with these models.

  9. Nearly Supersymmetric Dark Atoms

    DOE PAGES

    Behbahani, Siavosh R.; Jankowiak, Martin; Rube, Tomas; ...

    2011-01-01

    Theories of dark matter that support bound states are an intriguing possibility for the identity of the missing mass of the Universe. This article proposes a class of models of supersymmetric composite dark matter where the interactions with the Standard Model communicate supersymmetry breaking to the dark sector. In these models, supersymmetry breaking can be treated as a perturbation on the spectrum of bound states. Using a general formalism, the spectrum with leading supersymmetry effects is computed without specifying the details of the binding dynamics. The interactions of the composite states with the Standard Model are computed, and several benchmark models are described. General features of nonrelativistic supersymmetric bound states are emphasized.

  10. On the likelihood of single-peaked preferences.

    PubMed

    Lackner, Marie-Louise; Lackner, Martin

    2017-01-01

    This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Pólya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is considerably more likely to appear for certain parameterizations.

  11. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g., numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. Just as CFD calculations that include error bounds but omit uncertainty modeling are of limited value, calculations that include uncertainty modeling but omit error bounds are equally limited. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  12. Cell-bound lipases from Burkholderia sp. ZYB002: gene sequence analysis, expression, enzymatic characterization, and 3D structural model.

    PubMed

    Shu, Zhengyu; Lin, Hong; Shi, Shaolei; Mu, Xiangduo; Liu, Yanru; Huang, Jianzhong

    2016-05-03

    The whole-cell lipase from Burkholderia cepacia has been used as a biocatalyst in organic synthesis. However, there is no report in the literature on the components or the gene sequences of the cell-bound lipases from this species. Qualitative analysis of the cell-bound lipase would help to illuminate the regulation mechanism of gene expression and further improve the yield of the cell-bound lipase by genetic engineering. Three predicted cell-bound lipases, lipA, lipC21 and lipC24, from Burkholderia sp. ZYB002 were cloned and expressed in E. coli. Both LipA and LipC24 displayed lipase activity. LipC24 was a novel mesophilic enzyme and displayed a preference for medium-chain-length acyl groups (C10-C14). The 3D structural model of LipC24 revealed the open Y-type active site. LipA displayed 96 % amino acid sequence identity with the known extracellular lipase. lipA-inactivation and lipC24-inactivation decreased the total cell-bound lipase activity of Burkholderia sp. ZYB002 by 42 % and 14 %, respectively. The cell-bound lipase activity of Burkholderia sp. ZYB002 therefore originates from a multi-enzyme mixture, with LipA as the main component. LipC24 is a novel lipase whose enzymatic characteristics and structural model differ from those of LipA. Besides LipA and LipC24, other cell-bound lipases (or esterases) should exist.

  13. Folate-targeted nanoparticles show efficacy in the treatment of inflammatory arthritis

    PubMed Central

    Thomas, Thommey P.; Goonewardena, Sascha N.; Majoros, Istvan; Kotlyar, Alina; Cao, Zhengyi; Leroueil, Pascale R.; Baker, James R.

    2011-01-01

    Objective To investigate the uptake of a poly(amidoamine) dendrimer (generation 5 (G5)) nanoparticle covalently conjugated to polyvalent folic acid (FA) as the targeting ligand into macrophages, and the activity of a FA- and methotrexate-conjugated dendrimer (G5-FA-MTX) as a therapeutic for the inflammatory disease of arthritis. Methods In vitro studies were performed in macrophage cell lines and in isolated mouse macrophages to check the cellular uptake of fluorescently tagged G5-FA nanoparticles, using flow cytometry and confocal microscopy. In vivo studies were conducted in a rat model of collagen-induced arthritis to evaluate the therapeutic potential of G5-FA-MTX. Results The folate-targeted dendrimer bound and internalized in a receptor-specific manner into both folate receptor β-expressing macrophage cell lines and primary mouse macrophages. G5-FA-MTX acted as a potent anti-inflammatory agent and reduced arthritis-induced inflammatory parameters such as ankle swelling, paw volume, cartilage damage, bone resorption and body weight decrease. Conclusion The use of folate-targeted nanoparticles to specifically target MTX into macrophages may provide an effective clinical approach for anti-inflammatory therapy in rheumatoid arthritis. PMID:21618461

  14. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    ERIC Educational Resources Information Center

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  15. MOM: A meteorological data checking expert system in CLIPS

    NASA Technical Reports Server (NTRS)

    Odonnell, Richard

    1990-01-01

    Meteorologists have long faced the problem of verifying the data they use. Experience shows that there is a sizable number of errors in the data reported by meteorological observers. This is unacceptable for computer forecast models, which depend on accurate data for accurate results. Most errors that occur in meteorological data are obvious to the meteorologist, but time constraints prevent hand-checking. For this reason, it is necessary to have a 'front end' to the computer model to ensure the accuracy of input. Various approaches to automatic data quality control have been developed by several groups. MOM is a rule-based system implemented in CLIPS and utilizing 'consistency checks' and 'range checks'. The system is generic in the sense that it knows some meteorological principles, regardless of specific station characteristics. Specific constraints kept as CLIPS facts in a separate file provide for system flexibility. Preliminary results show that the expert system has detected some inconsistencies not noticed by a local expert.
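
    A minimal sketch of the two kinds of checks described above, written in Python with illustrative thresholds (these are not the rules encoded in MOM's CLIPS knowledge base):

        RANGE_LIMITS = {
            "temperature_c": (-90.0, 60.0),
            "dewpoint_c":    (-90.0, 60.0),
            "pressure_hpa":  (870.0, 1085.0),
        }

        def range_check(report):
            """Flag any field outside its physically plausible range."""
            return [field for field, (lo, hi) in RANGE_LIMITS.items()
                    if field in report and not lo <= report[field] <= hi]

        def consistency_check(report):
            """Flag combinations of fields that cannot hold simultaneously."""
            errors = []
            # The dew point can never exceed the air temperature.
            if report.get("dewpoint_c", -999.0) > report.get("temperature_c", 999.0):
                errors.append("dewpoint exceeds temperature")
            return errors

        report = {"temperature_c": 21.0, "dewpoint_c": 25.0, "pressure_hpa": 1013.0}
        print(range_check(report))        # -> []
        print(consistency_check(report))  # -> ['dewpoint exceeds temperature']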

  16. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  17. Indazole-based potent and cell-active Mps1 kinase inhibitors: rational design from pan-kinase inhibitor anthrapyrazolone (SP600125).

    PubMed

    Kusakabe, Ken-ichi; Ide, Nobuyuki; Daigo, Yataro; Tachibana, Yuki; Itoh, Takeshi; Yamamoto, Takahiko; Hashizume, Hiroshi; Hato, Yoshio; Higashino, Kenichi; Okano, Yousuke; Sato, Yuji; Inoue, Makiko; Iguchi, Motofumi; Kanazawa, Takayuki; Ishioka, Yukichi; Dohi, Keiji; Kido, Yasuto; Sakamoto, Shingo; Yasuo, Kazuya; Maeda, Masahiro; Higaki, Masayo; Ueda, Kazuo; Yoshizawa, Hidenori; Baba, Yoshiyasu; Shiota, Takeshi; Murai, Hitoshi; Nakamura, Yusuke

    2013-06-13

    Monopolar spindle 1 (Mps1) is essential for centrosome duplication, the spindle assembly checkpoint, and the maintenance of chromosomal instability. Mps1 is highly expressed in cancer cells, and its expression levels correlate with the histological grades of cancers. Thus, selective Mps1 inhibitors offer an attractive opportunity for the development of novel cancer therapies. To design novel Mps1 inhibitors, we utilized the pan-kinase inhibitor anthrapyrazolone (4, SP600125) and its crystal structure bound to JNK1. Our design efforts led to the identification of indazole-based lead 6 with an Mps1 IC50 value of 498 nM. Optimization of the 3- and 6-positions on the indazole core of 6 resulted in 23c with improved Mps1 activity (IC50 = 3.06 nM). Finally, application of structure-based design using the X-ray structure of 23d bound to Mps1 culminated in the discovery of 32a and 32b with improved potency for cellular Mps1 and A549 lung cancer cells. Moreover, 32a and 32b exhibited reasonable selectivities over 120 and 166 kinases, respectively.

  18. Action growth of charged black holes with a single horizon

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Sasaki, Misao; Wang, Shao-Jiang

    2017-06-01

    According to the conjecture "complexity equals action," the complexity of a holographic state is equal to the action of a Wheeler-DeWitt (WDW) patch of black holes in anti-de Sitter space. In this paper we calculate the action growth of charged black holes with a single horizon, paying attention to the contribution from a spacelike singularity inside the horizon. We consider two kinds of such charged black holes: one is a charged dilaton black hole, and the other is a Born-Infeld black hole with β²Q² < 1/4. In both cases, although an electric charge appears in the black hole solutions, the inner horizon is absent; instead a spacelike singularity appears inside the horizon. We find that the action growth of the WDW patch of the charged black hole is finite and satisfies the Lloyd bound. As a check, we also calculate the action growth of a charged black hole with a phantom Maxwell field. In this case, although the contributions from the bulk integral and the spacelike singularity are individually divergent, these two divergences just cancel each other and a finite action growth is obtained. But in this case, the Lloyd bound is violated as expected.

  19. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows the strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.

  20. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
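
    The following Python sketch illustrates the flavor of the technique on a toy linear model with synthetic data. The null distribution of the sup-statistic is approximated here by perturbing the residuals with standard normal multipliers, a simplified stand-in for the zero-mean Gaussian-process simulation described above:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy data: fit a straight line to data generated with slight curvature,
        # then check the functional form via cumulative residuals.
        n = 300
        x = rng.uniform(0, 1, n)
        y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.2, n)

        X = np.column_stack([np.ones(n), x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta

        order = np.argsort(x)        # cumulate residuals over the covariate axis
        def sup_cumsum(r):
            return np.max(np.abs(np.cumsum(r[order]))) / np.sqrt(n)

        observed = sup_cumsum(resid)

        # Approximate the null distribution with normal-multiplier perturbations.
        sims = [sup_cumsum(resid * rng.normal(0, 1, n)) for _ in range(1000)]
        p_value = np.mean([s >= observed for s in sims])
        print(f"sup |cumulative residual| = {observed:.3f}, p ~ {p_value:.3f}")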

  1. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
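
    A toy Python illustration of the general idea (not the authors' algorithm): the only unknown parameter is the bias of a coin, the "experimentally observed fact" is P(heads) = 0.7, the statistical check is a plain Monte Carlo estimate, and simulated annealing searches the parameter space:

        import math, random

        random.seed(0)
        TARGET = 0.7                     # observed fact to be matched

        def estimate_property(p, samples=2000):
            """Statistical check: estimate P(heads) by repeated simulation."""
            return sum(random.random() < p for _ in range(samples)) / samples

        def score(p):
            """Distance between the simulated property and the observation."""
            return abs(estimate_property(p) - TARGET)

        # Simulated annealing over the single parameter.
        current, best = 0.5, 0.5
        temp = 1.0
        for step in range(200):
            candidate = min(1.0, max(0.0, current + random.gauss(0, 0.05)))
            delta = score(candidate) - score(current)   # noisy, as in any statistical check
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                if score(current) < score(best):
                    best = current
            temp *= 0.98                 # geometric cooling schedule

        print(f"recovered parameter ~ {best:.3f} (true value 0.7)")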

  2. Thermal Destruction Of CB Contaminants Bound On Building ...

    EPA Pesticide Factsheets

    Symposium Paper. An experimental and theoretical program has been initiated by the U.S. EPA to investigate issues of chemical/biological agent destruction in incineration systems when the agent in question is bound on common porous building interior materials. This program includes 3-dimensional computational fluid dynamics modeling with matrix-bound agent destruction kinetics, bench-scale experiments to determine agent destruction kinetics while bound on various matrices, and pilot-scale experiments to scale up the bench-scale experiments to a more practical scale. Finally, model predictions are made to predict agent destruction and combustion conditions in two full-scale incineration systems that are typical of modern combustor design.

  3. Conformational phases of membrane bound cytoskeletal filaments

    NASA Astrophysics Data System (ADS)

    Quint, David A.; Grason, Gregory; Gopinathan, Ajay

    2013-03-01

    Membrane-bound cytoskeletal filaments found in living cells are employed to carry out many types of activities, including cellular division, rigidity and transport. When these biopolymers are bound to a membrane surface they may take on highly non-trivial conformations compared to when they are not bound. This leads to the natural question: what are the important interactions that drive these polymers to particular conformations when they are bound to a surface? Assuming that there are binding domains along the polymer which follow a periodic helical structure set by the natural monomeric handedness, these bound conformations must arise from the interplay of the intrinsic monomeric helicity and membrane binding. To probe this question, we study a continuous model of an elastic filament with intrinsic helicity and map out the conformational phases of this filament for various mechanical and structural parameters in our model, such as elastic stiffness and intrinsic twist of the filament. Our model allows us to gain insight into the possible mechanisms which drive real biopolymers such as actin and tubulin in eukaryotes and their prokaryotic cousins MreB and FtsZ to take on their functional conformations within living cells.

  4. The global historical and future economic loss and cost of earthquakes during the production of adaptive worldwide economic fragility functions

    NASA Astrophysics Data System (ADS)

    Daniell, James; Wenzel, Friedemann

    2014-05-01

    Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of the consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers at country and sub-country level have been produced for population, GDP per capita, and net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground-motion costs and secondary-effects costs. The shaking costs were further divided into gross-capital-stock-related and GDP-related costs for each administrative unit and intensity bound couplet. c) Costs were then estimated via regression, based on the optimisation of the function in terms of costs vs. gross capital stock and costs vs. GDP. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of macroseismic intensity, capital stock estimate, GDP estimate, year and the combined seismic building index (a created combination of the global seismic code index, building practice factor, building age and infrastructure vulnerability). The analysis provided three key results: a) The economic fragility functions produced from the 1900-2008 events showed very good correlation with the economic loss and cost of earthquakes from 2009-2013, in real time. This methodology has been extended to other natural disaster types (typhoon, flood, drought). b) The reanalysis of historical earthquake events made it possible to check associated historical losses and costs against the expected exposure in terms of intensities. The 1939 Chillan, 1948 Turkmenistan, 1950 Iran, 1972 Managua, 1980 Western Nepal and 1992 Erzincan earthquake events were seen as huge outliers compared with the modelled capital stock and GDP, and thus additional studies were undertaken to check the original loss results. c) A worldwide GIS layer database of capital stock (gross and net), GDP, infrastructure age and economic indices over the period 1900-2013 has been created in conjunction with the CATDAT database in order to define correct economic losses and costs.

  5. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which are often concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.

  6. Model Checking Degrees of Belief in a System of Agents

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Primero, Giuseppe; Rungta, Neha

    2014-01-01

    Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of belief do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.

  7. Test and Evaluation Report of the IVAC (Trademark) Vital Check Monitor Model 4000AEE

    DTIC Science & Technology

    1992-02-01

    USAARL Report No. 92-14: Test and Evaluation Report of the IVAC® Vital Check Monitor Model 4000AEE. ... does not constitute an official Department of the Army endorsement or approval of the use of such commercial items. Reviewed: Dennis F. Shanahan, LTC, MC. ... (to 12.4 GHz) was scanned for emissions. The IVAC® Model 4000AEE was operated with both ac and battery power. 2.10.3.2 The radiated susceptibility...

  8. Automated Verification of Specifications with Typestates and Access Permissions

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Catano, Nestor

    2011-01-01

    We propose an approach to formally verify Plural specifications, based on access permissions and typestates, by model checking automatically generated abstract state machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, will provide model checking support for the Plural tool, which at present supports only program verification via data flow analysis (DFA).

  9. Robust stability for stochastic bidirectional associative memory neural networks with time delays

    NASA Astrophysics Data System (ADS)

    Shu, H. S.; Lv, Z. W.; Wei, G. L.

    2008-02-01

    In this paper, the asymptotic stability is considered for a class of uncertain stochastic bidirectional associative memory neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the uncertainties are norm-bounded and enter into all network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov-Krasovskii functional and conducting stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be easily checked with the Matlab LMI toolbox. A numerical example is given to demonstrate the usefulness of the proposed criteria.
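
    As a flavor of what such an LMI feasibility check looks like in code, here is a much simpler cousin of the delay-dependent criteria above, sketched in Python with cvxpy (the paper itself uses the Matlab LMI toolbox): a linear system dx/dt = Ax is asymptotically stable iff some P > 0 satisfies A^T P + P A < 0.

        import cvxpy as cp
        import numpy as np

        # Example system matrix (illustrative values).
        A = np.array([[-1.0, 0.5],
                      [-0.3, -2.0]])

        P = cp.Variable((2, 2), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(2),                      # P positive definite
                       A.T @ P + P @ A << -eps * np.eye(2)]       # Lyapunov inequality
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()
        print("LMI feasible (system stable):", problem.status == cp.OPTIMAL)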

  10. Data mining the PDB for glyco-related data.

    PubMed

    Lütteke, Thomas; von der Lieth, Claus W

    2009-01-01

    The 3D structural data of glycoprotein or protein-carbohydrate complexes that are found in the Protein Data Bank (PDB) are an interesting data source for glycobiologists. Unfortunately, carbohydrate components are difficult to find with the means provided by the PDB. The GLYCOSCIENCES.de internet portal offers a variety of tools and databases to locate and analyze these structures. This chapter describes how to find PDB entries that feature a specific carbohydrate structure and how to locate carbohydrate residues in a 3D structure file and to check their consistency. In addition to this, methods to statistically analyze torsion angles and the abundance of amino acids both in the neighborhood of glycosylation sites and in the spatial vicinity of non-covalently bound carbohydrate chains are summarized.

  11. Two-polariton bound states in the Jaynes-Cummings-Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Max T. C.; Law, C. K.

    2011-05-15

    We examine the eigenstates of the one-dimensional Jaynes-Cummings-Hubbard model in the two-excitation subspace. We discover that two-excitation bound states emerge when the ratio of vacuum Rabi frequency to the tunneling rate between cavities exceeds a critical value. We determine the critical value as a function of the quasimomentum quantum number, and indicate that the bound states carry a strong correlation in which the two polaritons appear to be spatially confined together.

  12. Effects of soft interactions and bound mobility on diffusion in crowded environments: a model of sticky and slippery obstacles

    NASA Astrophysics Data System (ADS)

    Stefferson, Michael W.; Norris, Samantha L.; Vernerey, Franck J.; Betterton, Meredith D.; E Hough, Loren

    2017-08-01

    Crowded environments modify the diffusion of macromolecules, generally slowing their movement and inducing transient anomalous subdiffusion. The presence of obstacles also modifies the kinetics and equilibrium behavior of tracers. While previous theoretical studies of particle diffusion have typically assumed either impenetrable obstacles or binding interactions that immobilize the particle, in many cellular contexts bound particles remain mobile. Examples include membrane proteins or lipids with some entry and diffusion within lipid domains and proteins that can enter into membraneless organelles or compartments such as the nucleolus. Using a lattice model, we studied the diffusive movement of tracer particles which bind to soft obstacles, allowing tracers and obstacles to occupy the same lattice site. For sticky obstacles, bound tracer particles are immobile, while for slippery obstacles, bound tracers can hop without penalty to adjacent obstacles. In both models, binding significantly alters tracer motion. The type and degree of motion while bound is a key determinant of the tracer mobility: slippery obstacles can allow nearly unhindered diffusion, even at high obstacle filling fraction. To mimic compartmentalization in a cell, we examined how obstacle size and a range of bound diffusion coefficients affect tracer dynamics. The behavior of the model is similar in two and three spatial dimensions. Our work has implications for protein movement and interactions within cells.
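
    A drastically simplified Python sketch of such a lattice model (lattice size, filling fraction and rates are invented for illustration): tracers bind whenever they sit on an obstacle site; while bound, "sticky" tracers cannot move at all, whereas "slippery" tracers may still hop, but only onto neighboring obstacle sites.

        import numpy as np

        rng = np.random.default_rng(2)
        L, n_obstacles, n_steps = 50, 600, 400
        obstacles = set(map(tuple, rng.integers(0, L, (n_obstacles, 2))))
        MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def mean_squared_displacement(mode, n_tracers=200):
            disp = np.zeros(n_tracers)
            for i in range(n_tracers):
                x = y = 0                                # unwrapped tracer position
                for _ in range(n_steps):
                    bound = (x % L, y % L) in obstacles  # obstacle lattice is periodic
                    unbinds = bound and rng.random() < 0.1   # assumed unbinding rate
                    dx, dy = MOVES[rng.integers(4)]
                    nx, ny = x + dx, y + dy
                    if bound and not unbinds:
                        if mode == "sticky":
                            continue                     # bound sticky tracers are immobile
                        if (nx % L, ny % L) not in obstacles:
                            continue                     # slippery hops stay on obstacles
                    x, y = nx, ny
                disp[i] = x * x + y * y
            return disp.mean()

        for mode in ("sticky", "slippery"):
            print(mode, "MSD:", round(mean_squared_displacement(mode), 1))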

  13. A Feasibility Study of Nonlinear Spectroscopic Measurement of Magnetic Nanoparticles Targeted to Cancer Cells.

    PubMed

    Ficko, Bradley W; NDong, Christian; Giacometti, Paolo; Griswold, Karl E; Diamond, Solomon G

    2017-05-01

    Magnetic nanoparticles (MNPs) are an emerging platform for targeted diagnostics in cancer. An important component needed for translation of MNPs is the detection and quantification of targeted MNPs bound to tumor cells. This study explores the feasibility of a multifrequency nonlinear magnetic spectroscopic method that uses excitation and pickup coils and is capable of discriminating between quantities of bound and unbound MNPs in 0.5 ml samples of KB and Igrov human cancer cell lines. The method is tested over a range of five concentrations of MNPs from 0 to 80 μg/ml and five concentrations of cells from 50 to 400 000 per ml. A linear model applied to the magnetic spectroscopy data was able to simultaneously measure bound and unbound MNPs, with agreement between the model fit and lab assay measurements (p < 0.001). The detection limit of the presented method for bound and unbound MNPs was < 2 μg of iron in a 0.5 ml sample. The linear model parameters used to determine the quantities of bound and unbound nanoparticles in KB cells were also used to measure the bound and unbound MNPs in the Igrov cell line, and vice versa. Nonlinear spectroscopic measurement of MNPs may be a useful method for studying targeted MNPs in oncology. Determining the quantity of bound and unbound MNPs in an unknown sample using a linear model represents an exciting opportunity to translate multifrequency nonlinear spectroscopy methods to in vivo applications where MNPs could be targeted to cancer cells.

  14. Randomness in the network inhibits cooperation based on the bounded rational collective altruistic decision

    NASA Astrophysics Data System (ADS)

    Ohdaira, Tetsushi

    2014-07-01

    Previous studies discussing cooperation employ the best decision, in which every player knows all information regarding the payoff matrix and selects the strategy with the highest payoff. Therefore, they do not discuss cooperation based on an altruistic decision made with limited information (the bounded rational altruistic decision). In addition, they do not cover the case where every player can submit his/her strategy several times in a match of the game. This paper is based on Ohdaira's reconsideration of the bounded rational altruistic decision, and also employs the framework of the prisoner's dilemma game (PDG) with sequential strategy. The distinction between this study and Ohdaira's reconsideration is that the former covers a model of multiple groups, while the latter deals with a model of only two groups. Ohdaira's reconsideration shows that the bounded rational altruistic decision facilitates much more cooperation in the PDG with sequential strategy than Ohdaira and Terano's bounded rational second-best decision does. However, the detail of cooperation of multiple groups based on the bounded rational altruistic decision has not been resolved yet. This study, therefore, shows how randomness in a network composed of multiple groups affects the increase of the average frequency of mutual cooperation (cooperation between groups) based on the bounded rational altruistic decision of multiple groups. We also discuss the results of the model in comparison with related studies which employ the best decision.

  15. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.

  16. Utilisation of preventative health check-ups in the UK: findings from individual-level repeated cross-sectional data from 1992 to 2008

    PubMed Central

    Labeit, Alexander; Peinemann, Frank; Baker, Richard

    2013-01-01

    Objectives To analyse and compare the determinants of screening uptake for different National Health Service (NHS) health check-ups in the UK. Design Individual-level analysis of repeated cross-sectional surveys with balanced panel data. Setting The UK. Participants Individuals taking part in the British Household Panel Survey (BHPS), 1992–2008. Outcome measure Uptake of NHS health check-ups for cervical cancer screening, breast cancer screening, blood pressure checks, cholesterol tests, dental screening and eyesight tests. Methods Dynamic panel data models (random effects panel probit with initial conditions). Results Having had a health check-up 1 year before, and previously in accordance with the recommended schedule, was associated with higher uptake of health check-ups. Individuals who visited a general practitioner (GP) had a significantly higher uptake in 5 of the 6 health check-ups. Uptake was highest in the recommended age group for breast and cervical cancer screening. For all health check-ups, age had a non-linear relationship. Lower self-rated health status was associated with increased uptake of blood pressure checks and cholesterol tests; smoking was associated with decreased uptake of 4 health check-ups. The effects of socioeconomic variables differed for the different health check-ups. Ethnicity did not have a significant influence on any health check-up. Permanent household income had an influence only on eyesight tests and dental screening. Conclusions Common determinants for having health check-ups are age, screening history and a GP visit. Policy interventions to increase uptake should consider the central role of the GP in promoting screening examinations and in preserving a high level of uptake. Possible economic barriers to access for prevention exist for dental screening and eyesight tests, and could be a target for policy intervention. Trial registration This observational study was not registered. PMID:24366576

  17. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    PubMed

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between pairs of each approach. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power.
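
    The modeling pipeline described above is straightforward to reproduce on synthetic data. The Python sketch below (using scikit-learn; the dataset here is artificial, not the health check-up data) trains the three classifiers on an undersampled training set and compares them by AUC:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the check-up data (imbalanced binary outcome).
        X, y = make_classification(n_samples=20000, n_features=20, weights=[0.9],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Undersample the majority class in the training set to balance classes.
        pos = np.where(y_tr == 1)[0]
        neg = np.random.default_rng(0).choice(np.where(y_tr == 0)[0], pos.size,
                                              replace=False)
        idx = np.concatenate([pos, neg])
        X_bal, y_bal = X_tr[idx], y_tr[idx]

        for name, clf in [("GBDT", GradientBoostingClassifier()),
                          ("RF", RandomForestClassifier()),
                          ("LR", LogisticRegression(max_iter=1000))]:
            clf.fit(X_bal, y_bal)
            auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")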

  18. Reopen parameter regions in two-Higgs doublet models

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2018-01-01

    The stability of the electroweak potential is a very important constraint for models of new physics. At the moment, it is standard for two-Higgs-doublet models (THDM) and for singlet or triplet extensions of the standard model to perform these checks at tree level. However, these models are often studied in the presence of very large couplings. Therefore, it can be expected that radiative corrections to the potential are important. We study these effects using the example of the type-II THDM and find that loop corrections can revive more than 50% of the phenomenologically viable points which are ruled out by the tree-level vacuum stability checks. Similar effects are expected for other extensions of the standard model.

  19. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios, including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.

  20. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space, using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, so an upper bound on the Higgs mass translates into upper bounds on the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  1. 10 CFR 35.2642 - Records of periodic spot-checks for teletherapy units.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....2642 Section 35.2642 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Records... must include— (1) The date of the spot-check; (2) The manufacturer's name, model number, and serial... device; (6) The determined accuracy of each distance measuring and localization device; (7) The...

  2. 10 CFR 35.2642 - Records of periodic spot-checks for teletherapy units.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....2642 Section 35.2642 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Records... must include— (1) The date of the spot-check; (2) The manufacturer's name, model number, and serial... device; (6) The determined accuracy of each distance measuring and localization device; (7) The...

  3. 12 CFR Appendix A to Part 205 - Model Disclosure Clauses and Forms

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... your checking account using information from your check to: (i) Pay for purchases. (ii) Pay bills. (3... disclose information to third parties about your account or the transfers you make: (i) Where it is...) Disclosure by government agencies of information about obtaining account balances and account histories...

  4. Using computer models to design gully erosion control structures for humid northern Ethiopia

    USDA-ARS?s Scientific Manuscript database

    Classic gully erosion control measures such as check dams have been unsuccessful in halting gully formation and growth in the humid northern Ethiopian highlands. Gullies are typically formed in vertisols and flow often bypasses the check dams as elevated groundwater tables make gully banks unstable....

  5. Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy

    ERIC Educational Resources Information Center

    Bolsinova, Maria; Tijmstra, Jesper

    2016-01-01

    Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…

  6. Building Program Verifiers from Compilers and Theorem Provers

    DTIC Science & Technology

    2015-05-14

    Checking with SMT — UFO: an LLVM-based front-end (partially reused in SeaHorn) that combines abstract interpretation with interpolation-based model checking. ... Counter-examples are long; it is hard to determine (from main) which assertion is relevant. (Gurfinkel, 2015)

  7. Exploration of Effective Persuasive Strategies Used in Resisting Product Advertising: A Case Study of Adult Health Check-Ups.

    PubMed

    Tien, Han-Kuang; Chung, Wen

    2018-05-10

    This research addressed adults' health check-ups through the lens of Role Transportation Theory. This theory is applied to narrative advertising that lures adults into seeking health check-ups by causing audiences to empathize with the advertisement's character. This study explored the persuasive mechanism behind narrative advertising and reinforced the Protection Motivation Theory model. We added two key perturbation variables: optimistic bias and truth avoidance. To test the hypotheses, we performed two experiments. In Experiment 1, we recruited 77 respondents online for testing. We used analyses of variance to verify the effectiveness of narrative and informative advertising. Then, in Experiment 2, we recruited 228 respondents to perform offline physical experiments and conducted a path analysis through structural equation modelling. The findings showed that narrative advertising positively impacted participants' disease prevention intentions. The use of Role Transportation Theory in advertising enables the audience to be emotionally connected with the character, which enhances persuasiveness. In Experiment 2, we found that the degree of role transference can interfere with optimistic bias, improve perceived health risk, and promote behavioral intentions for health check-ups. Furthermore, truth avoidance can interfere with perceived health risks, which, in turn, reduces behavioral intentions for health check-ups.

  8. RICE bounds on cosmogenic neutrino fluxes and interactions

    NASA Astrophysics Data System (ADS)

    Hussain, Shahid

    2005-04-01

    Assuming standard model interactions we calculate shower rates induced by cosmogenic neutrinos in ice, and we bound the cosmogenic neutrino fluxes using RICE 2000-2004 results. Next we assume new interactions due to extra-dimensional, low-scale gravity (i.e., black hole production and decay; graviton-mediated deep inelastic scattering) and calculate enhanced shower rates induced by cosmogenic neutrinos in ice. With the help of RICE 2000-2004 results, we survey bounds on low-scale gravity parameters for a range of cosmogenic neutrino flux models.

  9. On bound-states of the Gross Neveu model with massive fundamental fermions

    NASA Astrophysics Data System (ADS)

    Frishman, Yitzhak; Sonnenschein, Jacob

    2018-01-01

    In the search for QFTs that admit bound states, we reinvestigate the two-dimensional Gross-Neveu model, but with massive fermions. By computing the self-energy of the auxiliary bound-state field and the effective potential, we show that there are no bound states around the lowest minimum, but there is a meta-stable bound state around the other, local, minimum. The latter decays by tunneling. We determine the dependence of its lifetime on the fermion mass and coupling constant.

  10. Bounds on quantum confinement effects in metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Blackman, G. Neal; Genov, Dentcho A.

    2018-03-01

    Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.

  11. Tri-critical behavior of the Blume-Emery-Griffiths model on a Kagomé lattice: Effective-field theory and Rigorous bounds

    NASA Astrophysics Data System (ADS)

    Santos, Jander P.; Sá Barreto, F. C.

    2016-01-01

    Spin correlation identities for the Blume-Emery-Griffiths model on the Kagomé lattice are derived and, combined with rigorous correlation inequalities, lead to upper bounds on the critical temperature. From the spin correlation identities, the mean-field approximation and the effective-field approximation results for the magnetization, the critical frontiers and the tricritical points are obtained. The rigorous upper bounds on the critical temperature improve over those of effective-field type theories.

  12. Ambient air particulates and particulate-bound mercury Hg(p) concentrations: dry deposition study over a Traffic, Airport, Park (T.A.P.) areas during years of 2011-2012.

    PubMed

    Fang, Guor-Cheng; Lin, Yen-Heng; Zheng, Yu-Cheng

    2016-02-01

    The main purpose of this study was to monitor ambient air particles and particulate-bound mercury Hg(p) in total suspended particulate (TSP) concentrations and dry deposition at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling sites during the daytime and nighttime, from 2011 to 2012. In addition, the calculated/measured dry deposition flux ratios of ambient air particles and particulate-bound mercury Hg(p) were also studied with the Baklanov & Sorensen and Williams models. For a particle size of 10 μm, the Baklanov & Sorensen model yielded better predictions of dry deposition of ambient air particulates and particulate-bound mercury Hg(p) at the Hung Kuang (Traffic), Taichung airport and Westing Park sampling sites during the daytime and nighttime sampling periods. However, for particulates with sizes of 20-23 μm, the results obtained in the study reveal that the Williams model provided better predictions for ambient air particulates and particulate-bound mercury Hg(p) at all sampling sites in this study.

  13. Are stock prices too volatile to be justified by the dividend discount model?

    NASA Astrophysics Data System (ADS)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

    This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.
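
    The mechanics of a variance-bound test are easy to state in code. The Python sketch below (with a simulated i.i.d. dividend stream and an assumed terminal value, purely for illustration) constructs the ex post rational price p*_t and checks that the variance of the rational market price does not exceed that of p*:

        import numpy as np

        rng = np.random.default_rng(3)
        T, r = 500, 0.05
        dividends = 1.0 + 0.1 * rng.standard_normal(T)   # i.i.d. dividends

        # Ex post rational price: p*_t = (d_{t+1} + p*_{t+1}) / (1 + r).
        p_star = np.empty(T)
        p_star[-1] = dividends.mean() / r                # terminal value assumption
        for t in range(T - 2, -1, -1):
            p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1 + r)

        # With i.i.d. dividends the rational market price is the constant mean/r,
        # so its variance is zero and the bound var(p) <= var(p*) holds trivially.
        p_market = np.full(T, dividends.mean() / r)
        print(f"var(p) = {p_market.var():.4f} <= var(p*) = {p_star.var():.4f}")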

  14. Finite-connectivity spin-glass phase diagrams and low-density parity check codes.

    PubMed

    Migliorini, Gabriele; Saad, David

    2006-02-01

    We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one step replica symmetry breaking theory (RSB), extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate , an RS critical transition point at while the critical RSB transition point is located at , to be compared with the corresponding Shannon bound . For the binary symmetric channel we show that the low temperature reentrant behavior of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications to improve the performance of the state-of-the-art error correcting codes are discussed.

  15. Bounds and inequalities relating h-index, g-index, e-index and generalized impact factor: an improvement over existing models.

    PubMed

    Abbas, Ash Mohammad

    2012-01-01

    In this paper, we describe some bounds and inequalities relating the h-index, g-index, e-index, and generalized impact factor. We derive the bounds and inequalities relating these indexing parameters from their basic definitions, without assuming that any of them follows a continuous model. We verify the theorems using citation data for five Price Medalists. We observe that the lower bound for the h-index given by Theorem 2, [formula: see text], g ≥ 1, is more accurate than the Schubert-Glänzel relation h ∝ C^(2/3)P^(-1/3) for a proportionality constant of 1, where C is the number of citations and P is the number of papers. Also, the values of the h-index obtained using Theorem 2 outperform those obtained using the Egghe-Liang-Rousseau power-law model for the given citation data of the Price Medalists. Further, we computed the values of the upper bound on the g-index given by Theorem 3, g ≤ (h + e), where e denotes the value of the e-index. We observe that the upper bound on the g-index given by Theorem 3 is reasonably tight for the given citation record of the Price Medalists.
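
    Since all three indices are computed directly from a sorted citation vector, the bounds above are easy to check numerically. The sketch below uses the standard definitions of the h-, g-, and e-indices; the citation counts are hypothetical, and the Theorem 2 formula itself is not reproduced because it is elided in the record.

        def h_index(cites):
            """Largest h such that h papers have at least h citations each."""
            cites = sorted(cites, reverse=True)
            return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

        def g_index(cites):
            """Largest g such that the top g papers total at least g^2 citations."""
            cites = sorted(cites, reverse=True)
            total, g = 0, 0
            for i, c in enumerate(cites, start=1):
                total += c
                if total >= i * i:
                    g = i
            return g

        def e_index(cites):
            """e^2 = citations in the h-core in excess of h^2."""
            cites = sorted(cites, reverse=True)
            h = h_index(cites)
            return (sum(cites[:h]) - h * h) ** 0.5

        record = [45, 33, 30, 12, 9, 8, 4, 2, 1, 0]   # hypothetical citation counts
        h, g, e = h_index(record), g_index(record), e_index(record)
        print(h, g, e, g <= h + e)   # last value checks the Theorem 3 bound g <= h + e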

  16. A rule-based approach to model checking of UML state machines

    NASA Astrophysics Data System (ADS)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  17. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  18. Body checking is associated with weight- and body-related shame and weight- and body-related guilt among men and women.

    PubMed

    Solomon-Krakus, Shauna; Sabiston, Catherine M

    2017-12-01

    This study examined whether body checking was a correlate of weight- and body-related shame and guilt for men and women. Participants were 537 adults (386 women) between the ages of 17 and 74 (M age = 28.29, SD = 14.63). Preliminary analyses showed women reported significantly more body checking (p < .001), weight- and body-related shame (p < .001), and weight- and body-related guilt (p < .001) than men. In sex-stratified hierarchical linear regression models, body checking was significantly and positively associated with weight- and body-related shame (R² = .29 and .43, p < .001) and weight- and body-related guilt (R² = .34 and .45, p < .001) for men and women, respectively. Based on these findings, body checking is associated with negative weight- and body-related self-conscious emotions. Intervention and prevention efforts aimed at reducing negative weight- and body-related self-conscious emotions should consider focusing on body checking for adult men and women. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Do Reuss and Voigt Bounds Really Bound in High-Pressure Rheology Experiments?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen,J.; Li, L.; Yu, T.

    2006-01-01

    Energy-dispersive synchrotron x-ray diffraction is carried out to measure differential lattice strains in polycrystalline Fe₂SiO₄ (fayalite) and MgO samples using a multi-element solid state detector during high-pressure deformation. The theory of elastic modeling with Reuss (iso-stress) and Voigt (iso-strain) bounds is used to evaluate the aggregate stress and the weight parameter α (0 ≤ α ≤ 1) of the two bounds. Results under the elastic assumption quantitatively demonstrate that a highly stressed sample in high-pressure experiments reasonably approximates an iso-stress state. However, when the sample is plastically deformed, the Reuss and Voigt bounds are no longer valid (α exceeds 1). Instead, if the plastic slip systems of the sample are known (e.g., in the case of MgO), the aggregate property can be modeled using a visco-plastic self-consistent theory.
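
    For a multi-phase aggregate, the two bounds reduce to a volume-weighted arithmetic mean (Voigt) and a harmonic mean (Reuss) of the phase moduli, with the weight parameter α interpolating between them. A minimal sketch, using hypothetical two-phase moduli:

        import numpy as np

        def voigt_reuss(moduli, fractions):
            """Voigt (iso-strain) and Reuss (iso-stress) bounds on the
            effective modulus of an aggregate of phases."""
            m = np.asarray(moduli, dtype=float)
            f = np.asarray(fractions, dtype=float)
            voigt = np.sum(f * m)            # arithmetic mean: upper bound
            reuss = 1.0 / np.sum(f / m)      # harmonic mean: lower bound
            return voigt, reuss

        # Hypothetical two-phase aggregate (moduli in GPa, equal volume fractions).
        v, r = voigt_reuss([130.0, 51.0], [0.5, 0.5])
        alpha = 0.3                          # weight parameter, 0 <= alpha <= 1
        effective = alpha * v + (1.0 - alpha) * r
        print(r, effective, v)               # Reuss <= effective <= Voigt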

  20. Systematic assignment of Feshbach resonances via an asymptotic bound state model

    NASA Astrophysics Data System (ADS)

    Goosen, Maikel; Kokkelmans, Servaas

    2008-05-01

    We present an Asymptotic Bound state Model (ABM), which is useful for predicting Feshbach resonances. The model utilizes asymptotic properties of the interaction potentials to represent coupled molecular wavefunctions. The bound states of this system give rise to Feshbach resonances, localized at the magnetic fields where these bound states intersect the scattering threshold. This model was very successful in assigning measured Feshbach resonances in an ultracold mixture of ^6Li and ^40K atoms [E. Wille, F. M. Spiegelhalder, G. Kerner, D. Naik, A. Trenkwalder, G. Hendl, F. Schreck, R. Grimm, T. G. Tiecke, J. T. M. Walraven, S. J. J. M. F. Kokkelmans, E. Tiesinga, and P. S. Julienne, arXiv:0711.2916]. For this system, the accuracy of the determined scattering lengths is comparable to full coupled-channels results. However, it was not possible to predict the widths of the resonances. We discuss how an incorporation of threshold effects will improve the model, and we apply it to a mixture of ^87Rb and ^133Cs atoms, where Feshbach resonances have recently been measured.

  1. Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model

    NASA Astrophysics Data System (ADS)

    Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin

    In Aubry-Mather theory for monotone twist maps or for one-dimensional Frenkel-Kontorova (FK) model with nearest neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.

  2. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both modeling and computational problems. Techniques are therefore needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs have been written to help the reliability analyst produce models of complex systems. Trimming is easy to implement, and its error bound is easy to compute; hence, the method lends itself to inclusion in an automatic model generator.

  3. Efficiency and its bounds for a quantum Einstein engine at maximum power.

    PubMed

    Yan, H; Guo, Hao

    2012-11-01

    We study a quantum thermal engine model for which the heat transfer law is determined by Einstein's theory of radiation. The working substance of the quantum engine is assumed to be a two-level quantum system of which the constituent particles obey Maxwell-Boltzmann (MB), Fermi-Dirac (FD), or Bose-Einstein (BE) distributions, respectively, at equilibrium. The thermal efficiency and its bounds at maximum power of these models are derived and discussed in the long and short thermal contact time limits. The similarity and difference between these models are discussed. We also compare the efficiency bounds of this quantum thermal engine to those of its classical counterpart.

  4. Efficiency and its bounds for thermal engines at maximum power using Newton's law of cooling.

    PubMed

    Yan, H; Guo, Hao

    2012-01-01

    We study a thermal engine model in which Newton's cooling law is obeyed during heat transfer processes. The thermal efficiency and its bounds at maximum output power are derived and discussed. This model, though quite simple, can be applied not only to Carnot engines but also to four other types of engines. In the long thermal contact time limit, new bounds, tighter than those previously known, are obtained. In this case, the model can simulate Otto, Joule-Brayton, Diesel, and Atkinson engines. In the short contact time limit, which corresponds to the Carnot cycle, the same efficiency bounds as those of Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] are derived. In both cases, the thermal efficiency decreases as the ratio between the heat capacities of the working medium during the heating and cooling stages increases. This might provide guidance for designing real engines. © 2012 American Physical Society
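
    In the short contact time (Carnot) limit the relevant quantities are elementary to evaluate. The sketch below compares the Curzon-Ahlborn efficiency at maximum power with the Carnot efficiency and the Esposito et al. bounds η_C/2 ≤ η* ≤ η_C/(2 − η_C) cited above; the reservoir temperatures are arbitrary illustrative values.

        import numpy as np

        Tc, Th = 300.0, 500.0
        eta_C = 1.0 - Tc / Th                  # Carnot efficiency
        eta_CA = 1.0 - np.sqrt(Tc / Th)        # efficiency at maximum power
        lower = eta_C / 2.0                    # Esposito et al. lower bound
        upper = eta_C / (2.0 - eta_C)          # Esposito et al. upper bound
        print(eta_C, eta_CA, lower <= eta_CA <= upper)   # 0.4, ~0.225, True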

  5. Information models of software productivity - Limits on productivity growth

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1992-01-01

    Research into generalized information-metric models of software process productivity establishes quantifiable behavior and theoretical bounds. The models establish a fundamental mathematical relationship between software productivity and the human capacity for information traffic, the software product yield (system size), information efficiency, and tool and process efficiencies. An upper bound is derived that quantifies average software productivity and the maximum rate at which it may grow. This bound reveals that ultimately, when tools, methodologies, and automated assistants have reached their maximum effective state, further improvement in productivity can only be achieved through increasing software reuse. The reuse advantage is shown not to increase faster than logarithmically in the number of reusable features available. The reuse bound is further shown to be somewhat dependent on the reuse policy: a general 'reuse everything' policy can lead to a somewhat slower productivity growth than a specialized reuse policy.

  6. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, albeit along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to UML/SysML models, aiming to answer the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as for model checking, or are not yet fully mature, as for robustness test-case extraction. In the case of model checking, we verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors (inevitably) help to improve the XMI standard interoperability situation. The robustness test-case extraction methodology produced interesting results in its early form, but it needs further systematization and consolidation in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvement and innovation projects became immediately apparent for both investigated approaches: circumventing current limitations in XMI interoperability on the one hand, and bringing test-case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation on the other.

  7. An experimental manipulation of responsibility in children: a test of the inflated responsibility model of obsessive-compulsive disorder.

    PubMed

    Reeves, J; Reynolds, S; Coker, S; Wilson, C

    2010-09-01

    The objective of this study was to investigate whether Salkovskis' (1985) inflated responsibility model of obsessive-compulsive disorder (OCD) applies to children. In an experimental design, 81 children aged 9-12 years were randomly allocated to three conditions: an inflated responsibility group, a moderate responsibility group, and a reduced responsibility group. In all groups children were asked to sort sweets according to whether or not they contained nuts. At baseline the groups did not differ on children's self-reported anxiety, depression, obsessive-compulsive symptoms, or inflated responsibility beliefs. The experimental manipulation successfully changed children's perceptions of responsibility. During the sorting task, the time taken to complete the task, checking behaviours, hesitations, and anxiety were recorded. There was a significant effect of responsibility level on the behavioural variables of time taken, hesitations, and checks; as perceived responsibility increased, children took longer to complete the task and checked and hesitated more often. There was no between-group difference in children's self-reported state anxiety. The results offer preliminary support for the link between inflated responsibility and increased checking behaviours in children and add to the small but growing literature suggesting that cognitive models of OCD may apply to children. (c) 2010 Elsevier Ltd. All rights reserved.

  8. 75 FR 52482 - Airworthiness Directives; PILATUS Aircraft Ltd. Model PC-7 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-26

    ..., check the airplane maintenance records to determine if the left and/or right aileron outboard bearing... an entry is found during the airplane maintenance records check required in paragraph (f)(1) of this...-0849; Directorate Identifier 2010-CE-043-AD] RIN 2120-AA64 Airworthiness Directives; PILATUS Aircraft...

  9. 77 FR 50644 - Airworthiness Directives; Cessna Airplane Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... airplanes that have P/N 1134104-1 or 1134104-5 A/C compressor motor installed; an aircraft logbook check for... following: (1) Inspect the number of hours on the A/C compressor hour meter; and (2) Check the aircraft.... Do the replacement following Cessna Aircraft Company Model 525 Maintenance Manual, Revision 23, dated...

  10. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

    The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification, and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that by utilizing design of experiments methodology in conjunction with current SPC practices, one can efficiently and more robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients, and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research is applicable to any wind tunnel check standard testing program.
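
    The tracking step described above amounts to charting each fitted coefficient over successive check-standard tests. A minimal sketch of an individuals (Shewhart I) chart with moving-range control limits, applied to a hypothetical coefficient series:

        import numpy as np

        # One fitted coefficient per check-standard test entry (hypothetical data).
        coef = np.array([1.02, 0.99, 1.01, 1.03, 0.98, 1.00, 1.04,
                         0.97, 1.01, 1.00, 1.02, 0.99])

        center = coef.mean()
        mr_bar = np.abs(np.diff(coef)).mean()   # average moving range
        ucl = center + 2.66 * mr_bar            # 3-sigma-equivalent limits
        lcl = center - 2.66 * mr_bar
        out = np.flatnonzero((coef > ucl) | (coef < lcl))
        print(center, lcl, ucl, out)            # flag out-of-control test entries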

  11. CCM-C,Collins checks the middeck experiment

    NASA Image and Video Library

    1999-07-24

    S93-E-5016 (23 July 1999) --- Astronaut Eileen M. Collins, mission commander, checks on an experiment on Columbia's middeck during Flight Day 1 activity. The experiment is called the Cell Culture Model, Configuration C. Objectives of it are to validate cell culture models for muscle, bone and endothelial cell biochemical and functional loss induced by microgravity stress; to evaluate cytoskeleton, metabolism, membrane integrity and protease activity in target cells; and to test tissue loss pharmaceuticals for efficacy. The photo was recorded with an electronic still camera (ESC).

  12. Turning Around along the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Lee, Jounghun; Yepes, Gustavo

    2016-12-01

    A bound violation designates a case in which the turnaround radius of a bound object exceeds the upper limit imposed by the spherical collapse model based on the standard ΛCDM paradigm. Given that the turnaround radius of a bound object is a stochastic quantity and that the spherical model overly simplifies the true gravitational collapse, which actually proceeds anisotropically along the cosmic web, the rarity of the occurrence of a bound violation may depend on the web environment. Assuming a Planck cosmology, we numerically construct the bound-zone peculiar velocity profiles along the cosmic web (filaments and sheets) around the isolated groups with virial mass M_v ≥ 3 × 10¹³ h⁻¹ M_⊙ identified in the Small MultiDark Planck simulations and determine the radial distances at which their peculiar velocities equal the Hubble expansion speed as the turnaround radii of the groups. It is found that although the average turnaround radii of the isolated groups are well below the spherical bound limit on all mass scales, bound violations are not forbidden for individual groups, and the cosmic web has the effect of reducing the rarity of the occurrence of a bound violation. Explaining that the spherical bound limit on the turnaround radius in fact represents the threshold distance up to which the intervention of the external gravitational field in the bound-zone peculiar velocity profiles around the nonisolated groups stays negligible, we discuss the possibility of using the threshold distance scale to constrain locally the equation of state of dark energy.

  13. Class Model Development Using Business Rules

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Gudas, Saulius

    New developments in the area of computer-aided system engineering (CASE) greatly improve processes of the information systems development life cycle (ISDLC). Much effort is put into quality improvement issues, but IS development projects still suffer from the poor quality of models produced during the system analysis and design cycles. To some degree, the quality of models that are developed using CASE tools can be assured using various automated model comparison and syntax checking procedures. It is also reasonable to check these models against business domain knowledge, but the domain knowledge stored in the repository of a CASE tool (the enterprise model) is insufficient (Gudas et al. 2004). Involvement of business domain experts in these processes is complicated, because non-IT people often find it difficult to understand models that were developed by IT professionals using some specific modeling language.

  14. The Incidence of Sixteenth Century Cosmic Models in Modern Texts

    NASA Astrophysics Data System (ADS)

    Maene, S. A.; Best, J. S.; Usher, P. D.

    1999-12-01

    In the sixteenth century, the bounded cosmological models of Copernicus (1543) and Tycho Brahe (1588), and the unbounded model of Thomas Digges (1576), vied with the bounded geocentric model of Ptolemy (c. 140 AD). The work of the philosopher Giordano Bruno in 1584 lent further support to the Digges model. Despite the eventual acceptance of the unbounded universe, analysis of over 100 modern introductory astronomy texts reveals that these early unbounded models are mentioned infrequently. The ratio of mentions of Digges' model to Copernicus' model has the surprisingly low value of R = 0.08. The philosophical speculation of Bruno receives mention more than twice as often (R = 0.17). The expectation that these early unbounded models warrant inclusion in astronomy texts is supported both by modern hindsight and by the literature of the time. In Shakespeare's "Hamlet" of c. 1601, Prince Hamlet suffers from two transformations. According to the cosmic allegorical model, one transformation changes the bounded geocentricism of Ptolemy to the bounded heliocentricism of Copernicus, while the other completes the change to Digges' model of the infinite universe of suns. This interpretation and the modern world view suggest that both transformations should receive equal mention and thus that the ratio R in introductory texts should be close to unity. This work was supported in part by the NASA West Virginia Space Grant Consortium.

  15. Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems.

    PubMed

    Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal

    2018-04-06

    Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
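
    Two of the estimators above are easy to reproduce on a point cloud. The sketch below computes a convex-hull volume (which the study found to consistently overestimate non-convex objects such as tree crowns) and a voxel volume (whose bias depends on point density and occlusion); the uniform-cube point cloud is a synthetic stand-in for TLS returns.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(1)
        pts = rng.uniform(0.0, 1.0, size=(5000, 3))   # synthetic unit-cube cloud

        hull_volume = ConvexHull(pts).volume          # volume of the convex hull

        res = 0.05                                    # voxel edge length
        occupied = np.unique((pts // res).astype(int), axis=0)
        voxel_volume = occupied.shape[0] * res ** 3   # count occupied cells
        print(hull_volume, voxel_volume)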

  17. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    PubMed

    Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto

    2013-01-01

    After being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional role in a biomolecular network. The influence of intrinsic and extrinsic noises on biomolecular networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle, and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting a possible functional role of bounded noises.
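
    A minimal sketch of the co-presence of the two noise sources: a Gillespie-style simulation of a birth-death process whose birth rate is modulated by a bounded Sine-Wiener noise ζ(t) = B sin(√(2/τ) W(t)). Treating the rates as constant between reaction events is a simplifying assumption of this sketch; the paper works with an exact master-equation formulation. All rate constants are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        B, tau = 0.5, 10.0       # noise amplitude (|zeta| <= B) and autocorrelation time
        k, gamma = 10.0, 0.1     # baseline birth rate, per-molecule death rate

        t, n, w = 0.0, 0, 0.0    # time, copy number, underlying Wiener state
        while t < 500.0:
            zeta = B * np.sin(np.sqrt(2.0 / tau) * w)   # bounded extrinsic noise
            birth = k * (1.0 + zeta)                    # modulated jump rate
            death = gamma * n
            total = birth + death
            dt = rng.exponential(1.0 / total)           # time to next reaction
            w += np.sqrt(dt) * rng.standard_normal()    # advance the Wiener path
            n += 1 if rng.random() * total < birth else -1
            t += dt
        print(n)   # one stochastic end state; repeat for densities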

  18. [Proposal and preliminary validation of a check-list for the assessment of occupational exposure to repetitive movements of the upper limbs].

    PubMed

    Colombini, D; Occhipinti, E; Cairoli, S; Baracco, A

    2000-01-01

    Over the last few years the authors developed and implemented a specific check-list for a "rapid" assessment of occupational exposure to repetitive movements and exertion of the upper limbs, after verifying the lack of such a tool that was also coherent with the latest data in the specialized literature. The check-list model and the relevant application procedures are presented and discussed. The check-list was applied by trained factory technicians in 46 different working tasks in which the OCRA method previously proposed by the authors was also applied by independent observers. Since 46 pairs of observations were available (OCRA index and check-list score), it was possible to verify, via parametric and nonparametric statistical tests, the level of association between the two variables and to find the best simple regression function (exponential in this case) for predicting the OCRA index from the check-list score. By means of this function, which was highly significant (R² = 0.98, p < 0.0000), the values of the check-list score that best corresponded to the critical values (for exposure assessment) of the OCRA index were identified, and correspondence values between the OCRA index and the check-list were then established with a view to classifying exposure levels. The "critical" check-list scores were established considering the need to obtain, in borderline cases, a potential overestimation of the exposure level. On the basis of practical application experience and the preliminary validation results, recommendations are made and the caution needed in the use of the check-list is emphasized.
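
    The regression step is straightforward to reproduce. The sketch below fits an exponential function from check-list score to OCRA index on hypothetical (score, index) pairs standing in for the 46 observations of the study.

        import numpy as np
        from scipy.optimize import curve_fit

        score = np.array([5, 8, 10, 12, 15, 18, 22, 25], dtype=float)
        ocra = np.array([1.1, 1.6, 2.0, 2.6, 3.8, 5.3, 8.9, 12.4])

        f = lambda x, a, b: a * np.exp(b * x)        # exponential regression model
        (a, b), _ = curve_fit(f, score, ocra, p0=(1.0, 0.1))

        resid = ocra - f(score, a, b)
        r2 = 1.0 - resid.var() / ocra.var()          # rough R^2 via variance ratio
        print(a, b, r2)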

  19. Socioeconomic differences in health check-ups and medically certified sickness absence: a 10-year follow-up among middle-aged municipal employees in Finland.

    PubMed

    Piha, Kustaa; Sumanen, Hilla; Lahelma, Eero; Rahkonen, Ossi

    2017-04-01

    There is contradictory evidence on the association between health check-ups and future morbidity. Among the general population, those with a high socioeconomic position participate more often in health check-ups. The main aims of this study were to analyse whether attendance at health check-ups is socioeconomically patterned and whether it affects sickness absence over a 10-year follow-up. This register-based follow-up study included municipal employees of the City of Helsinki. 13 037 employees were invited to an age-based health check-up during 2000-2002, with a 62% attendance rate. Education, occupational class, and individual income were used to measure socioeconomic position. Medically certified sickness absence of 4 days or more was measured and controlled for at baseline and used as an outcome over follow-up. The mean follow-up time was 7.5 years. Poisson regression was used. Men and employees with a lower socioeconomic position participated more actively in health check-ups. Among women, non-attendance at the health check-up predicted higher sickness absence during follow-up (relative risk = 1.26, 95% CI 1.17 to 1.37) in the fully adjusted model. Health check-ups were not effective in reducing socioeconomic differences in sickness absence. Age-based health check-ups reduced subsequent sickness absence and should be promoted. Attendance at health check-ups should be as high as possible. Contextual factors need to be taken into account when applying the results in interventions in other settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  20. Bayesian Estimation of Panel Data Fractional Response Models with Endogeneity: An Application to Standardized Test Rates

    ERIC Educational Resources Information Center

    Kessler, Lawrence M.

    2013-01-01

    In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…

  1. Structured Uncertainty Bound Determination From Data for Control and Performance Validation

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.

    2003-01-01

    This report attempts to document the broad scope of issues that must be satisfactorily resolved before one can expect to methodically obtain, with reasonable confidence, near-optimal robust closed-loop performance in physical applications. These include elements of signal processing, noise identification, system identification, model validation, and uncertainty modeling. Based on a recently developed methodology involving a parameterization of all model-validating uncertainty sets for a given linear fractional transformation (LFT) structure and noise allowance, a new software package, the Uncertainty Bound Identification (UBID) toolbox, which conveniently executes model validation tests and determines uncertainty bounds from data, has been designed and is currently available. This toolbox also serves to benchmark the current state of the art in uncertainty bound determination and in turn facilitates benchmarking of robust control technology. To help clarify the methodology and the use of the new software, two tutorial examples are provided. The first involves the uncertainty characterization of flexible structure dynamics, and the second involves a closed-loop performance validation of a ducted fan based on an uncertainty bound from data. These examples, along with other simulation and experimental results, also help describe the many factors and assumptions that determine the degree of success in applying robust control theory to practical problems.

  2. Equivalence principle and bound kinetic energy.

    PubMed

    Hohensee, Michael A; Müller, Holger; Wiringa, R B

    2013-10-11

    We consider the role of the internal kinetic energy of bound systems of matter in tests of the Einstein equivalence principle. Using the gravitational sector of the standard model extension, we show that stringent limits on equivalence principle violations in antimatter can be indirectly obtained from tests using bound systems of normal matter. We estimate the bound kinetic energy of nucleons in a range of light atomic species using Green's function Monte Carlo calculations, and for heavier species using a Woods-Saxon model. We survey the sensitivities of existing and planned experimental tests of the equivalence principle, and report new constraints, at levels between a few parts in 10⁶ and parts in 10⁸, on violations of the equivalence principle for matter and antimatter.

  3. Litho hotspots fixing using model based algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Meili; Yu, Shirui; Mao, Zhibiao; Shafee, Marwa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Hu, Xinyi; Wan, Qijian; Du, Chunshan

    2017-04-01

    As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer's workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.

  4. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions under which the properties hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
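
    The decomposition sketched above follows the classic asymmetric assume-guarantee rule; one standard formulation from the literature (not quoted from the paper) reads, for components M1 and M2, assumption A, and property P:

        \[
        \frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
              \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
             {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
        \]

    That is, if M1 satisfies P under assumption A, and M2 unconditionally satisfies A, then the parallel composition satisfies P; the generated design-level assumptions play the role of A when the implementation is checked.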

  5. Impact of warped extra dimensions on the dipole coefficients in b → sγ transitions

    NASA Astrophysics Data System (ADS)

    Malm, Raoul; Neubert, Matthias; Schmell, Christoph

    2016-04-01

    We calculate the electro- and chromomagnetic dipole coefficients C_7γ,8g and C̃_7γ,8g in the context of the minimal Randall-Sundrum (RS) model with a Higgs sector localized on the IR brane using the five-dimensional (5D) approach, where the coefficients are expressed in terms of integrals over 5D propagators. Since we keep the full dependence on the Yukawa matrices, the integral expressions are formally valid to all orders in v²/M_KK². In addition we relate our results to the expressions obtained in the Kaluza-Klein (KK) decomposed theory and show the consistency of both pictures analytically and numerically, which presents a non-trivial cross-check. In Feynman-'t Hooft gauge, the dominant corrections from virtual KK modes arise from the scalar parts of the W±-boson penguin diagrams, including the contributions from the scalar component of the 5D gauge-boson field and from the charged Goldstone bosons in the Higgs sector. The size of the KK corrections depends on the parameter y*, which sets the upper bound for the anarchic 5D Yukawa matrices. We find that for y* ≳ 1 the KK corrections are proportional to y*². We discuss the phenomenological implications of our results for the branching ratio Br(B̄ → X_s γ), the time-dependent CP asymmetry S_K*γ, the direct CP asymmetry A_CP(b → sγ) and the CP asymmetry difference ΔA_CP(b → sγ). We can derive a lower bound of 3.8 TeV on the first KK gluon resonance for y* = 3, requiring that at least 10% of the RS parameter space covers the experimental 2σ error margins. We further discuss the branching ratio Br(B̄ → X_s l⁺l⁻) and compare our predictions for C_7γ,9,10 and C̃_7γ,9,10 with phenomenological results derived from model-independent analyses.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Brooks, Dusty Marie

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g., confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
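
    The bootstrap step lends itself to a compact illustration. The sketch below builds a pointwise confidence band for the mean of a set of stress-versus-depth profiles by resampling whole profiles; the synthetic profiles and the plain nonparametric bootstrap are simplifying stand-ins for the semi-parametric functional-data procedure described above.

        import numpy as np

        rng = np.random.default_rng(3)
        depth = np.linspace(0.0, 1.0, 50)
        # Hypothetical weld residual stress profiles (rows = sources, MPa).
        profiles = 200.0 * np.cos(3.0 * depth) + 30.0 * rng.standard_normal((12, 50))

        boot_means = np.array([
            profiles[rng.integers(0, 12, size=12)].mean(axis=0)   # resample rows
            for _ in range(2000)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
        print(lo[:3], hi[:3])   # pointwise 95% band for the mean profile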

  7. A Model-Free No-arbitrage Price Bound for Variance Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu

    2013-08-01

    We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
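
    The projection step is the heart of such an algorithm. A generic projected-gradient skeleton on a toy problem (minimizing a quadratic over a box; the finite-difference discretization of the actual variance-option problem is not reproduced here):

        import numpy as np

        c = np.array([1.5, -0.3, 0.4])
        grad = lambda x: 2.0 * (x - c)               # gradient of ||x - c||^2
        project = lambda x: np.clip(x, 0.0, 1.0)     # projection onto [0, 1]^n

        x, step = np.zeros(3), 0.1
        for _ in range(200):
            x = project(x - step * grad(x))          # projected gradient step
        print(x)   # converges to the projection of c: [1.0, 0.0, 0.4]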

  8. Coexistence of bounded and unbounded motions in a bouncing ball model

    NASA Astrophysics Data System (ADS)

    Marò, Stefano

    2013-05-01

    We consider the model describing the vertical motion of a ball falling with constant acceleration onto a wall, from which it is elastically reflected. The wall is supposed to move in the vertical direction according to a given periodic function f. We apply Aubry-Mather theory to the generating function in order to prove the existence of bounded motions with prescribed mean time between the bounces. As the existence of unbounded motions is known, it is possible to find a class of functions f that allow both bounded and unbounded motions.

  9. Re-derived overclosure bound for the inert doublet model

    NASA Astrophysics Data System (ADS)

    Biondini, S.; Laine, M.

    2017-08-01

    We apply a formalism accounting for thermal effects (such as a modified Sommerfeld effect, Salpeter correction, decohering scatterings, and dissociation of bound states) to one of the simplest WIMP-like dark matter models, associated with an "inert" Higgs doublet. A broad temperature range T ∼ M/20 ... M/10⁴ is considered, stressing the importance and less-understood nature of late annihilation stages. Even though only weak interactions play a role, we find that resummed real and virtual corrections increase the tree-level overclosure bound by 1-18%, depending on quartic couplings and mass splittings.

  10. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  11. Flight control application of new stability robustness bounds for linear uncertain systems

    NASA Technical Reports Server (NTRS)

    Yedavalli, Rama K.

    1993-01-01

    This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds, since the technique is designed specifically for real parameter perturbations (in contrast to specializing the complex variation case to the real parameter case). The proposed theory is illustrated with application to several flight control examples.

  12. Social-cognitive determinants of the tick check: a cross-sectional study on self-protective behavior in combatting Lyme disease.

    PubMed

    van der Heijden, Amy; Mulder, Bob C; Poortvliet, P Marijn; van Vliet, Arnold J H

    2017-11-25

    Performing a tick check after visiting nature is considered the most important preventive measure to avoid contracting Lyme disease. Checking the body for ticks after visiting nature is the only measure that can fully guarantee whether one has been bitten by a tick and provides the opportunity to remove the tick as soon as possible, thereby greatly reducing the chance of contracting Lyme disease. However, compliance to performing the tick check is low. In addition, most previous studies on determinants of preventive measures to avoid Lyme disease lack a clear definition and/or operationalization of the term "preventive measures". Those that do distinguish multiple behaviors including the tick check, fail to describe the systematic steps that should be followed in order to perform the tick check effectively. Hence, the purpose of this study was to identify determinants of systematically performing the tick check, based on social cognitive theory. A cross-sectional self-administered survey questionnaire was filled out online by 508 respondents (M age = 51.7, SD = 16.0; 50.2% men; 86.4% daily or weekly nature visitors). Bivariate correlations and multivariate regression analyses were conducted to identify associations between socio-cognitive determinants (i.e. concepts related to humans' intrinsic and extrinsic motivation to perform certain behavior) and the tick check, and between socio-cognitive determinants and proximal goal to do the tick check. The full regression model explained 28% of the variance in doing the tick check. Results showed that performing the tick check was associated with proximal goal (β = .23, p < 0.01), self-efficacy (β = .22, p < 0.01), self-evaluative outcome expectations (β = .21, p < 0.01), descriptive norm (β = .16, p < 0.01), and experience (β = .13, p < 0.01). Our study is among the first to examine the determinants of systematic performance of the tick check, using an extended version of social cognitive theory to identify determinants. Based on the results, a number of practical recommendations can be made to promote the performance of the tick check.

  13. Curvature bound from gravitational catalysis

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Martini, Riccardo

    2018-04-01

    We determine bounds on the curvature of local patches of spacetime from the requirement of intact long-range chiral symmetry. The bounds arise from a scale-dependent analysis of gravitational catalysis and its influence on the effective potential for the chiral order parameter, as induced by fermionic fluctuations on a curved spacetime with local hyperbolic properties. The bound is expressed in terms of the local curvature scalar measured in units of a gauge-invariant coarse-graining scale. We argue that any effective field theory of quantum gravity obeying this curvature bound is safe from chiral symmetry breaking through gravitational catalysis and thus compatible with the simultaneous existence of chiral fermions in the low-energy spectrum. With increasing number of dimensions, the curvature bound in terms of the hyperbolic scale parameter becomes stronger. Applying the curvature bound to the asymptotic safety scenario for quantum gravity in four spacetime dimensions translates into bounds on the matter content of particle physics models.

  14. Correlations and sum rules in a half-space for a quantum two-dimensional one-component plasma

    NASA Astrophysics Data System (ADS)

    Jancovici, B.; Šamaj, L.

    2007-05-01

    This paper is the continuation of a previous one (Šamaj and Jancovici, 2007 J. Stat. Mech. P02002); for a nearly classical quantum fluid in a half-space bounded by a plane hard wall (no image forces), we had generalized the Wigner-Kirkwood expansion of the equilibrium statistical quantities in powers of Planck's constant ℏ. As a model system for a more detailed study, we consider the quantum two-dimensional one-component plasma: a system of charged particles of one species, interacting through the logarithmic Coulomb potential in two dimensions, in a uniformly charged background of opposite sign, such that the total charge vanishes. The corresponding classical system is exactly solvable in a variety of geometries, including the present one of a half-plane, when βe² = 2, where β is the inverse temperature and e is the charge of a particle: all the classical n-body densities are known. In the present paper, we have calculated the expansions of the quantum density profile and truncated two-body density up to order ℏ² (instead of only to order ℏ as in the previous paper). These expansions involve the classical n-body densities up to n = 4; thus we obtain exact expressions for these quantum expansions in this special case. For the quantum one-component plasma, two sum rules involving the truncated two-body density (and, for one of them, the density profile) were derived a long time ago by using heuristic macroscopic arguments: one sum rule concerns the asymptotic form along the wall of the truncated two-body density; the other concerns the dipole moment of the structure factor. In the two-dimensional case at βe² = 2, we now have explicit expressions up to order ℏ² for these two quantum densities; thus we can microscopically check the sum rules at this order. The checks are positive, reinforcing the idea that the sum rules are correct.

  15. KSC-2012-3604

    NASA Image and Video Library

    2012-07-02

    CAPE CANAVERAL, Fla. – U.S. Senator Bill Nelson checks out NASA's first space-bound Orion capsule at NASA's Kennedy Space Center in Florida. Nelson and the spacecraft are in Kennedy's Operations and Checkout Building high bay for an event marking its arrival at Kennedy. Slated for Exploration Flight Test-1, an uncrewed mission planned for 2014, the capsule will travel farther into space than any human spacecraft has gone in more than 40 years. The capsule was shipped to Kennedy from NASA's Michoud Assembly Facility in New Orleans where the crew module pressure vessel was built. The Orion production team will prepare the module for flight at Kennedy by installing heat-shielding thermal protection systems, avionics and other subsystems. For more information, visit http://www.nasa.gov/orion. Photo credit: NASA/Kim Shiflett

  16. Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks.

    PubMed

    Arunkumar, A; Sakthivel, R; Mathiyalagan, K; Park, Ju H

    2014-07-01

    This paper focuses on the issue of robust stochastic stability for a class of uncertain fuzzy Markovian jumping discrete-time neural networks (FMJDNNs) with various activation functions and mixed time delays. By employing the Lyapunov technique and the linear matrix inequality (LMI) approach, a new set of delay-dependent sufficient conditions is established for the robust stochastic stability of uncertain FMJDNNs. More precisely, the parameter uncertainties are assumed to be time-varying, unknown, and norm bounded. The obtained stability conditions are established in terms of LMIs, which can be easily checked using the MATLAB LMI toolbox. Finally, numerical examples with simulation results are provided to illustrate the effectiveness and reduced conservativeness of the obtained results. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  18. 75 FR 43801 - Airworthiness Directives; Eurocopter France (ECF) Model EC225LP Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... time. Also, we use inspect rather than check when referring to an action required by a mechanic as... the various levels of government. Therefore, I certify this AD: 1. Is not a ``significant regulatory... compliance time. Also, we use inspect rather than check when referring to an action required by a mechanic as...

  19. Mandatory Identification Bar Checks: How Bouncers Are Doing Their Job

    ERIC Educational Resources Information Center

    Monk-Turner, Elizabeth; Allen, John; Casten, John; Cowling, Catherine; Gray, Charles; Guhr, David; Hoofnagle, Kara; Huffman, Jessica; Mina, Moises; Moore, Brian

    2011-01-01

    The behavior of bouncers at on-site establishments that served alcohol was observed. Our aim was to better understand how bouncers went about their job when the bar had a mandatory policy to check identification of all customers. Utilizing an ethnographic decision model, we found that bouncers were significantly more likely to card customers that…

  20. Enhancing Classroom Management Using the Classroom Check-up Consultation Model with In-Vivo Coaching and Goal Setting Components

    ERIC Educational Resources Information Center

    Kleinert, Whitney L.; Silva, Meghan R.; Codding, Robin S.; Feinberg, Adam B.; St. James, Paula S.

    2017-01-01

    Classroom management is essential to promote learning in schools, and as such it is imperative that teachers receive adequate support to maximize their competence implementing effective classroom management strategies. One way to improve teachers' classroom managerial competence is through consultation. The Classroom Check-Up (CCU) is a structured…

  1. 76 FR 18964 - Airworthiness Directives; Costruzioni Aeronautiche Tecnam srl Model P2006T Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Landing Gear retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on... condition for the specified products. The MCAI states: During Landing Gear retraction/extension ground... retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on the nose landing...

  2. 78 FR 69987 - Airworthiness Directives; Erickson Air-Crane Incorporated Helicopters (Type Certificate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-22

    ... to require recurring checks of the Blade Inspection Method (BIM) indicator on each blade to determine whether the BIM indicator is signifying that the blade pressure may have been compromised by a blade crack... check procedures for BIM blades installed on the Model S-64E and S-64F helicopters. Several blade spars...

  3. Motivational Interviewing for Effective Classroom Management: The Classroom Check-Up. Practical Intervention in the Schools Series

    ERIC Educational Resources Information Center

    Reinke, Wendy M.; Herman, Keith C.; Sprick, Randy

    2011-01-01

    Highly accessible and user-friendly, this book focuses on helping K-12 teachers increase their use of classroom management strategies that work. It addresses motivational aspects of teacher consultation that are essential, yet often overlooked. The Classroom Check-Up is a step-by-step model for assessing teachers' organizational, instructional,…

  4. 75 FR 63045 - Airworthiness Directives; BAE SYSTEMS (OPERATIONS) LIMITED Model BAe 146 and Avro 146-RJ Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-14

    ... the fitting and wing structure. Checking the nuts with a suitable torque spanner to the specifications in the torque figures shown in Table 2. of the Accomplishment Instructions of BAE SYSTEMS (OPERATIONS... installed, and Doing either an ultrasonic inspection for damaged bolts or torque check of the tension bolts...

  5. 76 FR 13069 - Airworthiness Directives; BAE Systems (Operations) Limited Model ATP Airplanes; BAE Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    ... Recently, during a walk round check, an operator found an aileron trim tab hinge pin that had migrated sufficiently to cause a rubbing...

  6. Measures and limits of models of fixation selection.

    PubMed

    Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter

    2011-01-01

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection, which should therefore be reported when a new model is introduced.
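
    As an illustration of the two recommended measures, the following is a minimal numpy sketch (not the authors' released python code; the function names, the saliency-map input and the uniform sampling of non-fixated pixels are assumptions) that scores a saliency map by a rank-based AUC and by a crudely regularized KL-divergence. Ties in the AUC and the paper's small-sample entropy correction are deliberately left out for brevity.

      import numpy as np

      def fixation_auc(saliency, fix_rows, fix_cols, n_neg=10000, seed=0):
          """AUC for a saliency map: how well saliency values separate
          fixated pixels from randomly sampled pixels (rank-based
          Mann-Whitney AUC; ties are ignored in this sketch)."""
          rng = np.random.default_rng(seed)
          pos = saliency[fix_rows, fix_cols]
          neg = saliency[rng.integers(0, saliency.shape[0], n_neg),
                         rng.integers(0, saliency.shape[1], n_neg)]
          scores = np.concatenate([pos, neg])
          labels = np.concatenate([np.ones(pos.size), np.zeros(n_neg)])
          order = np.argsort(scores)
          ranks = np.empty_like(order, dtype=float)
          ranks[order] = np.arange(1, scores.size + 1)
          u = ranks[labels == 1].sum() - pos.size * (pos.size + 1) / 2
          return u / (pos.size * n_neg)

      def kl_divergence(p_counts, q_model, eps=1e-12):
          """KL(P||Q) between an empirical fixation histogram and a model
          density; eps regularizes empty bins (a crude guard, not the
          paper's small-sample correction)."""
          p = p_counts / p_counts.sum()
          q = q_model / q_model.sum()
          mask = p > 0
          return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))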

  7. Estimation of Rainfall Rates from Passive Microwave Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Sharma, Awdhesh Kumar

    Rainfall rates have been estimated using the passive microwave and visible/infrared remote sensing techniques. Data of September 14, 1978 from the Scanning Multichannel Microwave Radiometer (SMMR) on board SEA SAT-A and the Visible and Infrared Spin Scan Radiometer (VISSR) on board GOES-W (Geostationary Operational Environmental Satellite - West) were obtained and analyzed for rainfall rate retrieval. Microwave brightness temperatures (MBT) are simulated using the microwave radiative transfer model (MRTM) and atmospheric scattering models. These MBT were computed as a function of rates of rainfall from precipitating clouds which are in a combined phase of ice and water. Microwave extinction due to ice and liquid water is calculated using Mie theory and Gamma drop size distributions. Microwave absorption due to oxygen and water vapor is based on the schemes given by Rosenkranz, and Barret and Chung. The scattering phase matrix involved in the MRTM is found using Eddington's two-stream approximation. The surface effects due to winds and foam are included through the ocean surface emissivity model. Rainfall rates are then inverted from MBT using the optimization technique "Leaps and Bounds" and multiple linear regression leading to a relationship between the rainfall rates and MBT. This relationship has been used to infer the oceanic rainfall rates from SMMR data. The VISSR data has been inverted for the rainfall rates using Griffith's scheme. This scheme provides an independent means of estimating rainfall rates for cross-checking SMMR estimates. The inferred rainfall rates from both techniques have been plotted on a world map for comparison. A reasonably good correlation has been obtained between the two estimates.

  8. Stochastic Local Search for Core Membership Checking in Hedonic Games

    NASA Astrophysics Data System (ADS)

    Keinänen, Helena

    Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.
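
    Since the abstract does not spell out the algorithms, the following is a hedged Python sketch of one way a stochastic local search for core membership checking could look, assuming an additively separable hedonic game (valuation matrix v); it is illustrative only, not the paper's algorithms. A returned coalition is a certificate that the partition is blocked, i.e. not in the core, while None is inconclusive because the search is incomplete.

      import numpy as np

      def blocking_coalition_search(v, partition, iters=10000, seed=0):
          """Random-walk local search for a blocking coalition.
          v[i][j] = agent i's value for agent j; agent i's utility in a
          coalition S is sum of v[i][j] over j in S, j != i.
          partition: list of sets of agent indices."""
          rng = np.random.default_rng(seed)
          n = len(v)
          current = {i: S for S in partition for i in S}

          def util(i, S):
              return sum(v[i][j] for j in S if j != i)

          S = {int(rng.integers(n))}  # start from a random singleton
          for _ in range(iters):
              # S blocks the partition if every member strictly prefers it
              if all(util(i, S) > util(i, current[i]) for i in S):
                  return S
              i = int(rng.integers(n))  # flip a random agent in/out of S
              if i in S and len(S) > 1:
                  S = S - {i}
              else:
                  S = S | {i}
          return None  # search exhausted: core membership not refuted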

  9. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.

  10. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
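
    For reference, the check loss underlying such methods, as it is standardly defined in the quantile-regression literature (the paper's exact penalized estimator is not reproduced here), is

      \[ \rho_\tau(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr), \qquad 0 < \tau < 1, \]

    which grows linearly in |u| rather than quadratically, so individual outliers receive only bounded extra weight; τ = 1/2 recovers (half) the absolute loss of median regression.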

  11. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a system as complex as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.

  12. W-Z-top-quark bags

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crichigno, Marcos P.; Shuryak, Edward; Flambaum, Victor V.

    2010-10-01

    We discuss a new family of multiquanta bound states in the standard model which exist due to the mutual Higgs-based attraction of the heaviest members of the standard model, namely, the gauge quanta W, Z, and the (anti)top quarks t, t̄. We use a self-consistent mean-field approximation, up to a rather large particle number N. In this paper we do not focus on weakly bound, nonrelativistic bound states, but rather on 'bags' in which the Higgs vacuum expectation value is significantly modified or depleted. The minimal number N above which such states appear strongly depends on the ratio of the Higgs mass to the masses of W, Z, t: for a light Higgs mass, m_H ≈ 50 GeV, bound states start from N ≈ O(10), but for a ``realistic'' Higgs mass, m_H ≈ 100 GeV, one finds metastable/bound W, Z bags only for N ≈ O(1000). We also found that in the latter case pure top bags disappear for all N, although top quarks can still be well bound to the W bags. Anticipating the cosmological applications (discussed in the following Article [Phys. Rev. D 82, 073019]) of these bags as 'doorway states' for baryosynthesis, we also consider here the existence of such metastable bags at finite temperatures, when standard-model parameters such as the Higgs, gauge, and top masses are significantly modified.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakaguchi, Yuji, E-mail: nkgc2003@yahoo.co.jp; Ono, Takeshi; Onitsuka, Ryota

    COMPASS system (IBA Dosimetry, Schwarzenbruck, Germany) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL) are commercial quasi-3-dimensional (3D) dosimetry arrays. Cross-validation to compare them under the same conditions, such as a treatment plan, allows for clear evaluation of such measurement devices. In this study, we evaluated the accuracy of reconstructed dose distributions from the COMPASS system and ArcCHECK with 3DVH software using Monte Carlo simulation (MC) for multi-leaf collimator (MLC) test patterns and clinical VMAT plans. In a phantom study, ArcCHECK 3DVH showed clear differences from COMPASS, measurement and MC due to the detector resolution and the dose reconstruction method. In particular, ArcCHECK 3DVH showed a 7% difference from MC for the heterogeneous phantom. ArcCHECK 3DVH only corrects the 3D dose distribution of the treatment planning system (TPS) using the ArcCHECK measurement, and therefore the accuracy of ArcCHECK 3DVH depends on the TPS. In contrast, COMPASS showed good agreement with MC for all cases. However, the COMPASS system requires many complicated installation procedures such as beam modeling, and appropriate commissioning is needed. In terms of clinical cases, there were no large differences for each QA device. The accuracy of the COMPASS and ArcCHECK 3DVH systems for phantoms and clinical cases was compared. Both systems have advantages and disadvantages for clinical use, and consideration of the operating environment is important. QA system selection depends on the purpose and workflow in each hospital.

  14. Relativistic Few-Body Hadronic Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polyzou, Wayne

    2016-06-20

    The goal of this research proposal was to use ``few-body'' methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete, accurate experimental measurements on them. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are ``simple'', both the experiments and computations push the limits of technology. The important property of ``few-body'' systems is that the ``cluster property'' implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two- and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies, from zero-energy scattering to few-GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which, when compared with calculations of other groups, provided an essential check on these complicated calculations. In addition to computing bound state properties and scattering cross sections, we also computed electron scattering cross sections in few-nucleon and few-quark systems, which are sensitive to the electric currents in these systems. We produced the definitive review article on relativistic quantum mechanics, which has been used by many groups. In addition we developed and tested many computational techniques that are used by other groups. Many of these techniques have applications in other areas of physics. The research benefited from collaborations with physicists from many different institutions and countries. It also involved working with seventeen undergraduate and graduate students.

  15. Vacuum stability in the U(1)χ extended model with vanishing scalar potential at the Planck scale

    NASA Astrophysics Data System (ADS)

    Haba, Naoyuki; Yamaguchi, Yuya

    2015-09-01

    We investigate the vacuum stability in a scale-invariant local U(1)_χ model with vanishing scalar potential at the Planck scale. We find that it is impossible to realize the Higgs mass of 125 GeV while keeping the Higgs quartic coupling λ_H positive at all energy scales, that is, the same as in the standard model. Once one allows λ_H < 0, lower bounds on the Z' boson mass are obtained through the positive definiteness of the scalar mass squared eigenvalues, although these bounds are smaller than the LHC bounds. On the other hand, the upper bounds strongly depend on the number of relevant Majorana Yukawa couplings of the right-handed neutrinos, N_ν. Considering decoupling effects of the Z' boson and the right-handed neutrinos, the condition λ_φ > 0 on the singlet scalar quartic coupling gives an upper bound in the N_ν = 1 case, while it does not constrain the N_ν = 2 and 3 cases. In particular, we find that the Z' boson mass is tightly restricted in the N_ν = 1 case: M_{Z'} ≲ 3.7 TeV.

  16. Local Rademacher Complexity: sharper risk bounds with and without unlabeled samples.

    PubMed

    Oneto, Luca; Ghio, Alessandro; Ridella, Sandro; Anguita, Davide

    2015-05-01

    We derive in this paper a new Local Rademacher Complexity risk bound on the generalization ability of a model, which is able to take advantage of the availability of unlabeled samples. Moreover, this new bound improves state-of-the-art results even when no unlabeled samples are available. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Resolvent-based modeling of passive scalar dynamics in wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Dawson, Scott; Saxton-Fox, Theresa; McKeon, Beverley

    2017-11-01

    The resolvent formulation of the Navier-Stokes equations expresses the system state as the output of a linear (resolvent) operator acting upon a nonlinear forcing. Previous studies have demonstrated that a low-rank approximation of this linear operator predicts many known features of incompressible wall-bounded turbulence. In this work, this resolvent model for wall-bounded turbulence is extended to include a passive scalar field. This formulation allows for a number of additional simplifications that reduce model complexity. Firstly, it is shown that the effect of changing scalar diffusivity can be approximated through a transformation of spatial wavenumbers and temporal frequencies. Secondly, passive scalar dynamics may be studied through the low-rank approximation of a passive scalar resolvent operator, which is decoupled from velocity response modes. Thirdly, this passive scalar resolvent operator is amenable to approximation by semi-analytic methods. We investigate the extent to which this resulting hierarchy of models can describe and predict passive scalar dynamics and statistics in wall-bounded turbulence. The support of AFOSR under Grant Numbers FA9550-16-1-0232 and FA9550-16-1-0361 is gratefully acknowledged.

  18. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  19. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
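
    The core trick can be sketched in a few lines of Python (hypothetical names; the basis function, the exponential link and the plain inhomogeneous-Poisson likelihood are simplifying assumptions, not the paper's full model): the likelihood integral over [0, T] is handled by q-point Gauss-Legendre quadrature instead of a fine time grid.

      import numpy as np
      from numpy.polynomial.legendre import leggauss

      def neg_log_likelihood(beta, spike_times, basis, T, q=60):
          """Poisson point-process negative log-likelihood with rate
          lambda(t) = exp(beta . b(t)). The integral of the rate over
          [0, T] uses q-point Gauss-Legendre quadrature, so the cost
          scales with q rather than with a fine time discretization.
          basis: array of times -> (len(times), p) design matrix."""
          # event term: sum_i log lambda(t_i)
          event = basis(spike_times) @ beta
          # integral term: map nodes from [-1, 1] to [0, T]
          x, w = leggauss(q)
          t = 0.5 * T * (x + 1.0)
          integral = 0.5 * T * np.sum(w * np.exp(basis(t) @ beta))
          return -(event.sum() - integral)

    Minimizing this with scipy.optimize.minimize then needs only O(qp) memory per neuron, matching the scaling quoted above; the abstract's finding suggests q around 60 suffices for hippocampal data.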

  20. Lower Bounds to the Reliabilities of Factor Score Estimators.

    PubMed

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  1. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  2. Mirror energy difference and the structure of loosely bound proton-rich nuclei around A =20

    NASA Astrophysics Data System (ADS)

    Yuan, Cenxi; Qi, Chong; Xu, Furong; Suzuki, Toshio; Otsuka, Takaharu

    2014-04-01

    The properties of loosely bound proton-rich nuclei around A =20 are investigated within the framework of the nuclear shell model. In these nuclei, the strength of the effective interactions involving the loosely bound proton s1/2 orbit is significantly reduced in comparison with that of those in their mirror nuclei. We evaluate the reduction of the effective interaction by calculating the monopole-based-universal interaction (VMU) in the Woods-Saxon basis. The shell-model Hamiltonian in the sd shell, such as USD, can thus be modified to reproduce the binding energies and energy levels of the weakly bound proton-rich nuclei around A =20. The effect of the reduction of the effective interaction on the structure and decay properties of these nuclei is also discussed.

  3. Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction

    NASA Astrophysics Data System (ADS)

    Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi

    2017-06-01

    We study the performance of the lower bound of skew information in the vicinity of the transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of the quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information closely relates to the correlation length of the model and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate to detect the critical point of the XY spin chain with staggered Dzyaloshinskii-Moriya interaction.

  4. Continuous Opinion Dynamics Under Bounded Confidence:. a Survey

    NASA Astrophysics Data System (ADS)

    Lorenz, Jan

    Models of continuous opinion dynamics under bounded confidence have been presented independently by Krause and Hegselmann and by Deffuant et al. in 2000. They have raised a fair amount of attention in the communities of social simulation, sociophysics and complexity science. The researchers working on it come from disciplines such as physics, mathematics, computer science, social psychology and philosophy. In these models agents hold continuous opinions which they can gradually adjust if they hear the opinions of others. The idea of bounded confidence is that agents only interact if they are close in opinion to each other. Usually, the models are analyzed with agent-based simulations in a Monte Carlo style, but they can also be reformulated on the agent's density in the opinion space in a master equation style. The contribution of this survey is fourfold. First, it will present the agent-based and density-based modeling frameworks including the cases of multidimensional opinions and heterogeneous bounds of confidence. Second, it will give the bifurcation diagrams of cluster configuration in the homogeneous model with uniformly distributed initial opinions. Third, it will review the several extensions and the evolving phenomena which have been studied so far, and fourth it will state some open questions.
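
    A minimal Python sketch of the synchronous Hegselmann-Krause variant described above (parameter values are arbitrary illustrations): each agent repeatedly moves to the mean of all opinions within its confidence bound eps, and the rounded output makes the emerging clusters visible.

      import numpy as np

      def hegselmann_krause(n=200, eps=0.15, steps=50, seed=0):
          """Hegselmann-Krause bounded confidence dynamics: at each step
          every agent averages over all agents within distance eps."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.0, 1.0, n)  # uniform initial opinions
          for _ in range(steps):
              d = (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)
              x = (d @ x) / d.sum(axis=1)  # synchronous averaging update
          return np.sort(x)

      # cluster centers appear as repeated values after convergence
      print(np.unique(np.round(hegselmann_krause(), 2)))

    The Deffuant et al. variant differs only in updating one random pair per step; the same confidence test gates the interaction.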

  5. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.

  6. Robust inference in the negative binomial regression model with an application to falls data.

    PubMed

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well-known to be sensitive to model misspecifications, taking the form of patients falling much more than expected in such intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and their need for reliable inference. © 2014, The International Biometric Society.
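
    A hedged Python sketch of the first approach, bounding the Pearson residuals with a Huber psi-function inside the score equations (Cantoni-Ronchetti-style M-estimation): the dispersion alpha is treated as known, there are no covariate weights, and the Fisher-consistency correction term used in the article is omitted for brevity, so this is illustrative rather than the authors' estimator. All data and names are hypothetical.

      import numpy as np
      from scipy.optimize import root

      def huber_psi(r, c=1.345):
          return np.clip(r, -c, c)  # bounded influence of large residuals

      def robust_nb_score(beta, X, y, alpha, c=1.345):
          """Estimating equations for NB regression (log link) with
          bounded Pearson residuals. NB variance: V(mu) = mu + alpha*mu^2.
          NOTE: the Fisher-consistency correction E[psi] is omitted."""
          mu = np.exp(X @ beta)
          V = mu + alpha * mu ** 2
          r = (y - mu) / np.sqrt(V)          # Pearson residuals
          w = huber_psi(r, c) / np.sqrt(V)   # bounded response influence
          return X.T @ (w * mu)              # d mu / d beta = mu * x

      # solve the estimating equations on hypothetical data
      rng = np.random.default_rng(1)
      X = np.column_stack([np.ones(300), rng.normal(size=300)])
      y = rng.negative_binomial(n=2, p=0.4, size=300)
      sol = root(robust_nb_score, x0=np.zeros(2), args=(X, y, 0.5))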

  7. Coefficient of performance and its bounds with the figure of merit for a general refrigerator

    NASA Astrophysics Data System (ADS)

    Long, Rui; Liu, Wei

    2015-02-01

    A general refrigerator model with non-isothermal processes is studied. The coefficient of performance (COP) and its bounds at maximum χ figure of merit are obtained and analyzed. This model accounts for different heat capacities during the heat transfer processes. So, different kinds of refrigerator cycles can be considered. Under the constant heat capacity condition, the upper bound of the COP is the Curzon-Ahlborn (CA) coefficient of performance and is independent of the time durations of the heat exchanging processes. With the maximum χ criterion, in the refrigerator cycles, such as the reversed Brayton refrigerator cycle, the reversed Otto refrigerator cycle and the reversed Atkinson refrigerator cycle, where the heat capacity in the heat absorbing process is not less than that in the heat releasing process, their COPs are bounded by the CA coefficient of performance; otherwise, such as for the reversed Diesel refrigerator cycle, its COP can exceed the CA coefficient of performance. Furthermore, the general refined upper and lower bounds have been proposed.
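
    For orientation, in the notation common in this finite-time-thermodynamics literature (χ is the COP times the cooling load per cycle time), the Carnot and Curzon-Ahlborn coefficients of performance referred to above are

      \[ \varepsilon_C = \frac{T_c}{T_h - T_c}, \qquad \varepsilon_{CA} = \sqrt{1 + \varepsilon_C} - 1, \]

    so, for example, T_c = 260 K and T_h = 300 K give ε_C = 6.5 and ε_CA = √7.5 - 1 ≈ 1.74.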

  8. The metamorphosis of 'culture-bound' syndromes.

    PubMed

    Jilek, W G; Jilek-Aall, L

    1985-01-01

    Starting from a critical review of the concept of 'culture-bound' disorders and its development in comparative psychiatry, the authors present the changing aspects of two so-called culture-bound syndromes as paradigms of transcultural metamorphosis (koro) and intra-cultural metamorphosis (Salish Indian spirit sickness), respectively. The authors present recent data on epidemics of koro, which is supposedly bound to Chinese culture, in Thailand and India among non-Chinese populations. Neither the model of Oedipal castration anxiety nor the model of culture-specific pathogenicity, commonly adduced in psychiatric and ethnological literature, explain these phenomena. The authors' data on Salish Indian spirit sickness describes the contemporary condition as anomic depression, which is significantly different from its traditional namesake. The traditional concept was redefined by Salish ritual specialists in response to current needs imposed by social changes. The stresses involved in creating the contemporary phenomena of koro and spirit sickness are neither culture-specific nor culture-inherent, as postulated for 'culture-bound' syndromes, rather they are generated by a feeling of powerlessness caused by perceived threats to ethnic survival.

  9. STATIC QUARK ANTI-QUARK FREE AND INTERNAL ENERGY IN 2-FLAVOR QCD AND BOUND STATES IN THE QGP.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZANTOW, F.; KACZMAREK, O.

    2005-07-25

    We present results on heavy quark free energies in 2-flavour QCD. The temperature dependence of the interaction between static quark anti-quark pairs will be analyzed in terms of temperature dependent screening radii, which give a first estimate of the medium modification of (heavy quark) bound states in the quark gluon plasma. Comparing those radii to the (zero temperature) mean squared charge radii of charmonium states indicates that the J/Ψ may survive the phase transition as a bound state, while χ_c and Ψ' are expected to show significant thermal modifications at temperatures close to the transition. Furthermore we will analyze the relation between heavy quark free energies, entropy contributions and internal energy and discuss their relation to potential models used to analyze the melting of heavy quark bound states above the deconfinement temperature. Results of different groups and various potential models for bound states in the deconfined phase of QCD are compared.

  10. Classical Physics and the Bounds of Quantum Correlations.

    PubMed

    Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán

    2016-06-24

    A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.

  11. Biodistribution of charged F(ab')2 photoimmunoconjugates in a xenograft model of ovarian cancer.

    PubMed

    Duska, L R; Hamblin, M R; Bamberg, M P; Hasan, T

    1997-01-01

    The effect of charge modification of photoimmunoconjugates (PICs) on their biodistribution in a xenograft model of ovarian cancer was investigated. Chlorin e6 (c(e6)) was attached site-specifically to the F(ab')2 fragment of the murine monoclonal antibody OC125, directed against human ovarian cancer cells, via poly-l-lysine linkers carrying cationic or anionic charges. Preservation of immunoreactivity was checked by enzyme-linked immunosorbent assay (ELISA). PICs were radiolabelled with 125I and compared with non-specific rabbit IgG PICs after intraperitoneal (i.p.) injection into nude mice. Samples were taken from normal organs and tumour at 3 h and 24 h. Tumour-to-normal 125I ratios showed that the cationic OC125 F(ab')2 PIC had the highest tumour selectivity. Ratios for c(e6) were uniformly higher than for 125I, indicating that c(e6) became separated from 125I. OC125 F(ab')2 gave the highest tissue values of 125I, followed by the cationic OC125 F(ab')2 PIC; other species were much lower. The amounts of c(e6) delivered per gram of tumour were much higher for the cationic OC125 F(ab')2 PIC than for other species. The results indicate that cationic charge stimulates the endocytosis and lysosomal degradation of the OC125 F(ab')2-pl-c(e6) that has bound to the i.p. tumour. Positively charged PICs may have applications in the i.p. photoimmunotherapy of minimal residual ovarian cancer.

  12. The Problem of Limited Inter-rater Agreement in Modelling Music Similarity

    PubMed Central

    Flexer, Arthur; Grill, Thomas

    2016-01-01

    One of the central goals of Music Information Retrieval (MIR) is the quantification of similarity between or within pieces of music. These quantitative relations should mirror the human perception of music similarity, which is however highly subjective with low inter-rater agreement. Unfortunately this principal problem has been given little attention in MIR so far. Since it is not meaningful to have computational models that go beyond the level of human agreement, these levels of inter-rater agreement present a natural upper bound for any algorithmic approach. We will illustrate this fundamental problem in the evaluation of MIR systems using results from two typical application scenarios: (i) modelling of music similarity between pieces of music; (ii) music structure analysis within pieces of music. For both applications, we derive upper bounds of performance which are due to the limited inter-rater agreement. We compare these upper bounds to the performance of state-of-the-art MIR systems and show how the upper bounds prevent further progress in developing better MIR systems. PMID:28190932

  13. Fragment-based modelling of single stranded RNA bound to RNA recognition motif containing proteins

    PubMed Central

    de Beauchene, Isaure Chauvot; de Vries, Sjoerd J.; Zacharias, Martin

    2016-01-01

    Protein-RNA complexes are important for many biological processes. However, structural modeling of such complexes is hampered by the high flexibility of RNA. Particularly challenging is the docking of single-stranded RNA (ssRNA). We have developed a fragment-based approach to model the structure of ssRNA bound to a protein, based on only the protein structure, the RNA sequence and conserved contacts. The conformational diversity of each RNA fragment is sampled by an exhaustive library of trinucleotides extracted from all known experimental protein–RNA complexes. The method was applied to ssRNA with up to 12 nucleotides which bind to dimers of the RNA recognition motifs (RRMs), a highly abundant eukaryotic RNA-binding domain. The fragment based docking allows a precise de novo atomic modeling of protein-bound ssRNA chains. On a benchmark of seven experimental ssRNA–RRM complexes, near-native models (with a mean heavy-atom deviation of <3 Å from experiment) were generated for six out of seven bound RNA chains, and even more precise models (deviation < 2 Å) were obtained for five out of seven cases, a significant improvement compared to the state of the art. The method is not restricted to RRMs but was also successfully applied to Pumilio RNA binding proteins. PMID:27131381

  14. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.

  15. Rotational relaxation of molecular hydrogen at moderate temperatures

    NASA Technical Reports Server (NTRS)

    Sharma, S. P.

    1994-01-01

    Using a coupled rotation-vibration-dissociation model the rotational relaxation times for molecular hydrogen as a function of final temperature (500-5000 K), in a hypothetical scenario of sudden compression, are computed. The theoretical model is based on a master equation solver. The bound-bound and bound-free transition rates have been computed using a quasiclassical trajectory method. A review of the available experimental data on the rotational relaxation of hydrogen is presented, with a critical overview of the method of measurements and data reduction, including the sources of errors. These experimental data are then compared with the computed results.

  16. Thomson scattering in the average-atom approximation.

    PubMed

    Johnson, W R; Nilsen, J; Cheng, K T

    2012-09-01

    The average-atom model is applied to study Thomson scattering of x-rays from warm dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave functions, and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Applications are given to dense hydrogen, beryllium, aluminum, and titanium plasmas. In the case of titanium, bound states are predicted to modify the spectrum significantly.

  17. Probing the cosmic distance duality relation using time delay lenses

    NASA Astrophysics Data System (ADS)

    Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Holanda, R. F. L.

    2017-07-01

    The construction of the cosmic distance-duality relation (CDDR) has been widely studied. However, its consistency with various new observables remains a topic of interest. We present a new way to constrain the CDDR η(z) using different dynamic and geometric properties of strong gravitational lenses (SGL) along with SNe Ia observations. We use a sample of 102 SGL with measurements of the corresponding velocity dispersion σ_0 and Einstein radius θ_E. In addition, we also use a dataset of 12 two-image lensing systems containing the measured time delay Δt between source images. Jointly, these two datasets give us the angular diameter distance D_A^ol of the lens. Further, for the luminosity distance, we use the 740 observations from the JLA compilation of SNe Ia. To study the combined behavior of these datasets we use a model-independent method, the Gaussian Process (GP). We also check the efficiency of the GP by applying it to simulated datasets, which are generated in a phenomenological way by using realistic cosmological error bars. Finally, we conclude that the combined bounds from the SGL and SNe Ia observations do not favor any deviation of the CDDR and are in concordance with the standard value (η = 1) within the 2σ confidence region, which further strengthens the theoretical acceptance of the CDDR.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, Akshay; Mahajan, Shobhit; Mukherjee, Amitabha

    The construction of the cosmic distance-duality relation (CDDR) has been widely studied. However, its consistency with various new observables remains a topic of interest. We present a new way to constrain the CDDR η(z) using different dynamic and geometric properties of strong gravitational lenses (SGL) along with SNe Ia observations. We use a sample of 102 SGL with measurements of the corresponding velocity dispersion σ_0 and Einstein radius θ_E. In addition, we also use a dataset of 12 two-image lensing systems containing the measured time delay Δt between source images. Jointly, these two datasets give us the angular diameter distance D_A^ol of the lens. Further, for the luminosity distance, we use the 740 observations from the JLA compilation of SNe Ia. To study the combined behavior of these datasets we use a model-independent method, the Gaussian Process (GP). We also check the efficiency of the GP by applying it to simulated datasets, which are generated in a phenomenological way by using realistic cosmological error bars. Finally, we conclude that the combined bounds from the SGL and SNe Ia observations do not favor any deviation of the CDDR and are in concordance with the standard value (η = 1) within the 2σ confidence region, which further strengthens the theoretical acceptance of the CDDR.

  19. Evaluating and Improving a Learning Trajectory for Linear Measurement in Elementary Grades 2 and 3: A Longitudinal Study

    ERIC Educational Resources Information Center

    Barrett, Jeffrey E.; Sarama, Julie; Clements, Douglas H.; Cullen, Craig; McCool, Jenni; Witkowski-Rumsey, Chepina; Klanderman, David

    2012-01-01

    We examined children's development of strategic and conceptual knowledge for linear measurement. We conducted teaching experiments with eight students in grades 2 and 3, based on our hypothetical learning trajectory for length to check its coherence and to strengthen the domain-specific model for learning and teaching. We checked the hierarchical…

  20. 75 FR 39185 - Airworthiness Directives; The Boeing Company Model 747-100, 747-100B, 747-100B SUD, 747-200B, 747...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... requires repetitive inspections and torque checks of the hanger fittings and strut forward bulkhead of the forward engine mount... corrective actions are replacing the fasteners; removing loose fasteners; tightening all Group A...

  1. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

    Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
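
    The paper's rigorous ODE bounding is not reproduced here, but the flavor of probability bounds analysis can be sketched in Python for the easy special case of a model that is monotone in one uncertain parameter: bounds on the input quantile function push through directly to bounds on any output probability. All names and numbers below are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def propagate_pbox(f, q_lo, q_hi, ps=np.linspace(0.01, 0.99, 99)):
          """Push a p-box through a monotone increasing model f.
          q_lo(p) <= Q(p) <= q_hi(p) bound the quantile function of the
          uncertain input; the returned arrays bound the output quantiles,
          hence P(f(X) <= c) for any threshold c."""
          lo = np.array([f(q_lo(p)) for p in ps])
          hi = np.array([f(q_hi(p)) for p in ps])
          return ps, lo, hi

      # example: logistic growth after time t, uncertain rate r whose
      # quantile function lies between two normal quantile functions
      K, x0, t = 100.0, 5.0, 3.0
      growth = lambda r: K / (1 + (K / x0 - 1) * np.exp(-r * t))
      ps, lo, hi = propagate_pbox(growth,
                                  norm(0.8, 0.1).ppf,   # lower quantile bound
                                  norm(1.0, 0.1).ppf)   # upper quantile bound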

  2. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
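
    As one concrete instance of the model class, here is a hedged Python sketch of a zero-inflated Poisson likelihood fitted by quasi-Newton optimization, one of the two algorithm families the article compares (the EM alternative, standard-error computation and correlated-data extensions are not shown; the data and names are hypothetical).

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln, expit

      def zip_negloglik(theta, X, y):
          """Zero-inflated Poisson: with probability pi an observation is
          a structural zero, else Poisson(mu); logit(pi) = X a, log(mu) = X b."""
          p = X.shape[1]
          a, b = theta[:p], theta[p:]
          pi = expit(X @ a)
          mu = np.exp(X @ b)
          ll = np.where(y == 0,
                        np.log(pi + (1 - pi) * np.exp(-mu)),
                        np.log1p(-pi) - mu + y * np.log(mu) - gammaln(y + 1))
          return -ll.sum()

      # quasi-Newton (BFGS) fit on simulated zero-inflated counts
      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])
      y = rng.poisson(2.0, 500) * (rng.uniform(size=500) > 0.3)
      fit = minimize(zip_negloglik, np.zeros(2 * X.shape[1]),
                     args=(X, y), method="BFGS")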

  3. Visual Predictive Check in Models with Time-Varying Input Function.

    PubMed

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
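
    The pairing step this refinement relies on can be sketched as follows (hypothetical names; a diagonal, normalized-Euclidean scaling is used, and replacing it with the inverse covariance of the individual estimates gives the Mahalanobis version): each simulated parameter vector is matched to the individual whose estimated parameters are nearest, and that individual's measured input function is then used for the simulation.

      import numpy as np

      def match_input_functions(sim_params, indiv_params):
          """For each simulated parameter vector, pick the individual whose
          estimated parameters are closest in normalized Euclidean distance.
          sim_params: (S, p); indiv_params: (N, p). Returns indices (S,)."""
          sd = indiv_params.std(axis=0)  # per-parameter scaling
          d = ((sim_params[:, None, :] - indiv_params[None, :, :]) / sd) ** 2
          return d.sum(axis=2).argmin(axis=1)  # index of matched subject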

  4. Gender-Specific Models of Work-Bound Korean Adolescents' Social Supports and Career Adaptability on Subsequent Job Satisfaction

    ERIC Educational Resources Information Center

    Han, Hyojung; Rojewski, Jay W.

    2015-01-01

    A Korean national database, the High School Graduates Occupational Mobility Survey, was used to examine the influence of perceived social supports (family and school) and career adaptability on the subsequent job satisfaction of work-bound adolescents 4 months after their transition from high school to work. Structural equation modeling analysis…

  5. Integrability and chemical potential in the (3 + 1)-dimensional Skyrme model

    NASA Astrophysics Data System (ADS)

    Alvarez, P. D.; Canfora, F.; Dimakis, N.; Paliathanasis, A.

    2017-10-01

    Using a remarkable mapping from the original (3+1)-dimensional Skyrme model to the Sine-Gordon model, we construct the first analytic examples of Skyrmions as well as of Skyrmion-anti-Skyrmion bound states within a finite box in (3+1)-dimensional flat space-time. An analytic upper bound on the number of these Skyrmion-anti-Skyrmion bound states is derived. We compute the critical isospin chemical potential beyond which these Skyrmions cease to exist. With these tools, we also construct topologically protected time-crystals: time-periodic configurations whose time-dependence is protected by their non-trivial winding number. These are striking realizations of the ideas of Shapere and Wilczek. The critical isospin chemical potential for these time-crystals is determined.

  6. Model Checker for Java Programs

    NASA Technical Reports Server (NTRS)

    Visser, Willem

    2007-01-01

    Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs of up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state-explosion, as well as dynamic partial order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.

  7. Floquet resonant states and validity of the Floquet-Magnus expansion in the periodically driven Friedrichs models

    NASA Astrophysics Data System (ADS)

    Mori, Takashi

    2015-02-01

    The Floquet eigenvalue problem is analyzed for periodically driven Friedrichs models on discrete and continuous space. In the high-frequency regime, there exists a Floquet bound state consistent with the Floquet-Magnus expansion in the discrete Friedrichs model, while it is not the case in the continuous model. In the latter case, however, the bound state predicted by the Floquet-Magnus expansion appears as a metastable state whose lifetime diverges in the limit of large frequencies. We obtain the lifetime by evaluating the imaginary part of the quasienergy of the Floquet resonant state. In the low-frequency regime, there is no Floquet bound state and instead the Floquet resonant state with exponentially small imaginary part of the quasienergy appears, which is understood as the quantum tunneling in the energy space.

  8. Stability of the lepton bag model based on the Kerr–Newman solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burinskii, A., E-mail: bur@ibrae.ac.ru

    2015-11-15

    We show that the lepton bag model considered in our previous paper [10], generating the external gravitational and electromagnetic fields of the Kerr–Newman (KN) solution, is supersymmetric and represents a BPS-saturated soliton interpolating between the internal vacuum state and the external KN solution. We obtain Bogomolnyi equations for this phase transition and show that the Bogomolnyi bound determines all important features of this bag model, including its stable shape. In particular, for the stationary KN solution, the BPS bound provides stability of the ellipsoidal form of the bag and the formation of the ring–string structure at its border, while for the periodic electromagnetic excitations of the KN solution, the BPS bound controls the deformation of the surface of the bag, reproducing the known flexibility of bag models.

  9. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    NASA Technical Reports Server (NTRS)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and the initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensable pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.
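
    As background for the "collapse factor" terminology (our gloss, not part of the record): it is conventionally defined as the ratio of the pressurant mass actually required to the ideal mass obtained when the pressurant exchanges no heat or mass with the tank contents, e.g.

```latex
% Conventional definition (assumed here): CF compares actual pressurant use
% to the ideal-gas mass needed to fill the displaced liquid volume at tank
% pressure p and pressurant inlet temperature T_in (R = specific gas constant).
\mathrm{CF} = \frac{m_{\mathrm{actual}}}{m_{\mathrm{ideal}}},
\qquad
m_{\mathrm{ideal}} = \frac{p\,\Delta V_{\mathrm{liquid}}}{R\,T_{\mathrm{in}}}
```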

  10. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
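
    The alternation described above is a simple fixed-point iteration. A minimal sketch, with stub functions standing in for the real static analyzer and model checker (all names are illustrative, not from the paper's tool):

```python
def static_partial_order(program, aliases):
    # Stand-in: compute (possibly optimistic) independence information,
    # refined by whatever aliasing facts are currently known.
    return {"independent_pairs": max(0, len(program) - len(aliases))}

def model_check(program, po_info):
    # Stand-in: explore the reduced state space; report a verdict plus
    # the aliasing facts observed during exploration.
    return "no error found", {("p", "q")}

def verify(program):
    """Iterate static analysis and model checking to a fixed point."""
    aliases = set()
    while True:
        po_info = static_partial_order(program, aliases)
        verdict, observed = model_check(program, po_info)
        if observed <= aliases:       # no new aliasing: fixed point reached;
            return verdict            # the reduction is now safe and complete
        aliases |= observed           # otherwise refine and re-run both phases

print(verify("p = &x; q = &x; *p = 1"))
```

    Intermediate iterations may explore an unsafely reduced state space; the paper's point is that at the fixed point the partial-order information is safe, so the final run covers the whole (reduced) state space.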

  11. Improved bounds on the energy-minimizing strains in martensitic polycrystals

    NASA Astrophysics Data System (ADS)

    Peigney, Michaël

    2016-07-01

    This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.

  12. An artificial neural network to discover hypervelocity stars: candidates in Gaia DR1/TGAS

    NASA Astrophysics Data System (ADS)

    Marchetti, T.; Rossi, E. M.; Kordopatis, G.; Brown, A. G. A.; Rimoldi, A.; Starkenburg, E.; Youakim, K.; Ashley, R.

    2017-09-01

    The paucity of hypervelocity stars (HVSs) known to date has severely hampered their potential to investigate the stellar population of the Galactic Centre and the Galactic potential. The first Gaia data release (DR1, 2016 September 14) gives an opportunity to increase the current sample. The challenge is the disparity between the expected number of HVSs and that of bound background stars. We have applied a novel data mining algorithm based on machine learning techniques, an artificial neural network, to the Tycho-Gaia astrometric solution catalogue. With no pre-selection of data, we could exclude immediately ˜99 per cent of the stars in the catalogue and find 80 candidates with more than 90 per cent predicted probability to be HVSs, based only on their position, proper motions and parallax. We have cross-checked our findings with other spectroscopic surveys, determining radial velocities for 30 and spectroscopic distances for five candidates. In addition, follow-up observations have been carried out at the Isaac Newton Telescope for 22 stars, for which we obtained radial velocities and distance estimates. We discover 14 stars with a total velocity in the Galactic rest frame >400 km s-1, and five of these have a probability of >50 per cent of being unbound from the Milky Way. Tracing back their orbits in different Galactic potential models, we find one possible unbound HVS with v ˜ 520 km s-1, five bound HVSs and, notably, five runaway stars with median velocity between 400 and 780 km s-1. At the moment, uncertainties in the distance estimates and ages are too large to confirm the nature of our candidates by narrowing down their ejection location, and we wait for future Gaia releases to validate the quality of our sample. This test successfully demonstrates the feasibility of our new data-mining routine.

  13. Neighborhood social capital is associated with participation in health checks of a general population: a multilevel analysis of a population-based lifestyle intervention- the Inter99 study.

    PubMed

    Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta

    2015-07-22

    Participation in population-based preventive health checks has declined over the past decades. More research is needed to determine the factors that enhance participation. The objective of this study was to examine the association between two measures of neighborhood-level social capital and participation in the health check phase of a population-based lifestyle intervention. The study population comprised 12,568 residents of 73 Danish neighborhoods in the intervention group of a large population-based lifestyle intervention study, the Inter99. Two measures of social capital were applied: informal socializing and voting turnout. In a multilevel analysis adjusting only for age and sex, a higher level of neighborhood social capital was associated with a higher probability of participating in the health check. Inclusion of both individual socioeconomic position and neighborhood deprivation in the model attenuated the coefficients for informal socializing, while voting turnout became non-significant. A higher level of neighborhood social capital was associated with a higher probability of participating in the health check phase of a population-based lifestyle intervention. Most of the association between neighborhood social capital and participation in preventive health checks can be explained by differences in individual socioeconomic position and level of neighborhood deprivation. Nonetheless, there seems to be some residual association between social capital and health check participation, suggesting that activating social relations in the community may be an avenue for boosting participation rates in population-based health checks. ClinicalTrials.gov (registration no. NCT00289237).

  14. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies in contact is first formulated as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with a linear numerator and a quadratic convex denominator, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation in terms of the matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is obtained by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum, since the fraction function is pseudoconcave in a neighborhood of the solution. These two bounds are computed with problem dimensions equal only to the number of contact nodes or node pairs, which are much smaller than the dimension of the original problem, namely the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through examples concerning both frictionless and frictional contact to demonstrate its feasibility, efficiency, and robustness.
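
    Dinkelbach's algorithm, invoked above for the second bound, solves max f(x)/g(x) (with g > 0) through a sequence of parametric problems max over x of f(x) - lam*g(x). A toy sketch, with a brute-force grid standing in for the paper's structured maximization over the simplex:

```python
def dinkelbach(f, g, candidates, tol=1e-10, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach's method.

    Each iteration solves the parametric problem max_x f(x) - lam*g(x);
    at the optimum that maximum is ~0 and lam equals the optimal ratio.
    """
    x = candidates[0]
    lam = f(x) / g(x)
    for _ in range(max_iter):
        # Parametric subproblem (brute force here; the paper instead solves
        # a structured problem over the standard simplex).
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if f(x) - lam * g(x) < tol:   # F(lam) ~ 0 certifies optimality
            break
        lam = f(x) / g(x)             # update the ratio estimate
    return x, lam

# Toy fractional program: maximize (1 + 2x) / (1 + x^2) on a grid over [0, 2];
# the maximizer is near x = 0.618 with ratio 1.618 (the golden ratio).
grid = [i / 1000 for i in range(2001)]
print(dinkelbach(lambda x: 1 + 2 * x, lambda x: 1 + x * x, grid))
```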

  15. Microscopic analysis and simulation of check-mark stain on the galvanized steel strip

    NASA Astrophysics Data System (ADS)

    So, Hongyun; Yoon, Hyun Gi; Chung, Myung Kyoon

    2010-11-01

    When galvanized steel strip is produced through a continuous hot-dip galvanizing process, the thickness of the adhered zinc film is controlled by a plane impinging air jet referred to as an "air-knife system". In such a gas-jet wiping process, stains of check-mark or sag-line shape frequently appear. The check-mark defect appears as non-uniform zinc coating with oblique patterns such as "W", "V" or "X" on the coated surface. The present paper presents an analysis of the cause of check-mark formation and a numerical simulation of sag lines, using data produced by Large Eddy Simulation (LES) of the three-dimensional compressible turbulent flow field around the air-knife system. It was found that there are alternating plane-wise vortices near the impinging stagnation region and that these vortices move almost periodically to the right and to the left along the stagnation line due to jet flow instability. Meanwhile, in order to simulate the check-mark formation, a novel perturbation model has been developed to predict the variation of coating thickness along the transverse direction. Finally, the three-dimensional zinc coating surface was obtained with the present perturbation model. It was found that sag-line formation is determined by the combination of the instantaneous coating-thickness distribution along the transverse direction near the stagnation line and the feed speed of the steel strip.

  16. Predictors of Health Service Utilization Among Older Men in Jamaica.

    PubMed

    Willie-Tyndale, Douladel; McKoy Davis, Julian; Holder-Nevins, Desmalee; Mitchell-Fearon, Kathryn; James, Kenneth; Waldron, Norman K; Eldemire-Shearer, Denise

    2018-01-03

    To determine the relative influence of sociodemographic, socioeconomic, psychosocial, and health variables on health service utilization in the last 12 months. Data were analyzed for 1,412 men ≥60 years old from a 2012 nationally representative community-based survey in Jamaica. Associations between six health service utilization variables and several explanatory variables were explored. Logistic regression models were used to identify independent predictors of each utilization measure and determine the strengths of associations. More than 75% reported having health visits and blood pressure checks. Blood sugar (69.6%) and cholesterol (63.1%) checks were less common, and having a prostate check (35.1%) was the least utilized service. Adjusted models confirmed that the presence of chronic diseases and health insurance most strongly predicted utilization. A daughter or son as the main source of financial support (vs self) doubled or tripled, respectively, the odds of routine doctors' visits. Compared with primary or lower education, tertiary education doubled [2.37 (1.12, 4.95)] the odds of a blood pressure check. Regular attendance at club/society/religious organizations' meetings increased the odds of having a prostate check by 45%. Although need and financial resources most strongly influenced health service utilization, psychosocial variables may be particularly influential for underutilized services. © The Author(s) 2018. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.

    PubMed

    Gao, Hui; Song, Yongduan; Wen, Changyun

    In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational to establish the stability in the adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm shows the validation of the proposed controller.

  18. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.

  19. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
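
    For orientation, these are the standard comonotonic convex-order bounds for a sum S = X_1 + ... + X_n (the general form; the paper specializes it to random survival probabilities):

```latex
% Stop-loss (increasing convex) order bounds; U is uniform on (0,1) and
% Lambda is a conditioning random variable chosen by the analyst.
S^{\ell} = \sum_{i=1}^{n} \mathbb{E}[X_i \mid \Lambda]
\;\preceq_{cx}\; S \;\preceq_{cx}\;
S^{c} = \sum_{i=1}^{n} F_{X_i}^{-1}(U)
```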

  20. Graph Embedding Techniques for Bounding Condition Numbers of Incomplete Factor Preconditioning

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen

    1997-01-01

    We extend graph embedding techniques for bounding the spectral condition number of preconditioned systems involving symmetric, irreducibly diagonally dominant M-matrices to systems where the preconditioner is not diagonally dominant. In particular, this allows us to bound the spectral condition number when the preconditioner is based on an incomplete factorization. We provide a review of previous techniques, describe our extension, and give examples both of a bound for a model problem and of ways in which our techniques give an intuitive way of looking at incomplete factor preconditioners.

  1. Stability of proton-bound clusters of alkyl alcohols, aldehydes and ketones in Ion Mobility Spectrometry.

    PubMed

    Jurado-Campos, Natividad; Garrido-Delgado, Rocío; Martínez-Haya, Bruno; Eiceman, Gary A; Arce, Lourdes

    2018-08-01

    Significant substances in emerging applications of ion mobility spectrometry, such as breath analysis for clinical diagnostics and headspace analysis for food purity, include low-molar-mass alcohols, ketones, aldehydes and esters, which produce mobility spectra containing protonated monomers and proton-bound dimers. Spectra for all n-alcohols, aldehydes and ketones from carbon number three to eight exhibited protonated monomers and proton-bound dimers with ion drift times of 6.5-13.3 ms at ambient pressure and from 35 to 80 °C in nitrogen. Only n-alcohols from 1-pentanol to 1-octanol produced proton-bound trimers which were sufficiently stable to be observed at these temperatures, with drift times of 12.8-16.3 ms. Polar functional groups were protected in compact structures in ab initio models for proton-bound dimers of alcohols, ketones and aldehydes. Only alcohols formed a V-shaped arrangement for proton-bound trimers, strengthening ion stability and lifetime. In contrast, models for proton-bound trimers of aldehydes and ketones showed association of the third neutral through weak, non-specific, long-range interactions, consistent with ion dissociation in the ion mobility drift tube before arriving at the detector. Collision cross sections derived from reduced mobility coefficients in a nitrogen atmosphere support the predicted ion structures and approximate degrees of hydration. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Simulation-based MDP verification for leading-edge masks

    NASA Astrophysics Data System (ADS)

    Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki

    2017-07-01

    For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm, and the printed patterns of those features on masks written by VSB eBeam writers start to show large deviations from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and been adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from the target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for the assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose-margin-related hotspots can also be detected by setting a correct detection threshold. In this paper, we demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data. GPU acceleration is necessary to make simulation-based mask MDP verification acceptable.

  3. Vehicular traffic noise prediction using soft computing approach.

    PubMed

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

    A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
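
    A minimal sketch of this kind of pipeline (synthetic stand-in data, since the Patiala measurements are not reproduced here), using scikit-learn's Random Forest with 10-fold cross-validation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)

# Stand-in inputs: traffic volume (veh/h), % heavy vehicles, avg speed (km/h).
X = np.column_stack([
    rng.uniform(200, 3000, 500),
    rng.uniform(2, 30, 500),
    rng.uniform(20, 70, 500),
])
# Synthetic Leq (dB): roughly logarithmic in volume, plus noise.
y = 10 * np.log10(X[:, 0]) + 0.2 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"10-fold R^2: mean={r2.mean():.3f} +/- {r2.std():.3f}")
```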

  4. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated distribution function for modeling individual incomes, having its roots in the framework of κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to any other model known in the literature.

  5. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    NASA Astrophysics Data System (ADS)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrease is equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
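
    The GLUE procedure referred to above can be sketched as follows (hypothetical toy model and synthetic observations; a real study would use the water balance model and gauged discharge):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(96)  # 96 monthly time steps

def toy_model(k, s, t):
    # Stand-in for the lumped model: damped seasonal runoff (illustrative only).
    return s * (1 + np.sin(2 * np.pi * t / 12)) * np.exp(-k)

observed = toy_model(0.5, 10.0, t) + rng.normal(0, 0.8, t.size)

# 1) Monte Carlo sampling of parameter sets from prior ranges.
K = rng.uniform(0.0, 2.0, 5000)
S = rng.uniform(5.0, 15.0, 5000)
sims = np.array([toy_model(k, s, t) for k, s in zip(K, S)])

# 2) Informal likelihood (here Nash-Sutcliffe efficiency) per parameter set.
nse = 1 - ((sims - observed) ** 2).sum(1) / ((observed - observed.mean()) ** 2).sum()

# 3) Keep behavioural sets above a threshold; weight them by likelihood.
beh = sims[nse > 0.7]
w = nse[nse > 0.7]
w = w / w.sum()

# 4) Uncertainty bounds: weighted 5%/95% quantiles of simulated runoff per step.
lo, hi = [], []
for j in range(t.size):
    idx = np.argsort(beh[:, j])
    cum = np.cumsum(w[idx])
    lo.append(beh[idx, j][np.searchsorted(cum, 0.05)])
    hi.append(beh[idx, j][np.searchsorted(cum, 0.95)])
print(f"behavioural sets: {len(w)}, mean bound width: {np.mean(np.subtract(hi, lo)):.2f}")
```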

  6. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.

  7. Location contexts of user check-ins to model urban geo life-style patterns.

    PubMed

    Hasan, Samiul; Ukkusuri, Satish V

    2015-01-01

    Geo-location data from social media offers us information, in new ways, to understand people's attitudes and interests through their activity choices. In this paper, we explore the idea of inferring individual life-style patterns from activity-location choices revealed in social media. We present a model to understand life-style patterns using the contextual information (e.g., location categories) of user check-ins. Probabilistic topic models are developed to infer individual geo life-style patterns from two perspectives: i) to characterize the patterns of user interests in different types of places and ii) to characterize the patterns of user visits to different neighborhoods. The method is applied to a dataset of Foursquare check-ins of users from New York City. The co-existence of several location contexts and the corresponding probabilities in a given pattern provide useful information about user interests and choices. It is found that geo life-style patterns have similar items: either nearby neighborhoods or similar location categories. The semantic and geographic proximity of the items in a pattern reflects the hidden regularity in user preferences and location choice behavior.
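
    The kind of topic-model inference described above can be sketched with scikit-learn's LatentDirichletAllocation, treating each user's multiset of check-in location categories as a "document" (toy data; the category names are illustrative):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Each "document" is one user's check-in categories (illustrative data).
users = [
    "coffee office coffee gym office restaurant",
    "bar club restaurant bar club",
    "gym park coffee gym office",
    "club bar restaurant club bar club",
]
counts = CountVectorizer().fit(users)
X = counts.transform(users)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top categories per inferred "geo life-style pattern".
vocab = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = vocab[np.argsort(topic)[::-1][:3]]
    print(f"pattern {k}: {', '.join(top)}")

# Per-user mixtures over the inferred patterns:
print(lda.transform(X).round(2))
```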

  8. Constraint-Based Abstract Semantics for Temporal Logic: A Direct Approach to Design and Implementation

    NASA Astrophysics Data System (ADS)

    Banda, Gourinath; Gallagher, John P.

    Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.

  9. Oscillating-Linear-Drive Vacuum Compressor for CO2

    NASA Technical Reports Server (NTRS)

    Izenson, Michael G.; Shimko, Martin

    2005-01-01

    A vacuum compressor has been designed to compress CO2 from approximately equal to 1 psia (approximately equal to 6.9 kPa absolute pressure) to approximately equal to 75 psia (approximately equal to 0.52 MPa), to be insensitive to moisture, to have a long operational life, and to be lightweight, compact, and efficient. The compressor consists mainly of (1) a compression head that includes hydraulic diaphragms, a gas-compression diaphragm, and check valves; and (2) an oscillating linear drive that includes a linear motor and a drive spring, through which compression force is applied to the hydraulic diaphragms. The motor is driven at the resonance vibrational frequency of the motor/spring/compression-head system, the compression head acting as a damper that takes energy out of the oscillation. The net effect of the oscillation is to cause cyclic expansion and contraction of the gas-compression diaphragm, and, hence, of the volume bounded by this diaphragm. One-way check valves admit gas into this volume from the low-pressure side during expansion and allow the gas to flow out to the high-pressure side during contraction. Fatigue data and the results of diaphragm stress calculations have been interpreted as signifying that the compressor can be expected to have an operational life of greater than 30 years with a confidence level of 99.9 percent.

  10. Key management and encryption under the bounded storage model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work, we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.

  11. Perturbative unitarity constraints on gauge portals

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-12-01

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bound on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. We briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  12. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with the unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
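
    As a small illustration of the extreme learning machine step mentioned above (the generic ELM recipe: a random fixed hidden layer with a least-squares readout; not the paper's full MPC scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random hidden layer + least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the readout is trained
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy use: learn a nonlinear residue term g(x) = sin(3x) + 0.1 x^2.
X = rng.uniform(-2, 2, (400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 0] ** 2
model = elm_fit(X, y)
Xtest = np.linspace(-2, 2, 5).reshape(-1, 1)
print(elm_predict(model, Xtest).round(3))
```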

  13. Ensembles vs. information theory: supporting science under uncertainty

    NASA Astrophysics Data System (ADS)

    Nearing, Grey S.; Gupta, Hoshin V.

    2018-05-01

    Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.

  14. 76 FR 477 - Airworthiness Directives; Bombardier, Inc. Model CL-600-2A12 (CL-601) and CL-600-2B16 (CL-601-3A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... to these aircraft if Bombardier Service Bulletin (SB) 601-0590 [Scheduled Maintenance Instructions... information: Challenger 601 Time Limits/Maintenance Checks, PSP 601-5, Revision 38, dated June 19, 2009. Challenger 601 Time Limits/Maintenance Checks, PSP 601A-5, Revision 34, dated June 19, 2009. Challenger 604...

  15. Model Checking with Edge-Valued Decision Diagrams

    NASA Technical Reports Server (NTRS)

    Roux, Pierre; Siminiceanu, Radu I.

    2010-01-01

    We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.

  16. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

    Autonomous software holds the promise of new operation possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis on model-based autonomy, model-checking techniques, and concrete experiments at NASA.

  18. A shock spectra and impedance method to determine a bound for spacecraft structural loads

    NASA Technical Reports Server (NTRS)

    Bamford, R.; Trubert, M.

    1974-01-01

    A method to determine a bound of structural loads for a spacecraft mounted on a launch vehicle is developed. The method utilizes the interface shock spectra and the relative impedance of the spacecraft and launch vehicle. The method is developed for single-degree-of-freedom models and then generalized to multidegree-of-freedom models.

  19. Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Campola, M. J.

    2015-01-01

    New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.

  20. Education as Literacy for Freedom: Implications for Latin America and the Caribbean from an Upward Bound Project.

    ERIC Educational Resources Information Center

    Dottin, Erskine S.

    The Upward Bound Project for low income youth in Florida emphasizes humanistic education rather than education based on the capitalistic model of production, consumption, and competition. The project, which can serve as a model for education in developing countries, focuses on creating self-concepts and values to counteract those of an acquisitive…

  1. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  2. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
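
    For orientation, the quantity being maximized is the generic evidence lower bound for a Gaussian approximation q = N(m, C) and Gaussian prior p(x); the paper derives the explicit Poisson-specific form of this expression:

```latex
% Generic evidence lower bound maximized by the variational Gaussian
% approximation; equality holds iff q equals the exact posterior.
\log p(y) \;\ge\; F(m, C)
= \mathbb{E}_{q}\!\left[\log p(y \mid x)\right]
- \mathrm{KL}\!\left(\mathcal{N}(m, C)\,\|\,p(x)\right)
```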

  3. Quantum speed limit for arbitrary initial states

    PubMed Central

    Zhang, Ying-Jie; Han, Wei; Xia, Yun-Jie; Cao, Jun-Peng; Fan, Heng

    2014-01-01

    The minimal time a system needs to evolve from an initial state to an orthogonal state is defined as the quantum speed limit time, which can be used to characterize the maximal speed of evolution of a quantum system. This is a fundamental question of quantum physics. We investigate the generic bound on the minimal evolution time of an open dynamical quantum system. This quantum speed limit time is applicable to both mixed and pure initial states. We then apply this result to the damped Jaynes-Cummings model and the Ohmic-like dephasing model starting from a general time-evolution state. The bound of this time-dependent state at any point in time can be found. For the damped Jaynes-Cummings model, when the system starts from the excited state, the corresponding bound first decreases and then increases in the Markovian dynamics, while in the non-Markovian regime the speed limit time shows an interesting periodic oscillatory behavior. For the case of the Ohmic-like dephasing model, this bound becomes gradually trapped at a fixed value. In addition, the roles of relativistic effects on the speed limit time for an observer in non-inertial frames are discussed. PMID:24809395
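
    For reference, the closed-system bounds that such open-system results generalize are the Mandelstam-Tamm and Margolus-Levitin limits on the time to reach an orthogonal state (background material, not the paper's bound):

```latex
% Closed-system quantum speed limits for a pure state under a
% time-independent Hamiltonian: Delta E is the energy spread and
% <E> the mean energy above the ground-state energy.
\tau \;\ge\; \max\!\left(
    \frac{\pi \hbar}{2\,\Delta E},\;
    \frac{\pi \hbar}{2\,\langle E \rangle}
\right)
```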

  5. Bounding filter - A simple solution to lack of exact a priori statistics.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Weiss, I. M.

    1972-01-01

    Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.

  6. The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.

    PubMed

    Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin

    2018-04-05

    This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computational burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method introduces some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process applied to the Cramér-Rao bound of the continuous system. This avoids the complex integral and thereby reduces the computational burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
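
    For context, recursions of this kind typically take the Tichavský-Muravchik-Nehorai form for the Bayesian information matrix J_k (the standard background form; the paper adapts such a recursion to uncertain observations):

```latex
% Recursive posterior Cramer-Rao bound; the D_k^{ij} are expectation blocks
% of the negative Hessian of log p(x_{k+1}, y_{k+1} | x_k) with respect to
% (x_k, x_{k+1}), and D_k^{21} = (D_k^{12})^T.
J_{k+1} = D_k^{22} - D_k^{21}\,\big(J_k + D_k^{11}\big)^{-1} D_k^{12},
\qquad
\mathbb{E}\big[(\hat{x}_k - x_k)(\hat{x}_k - x_k)^{T}\big] \;\succeq\; J_k^{-1}
```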

  7. KSC-2012-3605

    NASA Image and Video Library

    2012-07-02

    CAPE CANAVERAL, Fla. – U.S. Senator Bill Nelson, left, checks out NASA's first space-bound Orion capsule at NASA's Kennedy Space Center in Florida. With Nelson in Kennedy's Operations and Checkout Building high bay for an event marking the spacecraft's arrival at Kennedy are NASA Deputy Director Lori Garver and Kennedy Director Robert Cabana. Slated for Exploration Flight Test-1, an uncrewed mission planned for 2014, the capsule will travel farther into space than any human spacecraft has gone in more than 40 years. The capsule was shipped to Kennedy from NASA's Michoud Assembly Facility in New Orleans where the crew module pressure vessel was built. The Orion production team will prepare the module for flight at Kennedy by installing heat-shielding thermal protection systems, avionics and other subsystems. For more information, visit http://www.nasa.gov/orion. Photo credit: NASA/Kim Shiflett

  8. Smart Sensor Node Development, Testing and Implementation for ISHM

    NASA Technical Reports Server (NTRS)

    Mengers, Timothy; Shipley, John; Merrill, Richard; Eggett, Mark; Lemon, Leon; Johnson, Mont; Morris, Jonathan; Figueroa, Fernando; Schmalzel, John; Turowski, Mark

    2007-01-01

    A main design criterion for a robust Integrated Systems Health Management (ISHM) system is best summed up by the statement "No data is better than bad data". Traditional data acquisition systems are calibrated in a controlled environment and guaranteed to perform within the bounds of their tested conditions. To successfully design and implement a real-world ISHM system, the data acquisition and signal conditioning need to function in an uncontrolled environment. Development and testing focus on a design with the ability to self-check in order to extend calibration intervals, report internal faults and drifts, and notify the overall system when the data acquisition is not performing as it should. All of this will be designed into a system that is flexible, requiring little redesign to be deployed on a wide variety of systems. Development progress and testing results will be reported.

  9. Low energy theorems and the unitarity bounds in the extra U(1) superstring inspired E{sub 6} models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, N.K.; Saxena, Pranav; Nagawat, Ashok K.

    2005-11-01

    The conventional method using low energy theorems derived by Chanowitz et al. [Phys. Rev. Lett. 57, 2344 (1986)] does not seem to lead to an explicit unitarity limit in the scattering processes of longitudinally polarized gauge bosons for the high energy case in the extra U(1) superstring inspired models, commonly known as the η model, emanating from the E_6 group of superstring theory. We have made use of an alternative procedure given by Durand and Lopez [Phys. Lett. B 217, 463 (1989)], which is applicable to supersymmetric grand unified theories. Explicit unitarity bounds on the superpotential couplings (identified as Yukawa couplings) are obtained both from unitarity constraints and from a renormalization group equation (RGE) analysis at the one-loop level utilizing critical-coupling concepts implying divergence of the scalar coupling at M_G. These are found to be consistent with finiteness over the entire range M_Z ≤ √s ≤ M_G, i.e., from the grand unification scale to the weak scale. For completeness, the same approach has been applied to the other models emanating from E_6, i.e., the χ, ψ, and ν models, and it is found that at the weak scale the unitarity bounds on the Yukawa couplings do not differ significantly among the E_6 extra U(1) models, except for the case of the χ model in the 16 representation. For the E_6 η model (β_E ≈ 9.64), the analysis using the unitarity constraints leads to the following bounds on various parameters: λ_t(max)(M_Z) = 1.294, λ_b(max)(M_Z) = 1.278, λ_H(max)(M_Z) = 0.955, λ_D(max)(M_Z) = 1.312. The analytical analysis of the RGE at the one-loop level provides the following critical bounds on the superpotential couplings: λ_t,c(M_Z) ≈ 1.295, λ_b,c(M_Z) ≈ 1.279, λ_H,c(M_Z) ≈ 0.968, λ_D,c(M_Z) ≈ 1.315. Thus the superpotential coupling values obtained by the two approaches are in good agreement. Using the unitarity-constrained superpotential couplings, we obtain theoretical bounds on physical mass parameters as follows: (i) an absolute upper bound on the top quark mass, m_t ≤ 225 GeV; (ii) an upper bound on the lightest neutral Higgs boson mass at tree level of m_{H_2^0}(tree) ≤ 169 GeV, which becomes m_{H_2^0} ≤ 229 GeV after inclusion of the one-loop radiative correction, when λ_t ≠ λ_b at the grand unified theory scale; these bounds are m_{H_2^0}(tree) ≤ 159 GeV and m_{H_2^0} ≤ 222 GeV, respectively, when λ_t = λ_b at the grand unified theory scale. A plausible range for the D-quark mass as a function of the mass scale M_{Z_2} is m_D ≈ O(3 TeV) for M_{Z_2} ≈ O(1 TeV) for the favored values of tanβ ≤ 1. The bounds on the aforesaid physical parameters in the case of the χ, ψ, and ν models in the 27 representation are almost identical to those of the η model and are consistent with present-day experimental precision measurements.

  10. Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language

    NASA Astrophysics Data System (ADS)

    Madhavapeddy, Anil

    Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.

  11. Data Needs for Stellar Atmosphere and Spectrum Modeling

    NASA Technical Reports Server (NTRS)

    Short, C. I.

    2006-01-01

    The main data need for stellar atmosphere and spectrum modeling remains atomic and molecular transition data, particularly energy levels and transition cross-sections. We emphasize that data is needed for bound-free (b-f) as well as bound-bound (b-b) transitions, and for collisional as well as radiative transitions. Data is now needed for polyatomic molecules as well as atoms, ions, and diatomic molecules. In addition, data for the formation of, and extinction due to, liquid and solid phase dust grains is needed. A prioritization of species and data types is presented, which gives emphasis to Fe group elements and to elements important for the investigation of nucleosynthesis and Galactic chemical evolution, such as the α-elements and n-capture elements. Special data needs for topical problems in the modeling of cool stars and brown dwarfs are described.

  12. Breakpoint-forced and bound long waves in the nearshore: A model comparison

    USGS Publications Warehouse

    List, Jeffrey H.; ,

    1993-01-01

    A finite-difference model is used to compare long wave amplitudes arising from two group-forced generation mechanisms in the nearshore: long waves generated at a time-varying breakpoint and the shallow-water extension of the bound long wave. Plane beach results demonstrate that the strong frequency selection in the outgoing wave predicted by the breakpoint-forcing mechanism may not be observable in field data due to this wave's relatively small size and its predicted phase relation with the bound wave. Over a bar/trough nearshore, it is shown that a strong frequency selection in shoreline amplitudes is not a unique result of the time-varying breakpoint model, but a general result of the interaction between topography and any broad-banded forcing of nearshore long waves.

  13. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows the designer (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an ℓ∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
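
    As a concrete illustration of the soft-constraint side (hedged: the constraint function, bounds, and sample count below are invented, and the paper derives closed-form upper bounds rather than sampling), a Monte Carlo estimate of the violation probability over a hyper-rectangular uncertainty set can be sketched as follows:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def g(p):
        # Illustrative inequality constraint g(p) <= 0; not from the paper.
        return p[0] ** 2 + 2.0 * p[1] - 1.0

    lo = np.array([-0.5, -0.5])  # componentwise lower bounds
    hi = np.array([0.5, 0.5])    # componentwise upper bounds

    n = 100_000
    samples = rng.uniform(lo, hi, size=(n, lo.size))  # uniform over the box
    violation_rate = np.mean([g(p) > 0.0 for p in samples])
    print("estimated P(constraint violation) =", violation_rate)

    # A hard constraint would instead require g(p) <= 0 for *every* p in the
    # box; sampling can only falsify that claim (one violating sample disproves it).
    ```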

  14. Tidal disruption of Periodic Comet Shoemaker-Levy 9 and a constraint on its mean density

    NASA Technical Reports Server (NTRS)

    Boss, Alan P.

    1994-01-01

    The apparent tidal disruption of Periodic Comet Shoemaker-Levy 9 (1993e) during a close encounter within approximately 1.62 planetary radii of Jupiter can be used along with theoretical models of tidal disruption to place an upper bound on the density of the predisruption body. Depending on the theoretical model used, these upper bounds range from ρ_c < 0.702 ± 0.080 g/cm³ for a simple analytical model calibrated by numerical smoothed particle hydrodynamics (SPH) simulations to ρ_c < 1.50 ± 0.17 g/cm³ for a detailed semianalytical model. The quoted uncertainties stem from an assumed uncertainty in the perijove radius. However, the uncertainty introduced by the different theoretical models is the major source of error; this uncertainty could be eliminated by future SPH simulations specialized to cometary disruptions, including the effects of initially prolate, spinning comets. If the SPH-based upper bound turns out to be most appropriate, it would be consistent with the predisruption body being a comet with a relatively low density and porous structure, as has been asserted previously based on observations of cometary outgassing. Regardless of which upper bound is preferable, the models all agree that the predisruption body could not have been a relatively high-density body, such as an asteroid with ρ ≈ 2 g/cm³.
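
    The scaling behind such density bounds can be sketched with the classical Roche criterion. This is only a hedged illustration, not the paper's calibrated models; the coefficient k (roughly 1.3 for rigid bodies up to 2.44 for incompressible fluid ones) carries exactly the model dependence the abstract identifies as the dominant source of error:

    ```latex
    q \;<\; k\,R_P\left(\frac{\rho_P}{\rho_c}\right)^{1/3}
    \quad\Longrightarrow\quad
    \rho_c \;\lesssim\; \rho_P\left(\frac{k\,R_P}{q}\right)^{3}
    ```

    With q ≈ 1.62 R_P and Jupiter's mean density ρ_P ≈ 1.33 g/cm³, sweeping k over that range moves the bound from below 1 g/cm³ to a few g/cm³, consistent with the spread between the quoted models.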

  15. The role of endocytic pathways in cellular uptake of plasma non-transferrin iron

    PubMed Central

    Sohn, Yang-Sung; Ghoti, Hussam; Breuer, William; Rachmilewitz, Eliezer; Attar, Samah; Weiss, Guenter; Cabantchik, Z. Ioav

    2012-01-01

    Background: In transfusional siderosis, the iron binding capacity of plasma transferrin is often surpassed, with concomitant generation of non-transferrin-bound iron. Although implicated in tissue siderosis, non-transferrin-bound iron modes of cell ingress remain undefined, largely because of its variable composition and association with macromolecules. Using fluorescent tracing of labile iron in endosomal vesicles and cytosol, we examined the hypothesis that non-transferrin-bound iron fractions detected in iron overloaded patients enter cells via bulk endocytosis. Design and Methods: Fluorescence microscopy and flow cytometry served as analytical tools for tracing non-transferrin-bound iron entry into endosomes with the redox-reactive macromolecular probe Oxyburst-Green and into the cytosol with cell-laden calcein green and calcein blue. Non-transferrin-bound iron-containing media were from sera of polytransfused thalassemia major patients and model iron substances detected in thalassemia major sera; cell models were cultured macrophages, and cardiac myoblasts and myocytes. Results: Exposure of cells to ferric citrate together with albumin, or to non-transferrin-bound iron-containing sera from thalassemia major patients, caused an increase in labile iron content of endosomes and cytosol in macrophages and cardiac cells. This increase was more striking in macrophages, but in both cell types was largely reduced by co-exposure to non-transferrin-bound iron-containing media with non-penetrating iron chelators or apo-transferrin, or by treatment with inhibitors of endocytosis. Endosomal iron accumulation traced with calcein-green was proportional to input non-transferrin-bound iron levels (r²=0.61) and also preventable by pre-chelation. Conclusions: Our studies indicate that macromolecule-associated non-transferrin-bound iron can initially gain access into various cells via endocytic pathways, followed by iron translocation to the cytosol. Endocytic uptake of plasma non-transferrin-bound iron is a possible mechanism that can contribute to iron loading of cell types engaged in bulk/adsorptive endocytosis, highlighting the importance of its prevention by iron chelation. PMID:22180428

  16. Investigation on extracellular polymeric substances, sludge flocs morphology, bound water release and dewatering performance of sewage sludge under pretreatment with modified phosphogypsum.

    PubMed

    Dai, Quxiu; Ma, Liping; Ren, Nanqi; Ning, Ping; Guo, Zhiying; Xie, Longgui; Gao, Haijun

    2018-06-06

    Modified phosphogypsum (MPG) was developed to improve the dewaterability of sewage sludge; the dewatering performance, properties of the treated sludge, composition and morphology distribution of extracellular polymeric substances (EPS), and a dynamic analysis and multiple regression model of bound water release were investigated. The results showed that addition of MPG caused EPS disintegration through charge neutralization. Destruction of EPS promoted the formation of larger sludge flocs and the release of bound water into the supernatant. Simultaneously, the content of organics with molecular weight between 1000 and 7000 Da in soluble EPS (SB-EPS) increased as more EPS dissolved into the liquid phase. In addition, about 8.8 kg·kg⁻¹ DS of bound water was released after pretreatment with a 40%DS MPG dosage. A multiple linear regression model for bound water release was established, showing that lower loosely bound EPS (LB-EPS) content and specific resistance to filtration (SRF) may improve dewatering performance, and that larger sludge flocs may be beneficial for sludge dewatering.

  17. Broken Replica Symmetry Bounds in the Mean Field Spin Glass Model

    NASA Astrophysics Data System (ADS)

    Guerra, Francesco

    By using a simple interpolation argument, in previous work we have proven the existence of the thermodynamic limit for mean field disordered models, including the Sherrington-Kirkpatrick model and the Derrida p-spin model. Here we extend this argument in order to compare the limiting free energy with the expression given by the Parisi Ansatz, including full spontaneous replica symmetry breaking. Our main result is that the quenched average of the free energy is bounded from below by the value given in the Parisi Ansatz, uniformly in the size of the system. Moreover, the difference between the two expressions is given in the form of a sum rule, extending our previous work on the comparison between the true free energy and its replica symmetric Sherrington-Kirkpatrick approximation. We also give a variational bound for the infinite volume limit of the ground state energy per site.

  18. Discrete persistent-chain model for protein binding on DNA.

    PubMed

    Lam, Pui-Man; Zhen, Yi

    2011-04-01

    We describe and solve a discrete persistent-chain model of protein binding on DNA, involving an extra variable σ_i at each site i of the DNA. This variable takes the value 1 or 0, depending on whether or not the site is occupied by a protein. In addition, if the site is occupied by a protein, there is an extra energy cost ε. For a small force, we obtain analytic expressions for the force-extension curve and the fraction of bound protein on the DNA. For higher forces, the model can be solved numerically to obtain force-extension curves and the average fraction of bound proteins as a function of applied force. Our model can be used to analyze experimental force-extension curves of protein binding on DNA, and hence deduce the number of bound proteins in the case of nonspecific binding.
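
    In the independent-site limit (a schematic that ignores the coupling to chain elasticity the paper actually solves), each occupation variable contributes a two-state factor to the partition function, giving the zero-force bound fraction directly; here β = 1/k_B T and ε is the occupation energy cost:

    ```latex
    Z_{\text{site}} \;=\; \sum_{\sigma = 0,1} e^{-\beta \varepsilon \sigma} \;=\; 1 + e^{-\beta\varepsilon},
    \qquad
    \langle\sigma\rangle \;=\; \frac{e^{-\beta\varepsilon}}{1 + e^{-\beta\varepsilon}}
    ```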

  19. Bounded excursion stable gravastars and black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocha, P; Miguelote, A Y; Chan, R

    2008-06-15

    Dynamical models of prototype gravastars were constructed in order to study their stability. The models are the Visser-Wiltshire three-layer gravastars, in which an infinitely thin spherical shell of stiff fluid divides the whole spacetime into two regions, where the internal region is de Sitter, and the external one is Schwarzschild. It is found that in some cases the models represent the 'bounded excursion' stable gravastars, where the thin shell is oscillating between two finite radii, while in other cases they collapse until the formation of black holes occurs. In the phase space, the region for the 'bounded excursion' gravastars is very small in comparison to that of black holes, but not empty. Therefore, although the possibility of the existence of gravastars cannot be excluded from such dynamical models, our results indicate that, even if gravastars do indeed exist, that does not exclude the possibility of the existence of black holes.

  20. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
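
    The path-feasibility queries at the heart of symbolic execution can be illustrated with an SMT solver. Below is a minimal sketch using the z3 Python bindings (the branching program and all names are invented; engines like the one described add abstraction-guided search on top of such queries):

    ```python
    # pip install z3-solver
    from z3 import Int, Solver, sat

    x = Int("x")  # symbolic stand-in for a concrete program input

    # Toy program under test:
    #   if x > 10:
    #       if x % 7 == 0:
    #           error()
    path_to_error = [x > 10, x % 7 == 0]  # branch conditions along one path

    s = Solver()
    s.add(*path_to_error)
    if s.check() == sat:
        # The model is a concrete input that drives execution down this path.
        print("error reachable, e.g. with input x =", s.model()[x])
    else:
        print("path infeasible; the error is unreachable along this path")
    ```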

  1. Diagnosis checking of statistical analysis in RCTs indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-11-01

    Statistical analysis is essential for reporting of the results of randomized controlled trials (RCTs), as well as evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. Our objective was to review all RCTs published in journals indexed in PubMed during December 2014 to provide a complete picture of how RCTs handle assumptions of statistical analysis. We reviewed all RCTs published in December 2014 that appeared in journals indexed in PubMed using the Cochrane highly sensitive search strategy. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27.2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8.6% and 7% checked for the generalized linear model, Cox proportional hazard model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of CONSORT guidelines) were not associated with the reporting of diagnosis checking. Diagnosis checking of the statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of diagnosis checking of assumptions.

  2. Neutrinos secretly converting to lighter particles to please both KATRIN and the cosmos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman; Hannestad, Steen, E-mail: yasaman@theory.ipm.ac.ir, E-mail: sth@phys.au.dk

    Within the framework of the Standard Model of particle physics and standard cosmology, observations of the Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO) set stringent bounds on the sum of the masses of neutrinos. If these bounds are satisfied, the upcoming KATRIN experiment, which is designed to probe neutrino mass down to ∼ 0.2 eV, will observe only a null signal. We show that the bounds can be relaxed by introducing new interactions for the massive active neutrinos, making neutrino masses in the range observable by KATRIN compatible with cosmological bounds. Within this scenario, neutrinos convert to new stable light particles by resonant production of intermediate states around a temperature of T ∼ keV in the early Universe, leading to a much less pronounced suppression of density fluctuations compared to the standard model.

  3. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  4. Selective determination of aluminum bound with tannin in tea infusion.

    PubMed

    Erdemoğlu, Sema B; Güçer, Seref

    2005-08-01

    In this study, an analytical method for indirect measurement of Al bound with tannin in tea infusion was studied. This method utilizes the ability of the tannins to precipitate with protein. Separation conditions were investigated using model solutions. This method is uncomplicated, inexpensive and suitable for real samples. About 34% of the total Al in brew extracted from commercially available teas was bound to condensed and hydrolyzable tannins.

  5. Non-College Bound Student Demonstration Project in Electronics and Laser-ElectroOptics--in Cooperation with Area High Schools, the Private Industry Council, and the Business Labor Council. Final Report.

    ERIC Educational Resources Information Center

    Alfano, Kathleen

    A model program was developed to increase the number of noncollege-bound students who were capable of succeeding in electronics and laser/electro-optics technology (LET) vocational training. The target population was noncollege-bound disadvantaged students, at least 60 percent minorities and women who were historically underrepresented in…

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At best, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method-independence of the convergence testing, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is necessary, which enables checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
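
    For contrast with MVA, the bootstrapping baseline the abstract mentions takes only a few lines once the sensitivity quantities have been computed: resample already-computed elementary effects and read off a confidence interval for the Morris mu* index (all numbers below are illustrative stand-ins):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    elementary_effects = rng.normal(1.0, 0.3, size=50)  # stand-in for real EEs

    def mu_star(ee):
        return np.abs(ee).mean()  # Morris mu* sensitivity index

    boot = np.array([
        mu_star(rng.choice(elementary_effects, size=elementary_effects.size, replace=True))
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"mu* = {mu_star(elementary_effects):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
    ```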

  7. A Late Cenozoic Kinematic Model for Deformation Within the Greater Cascadia Subduction System

    NASA Astrophysics Data System (ADS)

    Wilson, D. S.; McCrory, P. A.

    2016-12-01

    Relatively low fault slip rates have complicated efforts to characterize seismic hazards associated with the diffuse subduction boundary between North America and offshore oceanic plates in the Pacific Northwest region. A kinematic forward model that encompasses a broader region, and incorporates seismologic and geodetic as well as geologic and paleomagnetic constraints offers a tool for constraining fault rupture chronologies—all within a framework tracking relative motion of the Juan de Fuca, Pacific, and North American plates during late Cenozoic time. Our kinematic model tracks motions as a system of rigid microplates, bounded by the more important mapped faults of the region or zones of distributed deformation. Though our emphasis is on Washington and Oregon, the scope of the model extends eastward to the rigid craton in Montana and Wyoming, and southward to the Sierra Nevada block of California to provide important checks on its internal consistency. The model reproduces observed geodetic velocities [e.g., McCaffrey et al., 2013, JGR], for 6 Ma to present, with only minor reorganization for 12-6 Ma. Constraints for the older deformation history are based on paleomagnetic rotations within the Columbia River Basalt Group, and geologic details of fault offsets. Since 17 Ma, our model includes 50 km of N-S shortening across the central Yakima fold and thrust belt, substantial NW-SE right-lateral strike slip distributed among faults in the Washington Cascade Range, 90 km of shortening on thrusts of Puget Lowland, and substantial oroclinal bending of the Crescent Formation basement surrounding the Olympic Peninsula. This kinematic reconstruction provides an integrated, quantitative framework with which to investigate the motions of various PNW forearc and backarc blocks during late Cenozoic time, an essential tool for characterizing the seismic risk associated with the Puget Sound and Portland urban areas, hydroelectric dams, and other critical infrastructure.

  8. Local approximation of a metapopulation's equilibrium.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
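
    The limiting quantity referred to above is the equilibrium of Levins's classical metapopulation model, with colonization rate c and extinction rate e:

    ```latex
    \frac{dp}{dt} \;=\; c\,p\,(1-p) \;-\; e\,p
    \qquad\Longrightarrow\qquad
    p^{*} \;=\; 1 - \frac{e}{c} \quad (c > e)
    ```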

  9. Neighborhood deprivation is strongly associated with participation in a population-based health check.

    PubMed

    Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta

    2015-01-01

    We sought to examine whether neighborhood deprivation is associated with participation in a large population-based health check. Such analyses will help answer the question whether health checks, which are designed to meet the needs of residents in deprived neighborhoods, may increase participation and prove to be more effective in preventing disease. In Europe, no study has previously looked at the association between neighborhood deprivation and participation in a population-based health check. The study population comprised 12,768 persons invited for a health check including screening for ischemic heart disease and lifestyle counseling. The study population was randomly drawn from a population of 179,097 persons living in 73 neighborhoods in Denmark. Data on neighborhood deprivation (percentage with basic education, with low income and not in work) and individual socioeconomic position were retrieved from national administrative registers. Multilevel regression analyses with log links and binary distributions were conducted to obtain relative risks, intraclass correlation coefficients and proportional change in variance. Large differences between neighborhoods existed in both deprivation levels and neighborhood health check participation rate (mean 53%; range 35-84%). In multilevel analyses adjusted for age and sex, higher levels of all three indicators of neighborhood deprivation and a deprivation score were associated with lower participation in a dose-response fashion. Persons living in the most deprived neighborhoods had up to 37% decreased probability of participating compared to those living in the least deprived neighborhoods. Inclusion of individual socioeconomic position in the model attenuated the neighborhood deprivation coefficients, but all except for income deprivation remained statistically significant. Neighborhood deprivation was associated with participation in a population-based health check in a dose-response manner, in which increasing neighborhood deprivation was associated with decreasing participation. This suggests the need to develop preventive health checks tailored to deprived neighborhoods.
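
    The "log links and binary distributions" formulation reported above corresponds to a log-binomial regression, whose exponentiated coefficients are relative risks. A hedged sketch with statsmodels (the file and column names are hypothetical; the neighborhood structure is approximated here with cluster-robust standard errors rather than a full multilevel model, and log-binomial fits can fail to converge, in which case a Poisson working model is a common fallback):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("health_check_invitees.csv")  # hypothetical data set

    model = smf.glm(
        "participated ~ deprivation_score + age + sex",  # invented columns
        data=df,
        family=sm.families.Binomial(link=sm.families.links.Log()),
    )
    res = model.fit(cov_type="cluster", cov_kwds={"groups": df["neighborhood_id"]})

    print(np.exp(res.params))  # exponentiated coefficients = relative risks
    ```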

  10. Performance Evaluation of 3D Modeling Software for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black-box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  11. CheckMyMetal: a macromolecular metal-binding validation tool

    PubMed Central

    Porebski, Przemyslaw J.

    2017-01-01

    Metals are essential in many biological processes, and metal ions are modeled in roughly 40% of the macromolecular structures in the Protein Data Bank (PDB). However, a significant fraction of these structures contain poorly modeled metal-binding sites. CheckMyMetal (CMM) is an easy-to-use metal-binding site validation server for macromolecules that is freely available at http://csgid.org/csgid/metal_sites. The CMM server can detect incorrect metal assignments as well as geometrical and other irregularities in the metal-binding sites. Guidelines for metal-site modeling and validation in macromolecules are illustrated by several practical examples grouped by the type of metal. These examples show CMM users (and crystallographers in general) problems they may encounter during the modeling of a specific metal ion. PMID:28291757

  12. Method and system to perform energy-extraction based active noise control

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)

    2009-01-01

    A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
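
    The passivity check mentioned above can be probed numerically: a stable SISO LTI system is passive (positive real) only if the real part of its frequency response is nonnegative at every frequency. A minimal sketch that samples a frequency grid (the transfer function is invented; grid sampling suggests but does not prove passivity):

    ```python
    import numpy as np
    from scipy import signal

    # Invented example system: H(s) = (s + 2) / (s^2 + 3s + 2)
    H = signal.TransferFunction([1.0, 2.0], [1.0, 3.0, 2.0])

    w = np.logspace(-2, 3, 2000)          # frequency grid (rad/s)
    _, resp = signal.freqresp(H, w)

    worst = resp.real.min()
    verdict = "consistent with passivity" if worst >= 0 else "not passive"
    print(f"min Re H(jw) on grid = {worst:.4f} -> {verdict}")
    ```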

  13. Preparatory steps for a robust dynamic model for organically bound tritium dynamics in agricultural crops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melintescu, A.; Galeriu, D.; Diabate, S.

    2015-03-15

    The processes involved in tritium transfer in crops are complex and regulated by many feedback mechanisms. A fully mechanistic model is difficult to develop due to the complexity of the processes involved in tritium transfer and environmental conditions. First, existing models (ORYZA2000, CROPTRIT and WOFOST) are reviewed, presenting their features and limits. Second, the preparatory steps for a robust model are discussed, considering the role of dry matter and the contribution of photosynthesis to the OBT (Organically Bound Tritium) dynamics in crops.

  14. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
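
    The first two indices have closed forms given a fitted model's maximized log-likelihood ln L, parameter count k, and sample size n. A minimal helper (the numbers in the example are illustrative):

    ```python
    import math

    def aic(log_lik: float, k: int) -> float:
        """Akaike's information criterion: 2k - 2 ln L."""
        return 2 * k - 2 * log_lik

    def bic(log_lik: float, k: int, n: int) -> float:
        """Bayesian information criterion: k ln n - 2 ln L."""
        return k * math.log(n) - 2 * log_lik

    # Example: a 1-class vs. a 2-class mixture IRT model on n = 500 examinees.
    print(aic(-1200.0, k=20), bic(-1200.0, k=20, n=500))
    print(aic(-1150.0, k=41), bic(-1150.0, k=41, n=500))  # lower is better
    ```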

  15. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    PubMed

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.
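
    The central misconception is easy to demonstrate: with a skewed predictor and normal errors, a normality test applied to the outcome variable rejects even though the regression assumption holds. A short sketch with simulated data (illustrative, not from the review):

    ```python
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.exponential(size=200)               # predictor may be arbitrarily skewed
    y = 2.0 + 3.0 * x + rng.normal(0, 1, 200)   # the *errors* are what must be normal

    res = sm.OLS(y, sm.add_constant(x)).fit()

    print("Shapiro-Wilk on y (wrong target):  p =", round(stats.shapiro(y).pvalue, 4))
    print("Shapiro-Wilk on residuals (right): p =", round(stats.shapiro(res.resid).pvalue, 4))
    ```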

  16. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    PubMed Central

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking. PMID:28533971

  17. An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.

  18. 75 FR 36298 - Airworthiness Directives; McDonnell Douglas Corporation Model DC-8-31, DC-8-32, DC-8-33, DC-8-41...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... Airworthiness Limitations inspections (ALIs). This proposed AD results from a design review of the fuel tank...,'' and also adds ALI 30-1 for a pneumatic system decay check to minimize the risk of hot air impingement... 5, 2010, adds ALI 28-1, ``DC-8 Alternate and Center Auxiliary Tank Fuel Pump Control Systems Check...

  19. Development of a Model of Soldier Effectiveness: Retranslation Materials and Results

    DTIC Science & Technology

    1987-05-01

    covering financial responsibility, particularly the family checking account. Consequently, the bad check rate for the unit dropped from 70 a month... Alcohol, and Aggressive Acts... Showing prudence in financial management and responsibility in personal/family matters; avoiding alcohol and other drugs or... threatening others, etc. versus... Acting irresponsibly in financial or personal/family affairs such that command time is required to counsel or otherwise

  20. Evaluation of an Imputed Pitch Velocity Model of the Auditory Kappa Effect

    ERIC Educational Resources Information Center

    Henry, Molly J.; McAuley, J. Devin

    2009-01-01

    Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600…

  1. Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test

    NASA Astrophysics Data System (ADS)

    Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng

    2017-04-01

    Various models of quantum gravity imply Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have huge room for improvement in the future.
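
    For orientation, the Kempf-Mangano-Mann deformation referred to above modifies the uncertainty relation roughly as follows (a schematic form; β is the GUP parameter whose dimensionless version β₀ is what such experiments bound):

    ```latex
    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right),
    \qquad
    \beta \;=\; \frac{\beta_{0}}{(M_{\mathrm{Pl}}\,c)^{2}}
    ```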

  2. Absorption enhancement in type-II coupled quantum rings due to existence of quasi-bound states

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Ti; Lin, Shih-Yen; Chang, Shu-Wei

    2018-02-01

    The absorption of type-II nanostructures is often weaker than that of type-I counterparts due to spatially separated electrons and holes. We model the bound-to-continuum absorption of type-II quantum rings (QRs) with a multiband source-radiation approach based on the retarded Green function in the cylindrical coordinate system. The selection rules for allowed absorption transitions due to the circular symmetry are utilized. The bound-to-continuum absorptions of type-II GaSb coupled and uncoupled QRs embedded in a GaAs matrix are compared here. The GaSb QRs act as energy barriers for electrons but potential wells for holes. For the coupled QR structure, the region sandwiched between two QRs forms a potential reservoir of quasi-bound electrons. Electrons in these states, though they look like bound ones, would ultimately tunnel out of the reservoir through the barriers. Multiband perfectly-matched layers are introduced to model the tunneling of quasi-bound states into open space. Resonance peaks are observed in the absorption spectra of type-II coupled QRs due to the formation of quasi-bound states in the conduction bands, but no such resonances exist for the uncoupled QRs. The tunneling time of these metastable states can be extracted from the resonance and is on the order of ten femtoseconds. Absorption of coupled QRs is significantly enhanced compared to that of uncoupled ones in certain spectral windows of interest. These features may improve the performance of photon detectors and photovoltaic devices based on type-II semiconductor nanostructures.

  3. Dark energy and extended dark matter halos

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G.

    2012-03-01

    The cosmological mean matter (dark and baryonic) density measured in the units of the critical density is Ωm = 0.27. Independently, the local mean density is estimated to be Ωloc = 0.08-0.23 from recent data on galaxy groups at redshifts up to z = 0.01-0.03 (as published by Crook et al. 2007, ApJ, 655, 790 and Makarov & Karachentsev 2011, MNRAS, 412, 2498). If the lower values of Ωloc are reliable, as Makarov & Karachentsev and some other observers prefer, does this mean that the Local Universe of 100-300 Mpc across is an underdensity in the cosmic matter distribution? Or could it nevertheless be representative of the mean cosmic density or even be an overdensity due to the Local Supercluster therein. We focus on dark matter halos of groups of galaxies and check how much dark mass the invisible outer layers of the halos are able to host. The outer layers are usually devoid of bright galaxies and cannot be seen at large distances. The key factor which bounds the size of an isolated halo is the local antigravity produced by the omnipresent background of dark energy. A gravitationally bound halo does not extend beyond the zero-gravity surface where the gravity of matter and the antigravity of dark energy balance, thus defining a natural upper size of a system. We use our theory of local dynamical effects of dark energy to estimate the maximal sizes and masses of the extended dark halos. Using data from three recent catalogs of galaxy groups, we show that the calculated mass bounds conform with the assumption that a significant amount of dark matter is located in the invisible outer parts of the extended halos, sufficient to fill the gap between the observed and expected local matter density. Nearby groups of galaxies and the Virgo cluster have dark halos which seem to extend up to their zero-gravity surfaces. If the extended halo is a common feature of gravitationally bound systems on scales of galaxy groups and clusters, the Local Universe could be typical or even an overdense region, with a low density contrast ~1.
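
    The zero-gravity surface invoked here has a simple closed form: equating the Newtonian attraction of a mass M with the antigravity of the dark-energy background (effective gravitating density ρ + 3p/c² = −2ρ_Λ for the vacuum equation of state) gives

    ```latex
    a(r) \;=\; -\frac{GM}{r^{2}} + \frac{8\pi G}{3}\,\rho_{\Lambda}\,r \;=\; 0
    \quad\Longrightarrow\quad
    R_{\mathrm{ZG}} \;=\; \left(\frac{3M}{8\pi\rho_{\Lambda}}\right)^{1/3}
    ```

    so a gravitationally bound system of mass M cannot extend beyond R_ZG, which is what bounds the halo sizes and masses estimated above.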

  4. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. At best, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method-independence of the convergence testing, we applied it to three widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model-independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is necessary, which enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable, published sensitivity results.

  5. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability under the average scanning conditions expected from the Check-21 Act. We then present a novel method of measurement of distances between and rotations of line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally, we apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.

  6. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, with about 22 that are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of the uncertainties concerning the biosphere on very long timescales, stylised biosphere models are shown to provide a useful point of reference in themselves and remain a valuable tool for nuclear waste disposal licensing procedures.

  7. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    NASA Astrophysics Data System (ADS)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in the model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over the 20-year period 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series and, additionally, the annual maximum prediction bounds for each catchment. The results show that the model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by looking at the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge. These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
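
    The GLUE prediction-bound step described above reduces to a few array operations once the Monte Carlo simulations exist. A compact sketch with a stand-in exponential-recession model (the study itself uses FUSE structures; the behavioral threshold is a subjective GLUE choice):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def run_model(theta, t):
        # Stand-in for a lumped rainfall-runoff model (e.g., one FUSE structure).
        return theta[0] * np.exp(-theta[1] * t)

    t = np.arange(365)
    obs = run_model([5.0, 0.01], t) + rng.normal(0, 0.2, t.size)  # synthetic "observations"

    thetas = rng.uniform([1.0, 0.001], [10.0, 0.05], size=(5000, 2))  # Monte Carlo sample
    sims = np.array([run_model(th, t) for th in thetas])

    nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
    behavioral = sims[nse > 0.5]                 # keep "behavioral" parameter sets

    lower, upper = np.percentile(behavioral, [5, 95], axis=0)
    coverage = np.mean((obs >= lower) & (obs <= upper))
    print(f"{behavioral.shape[0]} behavioral sets; bounds cover {coverage:.0%} of observations")
    ```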

  8. Software Safety Analysis of a Flight Guidance System

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.

    2004-01-01

    This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
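
    For readers new to the technique named here, the core of explicit-state model checking of a safety property is a reachability search over the model's state graph. The sketch below is a toy with invented states, transitions, and hazard, not the FGS requirements model (which was verified with an industrial model checker):

    ```python
    from collections import deque

    # Toy state: (lateral mode, autopilot engaged). Invented safety property:
    # the autopilot must never be engaged in go-around (GA) mode.
    INIT = ("ROLL", False)

    def successors(state):
        mode, ap = state
        if mode != "GA":                      # invented interlock: no AP toggle in GA
            yield (mode, not ap)              # engage/disengage autopilot
        for m in ("ROLL", "HDG", "GA"):       # lateral mode changes
            yield (m, ap if m != "GA" else False)  # entering GA drops the AP

    def safe(state):
        mode, ap = state
        return not (mode == "GA" and ap)

    seen, frontier = {INIT}, deque([INIT])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            print("counterexample state reached:", s)
            break
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    else:
        print(f"safety property holds over {len(seen)} reachable states")
    ```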

  9. Space shuttle prototype check valve development

    NASA Technical Reports Server (NTRS)

    Tellier, G. F.

    1976-01-01

    Contaminant-resistant seal designs and a dynamically stable prototype check valve for the orbital maneuvering and reaction control helium pressurization systems of the space shuttle were developed. Polymer and carbide seal models were designed and tested. Perfluoroelastomers compatible with N2O4 and N2H4-type propellants were evaluated and compared with Teflon in flat and captive seal models. Low load sealing and contamination resistance tests demonstrated cutter seal superiority over polymer seals. Ceramic and carbide materials were evaluated for N2O4 service using exposure to RFNA as a worst-case screen; chemically vapor deposited tungsten carbide was shown to be impervious to the acid after 6 months of immersion. A unique carbide shell poppet/cutter seat check valve was designed and tested to demonstrate low cracking pressure (< 2.0 psid), dynamic stability under all test bench flow conditions, contamination resistance (0.001 inch CRES wires cut with 1.5 pound seat load) and long life of 100,000 cycles (leakage < 1.0 scc/hr helium from 0.1 to 400 psig).

  10. Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization

    DTIC Science & Technology

    2017-06-01

    solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end

  11. Perturbative unitarity constraints on gauge portals

    DOE PAGES

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    2017-10-03

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find an upper bound of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.
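
    The constraint type used here is standard: decomposing a 2 → 2 amplitude into partial waves a_J, perturbative unitarity requires

    ```latex
    \left|\,\mathrm{Re}\,a_{J}\,\right| \;\le\; \frac{1}{2}
    ```

    Tree-level annihilation amplitudes grow with the dark-sector couplings and masses, so imposing this inequality on the leading partial wave is what translates into the quoted mass bounds.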

  12. Perturbative unitarity constraints on gauge portals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Hedri, Sonia; Shepherd, William; Walker, Devin G. E.

    Dark matter that was once in thermal equilibrium with the Standard Model is generally prohibited from obtaining all of its mass from the electroweak phase transition. This implies a new scale of physics and mediator particles to facilitate dark matter annihilation. In this work, we focus on dark matter that annihilates through a generic gauge boson portal. We show how partial wave unitarity places upper bounds on the dark gauge boson, dark Higgs and dark matter masses. Outside of well-defined fine-tuned regions, we find an upper bound of 9 TeV for the dark matter mass when the dark Higgs and dark gauge bosons both facilitate the dark matter annihilations. In this scenario, the upper bounds on the dark Higgs and dark gauge boson masses are 10 TeV and 16 TeV, respectively. When only the dark gauge boson facilitates dark matter annihilations, we find upper bounds of 3 TeV and 6 TeV for the dark matter and dark gauge boson, respectively. Overall, using the gauge portal as a template, we describe a method to not only place upper bounds on the dark matter mass but also on the new particles with Standard Model quantum numbers. Here, we briefly discuss the reach of future accelerator, direct and indirect detection experiments for this class of models.

  13. Limits on estimating the width of thin tubular structures in 3D images.

    PubMed

    Wörz, Stefan; Rohr, Karl

    2006-01-01

    This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory, we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, through experimental investigation we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
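
    For context, the generic form of this bound for a deterministic 3D intensity model \(g_w(\mathbf{x})\) observed under additive white Gaussian noise of variance \(\sigma_n^2\) is sketched below; the paper's closed-form expression for tubular structures specializes this, and the notation here is illustrative:

        \[
          \operatorname{Var}(\hat{w}) \;\ge\; I(w)^{-1},
          \qquad
          I(w) = \frac{1}{\sigma_n^2} \int_{\mathbb{R}^3} \left( \frac{\partial g_w(\mathbf{x})}{\partial w} \right)^{2} d\mathbf{x},
        \]

    where \(\hat{w}\) is any unbiased estimator of the width \(w\) and \(I(w)\) is the Fisher information the image carries about \(w\).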

  14. Birth/death process model

    NASA Technical Reports Server (NTRS)

    Solloway, C. B.; Wakeland, W.

    1976-01-01

    First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
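
    As a minimal sketch of the kind of model described (not the original interactive implementation; rates, defaults, and names are hypothetical), a first-order Markov birth/death simulation with input error checking might look like:

        # Toy first-order Markov birth/death population model with input
        # checking and default parameters; all values are hypothetical.
        import random

        def simulate(pop=100, birth_rate=0.02, death_rate=0.01, steps=50, seed=None):
            """Each step depends only on the current population (first-order Markov)."""
            if pop < 0 or not (0 <= birth_rate <= 1 and 0 <= death_rate <= 1):
                raise ValueError("population must be >= 0; rates must lie in [0, 1]")
            rng = random.Random(seed)
            history = [pop]
            for _ in range(steps):
                births = sum(rng.random() < birth_rate for _ in range(pop))
                deaths = sum(rng.random() < death_rate for _ in range(pop))
                pop = max(pop + births - deaths, 0)
                history.append(pop)
            return history

        print(simulate(seed=1)[-1])  # final population after 50 steps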

  15. Confidence set inference with a prior quadratic bound

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In the uniqueness part of a geophysical inverse problem, the observer wants to predict all likely values of P unknown numerical properties z = (z_1,...,z_P) of the earth from measurement of D other numerical properties y(0) = (y_1(0),...,y_D(0)) and knowledge of the statistical distribution of the random errors in y(0). The data space Y containing y(0) is D-dimensional, so when the model space X is infinite-dimensional the linear uniqueness problem usually is insoluble without prior information about the correct earth model x. If that information is a quadratic bound on x (e.g., energy or dissipation rate), Bayesian inference (BI) and stochastic inversion (SI) inject spurious structure into x, implied by neither the data nor the quadratic bound. Confidence set inference (CSI) provides an alternative inversion technique free of this objection. CSI is illustrated in the problem of estimating the geomagnetic field B at the core-mantle boundary (CMB) from components of B measured on or above the earth's surface. Neither the heat flow nor the energy bound is strong enough to permit estimation of B(r) at single points on the CMB, but the heat flow bound permits estimation of uniform averages of B(r) over discs on the CMB, and both bounds permit weighted disc averages with continuous weighting kernels. Both bounds also permit estimation of low-degree Gauss coefficients at the CMB. The heat flow bound resolves them up to degree 8 if the crustal field at satellite altitudes must be treated as a systematic error, but can resolve to degree 11 under the most favorable statistical treatment of the crust. These two limits produce circles of confusion on the CMB with diameters of 25 deg and 19 deg, respectively.
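
    Schematically, and in illustrative notation rather than the paper's own, CSI reports the range of each predicted property over the set of models consistent with both the data and the prior quadratic bound:

        \[
          K = \left\{ x \in X \,:\, \left\| y(0) - F x \right\| \le \delta_\alpha
          \ \text{ and }\ \langle x, Q x \rangle \le q \right\},
          \qquad
          z_i \in \left[ \min_{x \in K} G_i x, \; \max_{x \in K} G_i x \right],
        \]

    where F is the forward map from models to data, \(\delta_\alpha\) reflects the error statistics at confidence level \(1 - \alpha\), Q encodes the quadratic (e.g., energy or heat flow) bound, and \(G_i\) is the linear functional predicting \(z_i\). Unlike BI and SI, no structure beyond membership in K is imposed on x.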

  16. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
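
    As a minimal sketch of the feasibility-checking idea described above (the encoding and API are illustrative, not the cited tool's), a conjunction of linear constraints over symbolic event-time parameters can be tested for consistency by solving a zero-objective linear program:

        # Consistency check for a conjunction of linear constraints
        # A_ub @ t <= b_ub over nonnegative symbolic time parameters t,
        # via a zero-objective linear program (LP feasibility).
        from scipy.optimize import linprog

        def consistent(A_ub, b_ub):
            """Return True iff {t >= 0 : A_ub @ t <= b_ub} is non-empty."""
            n = len(A_ub[0])
            res = linprog(c=[0.0] * n, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * n)
            return res.status == 0  # 0 = solved, 2 = infeasible

        # t1 + t2 <= 10 and t1 >= 3: consistent.
        print(consistent([[1, 1], [-1, 0]], [10, -3]))                # True
        # Adding t1 >= 6 and t2 >= 6 contradicts t1 + t2 <= 10.
        print(consistent([[1, 1], [-1, 0], [0, -1]], [10, -6, -6]))   # False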

  17. Generalized Skyrme model with the loosely bound potential

    NASA Astrophysics Data System (ADS)

    Gudnason, Sven Bjarke; Zhang, Baiyang; Ma, Nana

    2016-12-01

    We study a generalization of the loosely bound Skyrme model which consists of the Skyrme model with a sixth-order derivative term—motivated by its fluidlike properties—and the second-order loosely bound potential—motivated by lowering the classical binding energies of higher-charged Skyrmions. We use the rational map approximation for the Skyrmion of topological charge B =4 , calculate the binding energy of the latter, and estimate the systematic error in using this approximation. In the parameter space that we can explore within the rational map approximation, we find classical binding energies as low as 1.8%, and once taking into account the contribution from spin-isospin quantization, we obtain binding energies as low as 5.3%. We also calculate the contribution from the sixth-order derivative term to the electric charge density and axial coupling.
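
    For context, the classical binding-energy percentages quoted above are relative binding energies of the charge-B Skyrmion with respect to B well-separated charge-one Skyrmions; in standard (illustrative) notation,

        \[
          \delta_B = \frac{B\,E_1 - E_B}{B\,E_1} \times 100\%, \qquad B = 4,
        \]

    so a classical value of 1.8% means the B = 4 solution lies 1.8% below four times the single-Skyrmion energy, approaching the experimental helium-4 binding energy of under one percent that motivates loosely bound Skyrme models.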

  18. Cryptography in the Bounded-Quantum-Storage Model

    NASA Astrophysics Data System (ADS)

    Schaffner, Christian

    2007-09-01

    This thesis initiates the study of cryptographic protocols in the bounded-quantum-storage model. On the practical side, simple protocols for Rabin Oblivious Transfer, 1-2 Oblivious Transfer and Bit Commitment are presented. No quantum memory is required for honest players, whereas the protocols can only be broken by an adversary controlling a large amount of quantum memory. The protocols are efficient, non-interactive and can be implemented with today's technology. On the theoretical side, new entropic uncertainty relations involving min-entropy are established and used to prove the security of protocols according to new strong security definitions. For instance, in the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers, the uncertainty relation makes it possible to prove the security of QKD protocols while tolerating considerably higher error rates compared to the standard model with unbounded adversaries.

  19. Bounds on isocurvature perturbations from cosmic microwave background and large scale structure data.

    PubMed

    Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain

    2003-10-24

    We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.

  20. SU-F-T-300: Impact of Electron Density Modeling of ArcCHECK Cylindricaldiode Array On 3DVH Patient Specific QA Software Tool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patwe, P; Mhatre, V; Dandekar, P

    Purpose: 3DVH software is a patient-specific quality assurance tool which estimates the 3D dose to the patient-specific geometry with the help of the Planned Dose Perturbation algorithm. The purpose of this study is to evaluate the impact of the HU value of the ArcCHECK phantom entered in the Eclipse TPS on 3D dose & DVH QA analysis. Methods: The manufacturer of the ArcCHECK phantom provides a CT data set of the phantom & recommends considering it as a homogeneous phantom with electron density (1.19 gm/cc or 282 HU) close to PMMA. We performed this study on the Eclipse TPS (V13, VMS), a TrueBeam STx VMS Linac & the ArcCHECK phantom (SNC). Plans were generated for a 6MV photon beam, 20cm×20cm field size at isocentre & SPD (source to phantom distance) of 86.7 cm to deliver 100cGy at isocentre. 3DVH software requires the patient's DICOM data generated by the TPS & the plan delivered on the ArcCHECK phantom. Plans were generated in the TPS by assigning different HU values to the phantom. We analyzed the gamma index & the dose profile for all plans along the vertical down direction of the beam's central axis for entry, exit & isocentre dose. Results: The global gamma passing rate (2% & 2mm) for the manufacturer-recommended HU value of 282 was 96.3%. Detector entry, isocentre & detector exit doses were 1.9048 (1.9270), 1.00 (1.0199) & 0.5078 (0.527) Gy for TPS (measured), respectively. The global gamma passing rate for electron density 1.1302 gm/cc was 98.6%. Detector entry, isocentre & detector exit doses were 1.8714 (1.8873), 1.00 (0.9988) & 0.5211 (0.516) Gy for TPS (measured), respectively. Conclusion: The electron density value assigned by the manufacturer does not hold true for every user. Proper modeling of the electron density of the ArcCHECK in the TPS is essential to avoid systematic error in dose calculation for patient-specific QA.
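
    As a simplified illustration of the passing-rate metric quoted above (a 1D sketch of the global gamma index of Low et al. 1998; real QA tools such as 3DVH evaluate it on 3D dose grids), consider:

        # Simplified 1D global gamma index with the 2%/2 mm criterion
        # quoted above; illustrative only, not 3DVH's implementation.
        import numpy as np

        def gamma_pass_rate(ref, meas, positions, dose_tol=0.02, dta_mm=2.0):
            """Percentage of reference points with gamma <= 1 (global normalization)."""
            norm = dose_tol * ref.max()            # global dose-difference criterion
            passed = 0
            for x_r, d_r in zip(positions, ref):
                dd = (meas - d_r) / norm           # dose-difference term
                dta = (positions - x_r) / dta_mm   # distance-to-agreement term
                passed += np.sqrt(dd**2 + dta**2).min() <= 1.0
            return 100.0 * passed / len(ref)

        x = np.linspace(0, 100, 201)               # positions in mm
        ref = np.exp(-((x - 50) / 20) ** 2)        # reference dose profile
        meas = 1.01 * ref                          # measurement scaled by 1%
        print(f"{gamma_pass_rate(ref, meas, x):.1f}% passing")  # 100.0% passing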
