Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2004-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
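For the class of systems described above, whose behavior is a finite set of scenarios, the scenario-to-model transformation can be illustrated with a minimal sketch. The code below is illustrative only, not the authors' method (which produces a provably equivalent formal model suitable for code generation); the event names are hypothetical. It folds a finite set of event-sequence scenarios into a prefix-tree automaton, a simple finite-state model obtained mechanically from the scenarios:

```python
# Illustrative sketch only (not the papers' method): fold a finite set of
# scenarios (event sequences) into a deterministic prefix-tree automaton,
# showing how a scenario set mechanically induces a finite-state model.

def scenarios_to_automaton(scenarios):
    """Build a prefix-tree acceptor; each state is a dict event -> next state."""
    root, states, accepting = {}, [], set()
    states.append(root)
    for scenario in scenarios:
        node = root
        for event in scenario:
            if event not in node:          # create a fresh state on first use
                node[event] = {}
                states.append(node[event])
            node = node[event]
        accepting.add(id(node))            # scenario end = accepting state
    return root, states, accepting

# Hypothetical scenarios for a simple fault-handling requirement:
scenarios = [("detect_fault", "isolate", "recover"),
             ("detect_fault", "isolate", "safe_mode")]
root, states, accepting = scenarios_to_automaton(scenarios)
print(len(states), "states,", len(accepting), "accepting")   # 5 states, 2 accepting
```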
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
Systems, methods and apparatus for pattern matching in procedure development and verification
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, a formal specification is pattern-matched from scenarios, the formal specification is analyzed, and flaws in the formal specification are corrected. The systems, methods and apparatus may include pattern-matching an equivalent formal model from an informal specification. Such a model can be analyzed for contradictions, conflicts, use of resources before the resources are available, competition for resources, and so forth. From such a formal model, an implementation can be automatically generated in a variety of notations. The approach can improve the resulting implementation, which, in some embodiments, is provably equivalent to the procedures described at the outset; this in turn can improve confidence that the system reflects the requirements, reduce system development time, and reduce the amount of testing required of a new system. Moreover, in some embodiments, two or more implementations can be "reversed" to appropriate formal models, the models can be combined, and the resulting combination checked for conflicts. Then, the combined, error-free model can be used to generate a new (single) implementation that combines the functionality of the original separate implementations, and may be more likely to be correct.
Formal Requirements-Based Programming for Complex Systems
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis
2005-01-01
Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission, currently under study and preliminary formulation at NASA Goddard Space Flight Center.
Chen, Wen; Yu, Chao; Dong, Danan; Cai, Miaomiao; Zhou, Feng; Wang, Zhiren; Zhang, Lei; Zheng, Zhengqi
2017-02-20
With multi-antenna synchronized global navigation satellite system (GNSS) receivers, the single difference (SD) between two antennas is able to eliminate both satellite and receiver clock error; it thus becomes necessary to reconsider the equivalency problem between the SD and double difference (DD) models. In this paper, we quantitatively compared the formal uncertainties and dispersions between multiple SD models and the DD model, and also carried out static and kinematic short baseline experiments. The theoretical and experimental results show that under a non-common clock scheme the SD and DD models are equivalent. Under a common clock scheme, if we estimate stochastic uncalibrated phase delay (UPD) parameters every epoch, this SD model is still equivalent to the DD model, but if we estimate only one UPD parameter for all epochs or take it as a known constant, the SD (here called SD2) and DD models are no longer equivalent. For the vertical component of baseline solutions, the formal uncertainties of the SD2 model are smaller than those of the DD model by a factor of two, and the dispersions of the SD2 model are smaller by more than a factor of two. In addition, to obtain baseline solutions, the SD2 model requires a minimum of three satellites, while the DD model requires a minimum of four satellites, which makes the SD2 model more advantageous for attitude determination in sheltered environments.
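For readers unfamiliar with the notation, the combinations being compared are the standard carrier-phase differences; the textbook forms below are given for orientation and are not quoted from the paper:

```latex
% Between-antenna single difference (SD) for satellite j: the satellite clock
% error cancels; \Delta\delta t_{AB} is the between-receiver clock term.
\Delta\phi^{j}_{AB} = \phi^{j}_{A} - \phi^{j}_{B}
  = \lambda^{-1}\Delta\rho^{j}_{AB} + f\,\Delta\delta t_{AB}
    + \Delta N^{j}_{AB} + \Delta\epsilon^{j}_{AB}
% Under a common clock, \Delta\delta t_{AB} degenerates into a slowly varying
% uncalibrated phase delay (UPD). The double difference (DD) across satellites
% j,k removes the receiver clock term as well:
\nabla\Delta\phi^{jk}_{AB} = \lambda^{-1}\nabla\Delta\rho^{jk}_{AB}
    + \nabla\Delta N^{jk}_{AB} + \nabla\Delta\epsilon^{jk}_{AB}
```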
Formal and physical equivalence in two cases in contemporary quantum physics
NASA Astrophysics Data System (ADS)
Fraser, Doreen
2017-08-01
The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation, a mathematical transformation that takes the time variable t to negative imaginary time (t → -it), was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible. That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.
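The transformation described is the standard Wick rotation; in schematic textbook form (not notation quoted from the paper):

```latex
% Analytic continuation to imaginary time (Wick rotation):
t \;\to\; -i\tau
% Oscillatory Minkowski path integrals become damped Euclidean ones,
% which is what makes lattice and constructive methods tractable:
\int \mathcal{D}\phi\; e^{\,i S_{M}[\phi]}
  \;\longrightarrow\;
\int \mathcal{D}\phi\; e^{-S_{E}[\phi]}
```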
Can Non-Formal Education Keep Working Children in School? A Case Study from Punjab, India
ERIC Educational Resources Information Center
Sud, Pamela
2010-01-01
This paper analyses the effectiveness of non-formal schools for working children in Jalandhar, Punjab, India, in mainstreaming child labourers into the formal education system through incentivised, informal schooling. Using a family fixed effects model and sibling data as an equivalent population comparison group, I find that the non-formal…
Commentary on "Idiographic Filters for Psychological Constructs"
ERIC Educational Resources Information Center
Molenaar, Peter C. M.
2009-01-01
Definitions of measurement equivalence in terms of invariance can be specified at a very general formal level. Presently, however, only the operationalization of measurement equivalence in terms of factor models is at stake. In this article, the author discusses the proposed alternative definition in terms of idiographic filters in the context of…
On the equivalence among stress tensors in a gauge-fluid system
NASA Astrophysics Data System (ADS)
Mitra, Arpan Krishna; Banerjee, Rabin; Ghosh, Subir
2017-12-01
In this paper, we bring out the subtleties involved in the study of a first-order relativistic field theory with auxiliary field variables playing an essential role. In particular, we discuss the nonisentropic Eulerian (or Hamiltonian) fluid model. Interactions are introduced by coupling the fluid to a dynamical Maxwell (U(1)) gauge field. This dynamical nature of the gauge field is crucial in showing the equivalence, on the physical subspace, of the stress tensor derived from two definitions, i.e. the canonical (Noether) one and the symmetric one. In the conventional equal-time formalism, we have shown that the generators of the space-time transformations obtained from these two definitions agree modulo the Gauss constraint. This equivalence in the physical sector has been achieved only because of the dynamical nature of the gauge fields. Subsequently, we have explicitly demonstrated the validity of the Schwinger condition. A detailed analysis of the model in lightcone formalism has also been done where several interesting features are revealed.
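The two definitions of the stress tensor being compared are, in standard textbook form (the paper's fluid-specific expressions are not reproduced here):

```latex
% Canonical (Noether) stress tensor of a Lagrangian density \mathcal{L}:
T^{\mu\nu}_{\mathrm{can}}
  = \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,\partial^{\nu}\phi
    - \eta^{\mu\nu}\mathcal{L}
% Symmetric (Hilbert) stress tensor, from varying the metric:
T^{\mu\nu}_{\mathrm{sym}}
  = \left.\frac{2}{\sqrt{-g}}\,\frac{\delta S}{\delta g_{\mu\nu}}\right|_{g=\eta}
% The paper's result: on the physical subspace the generators built from the
% two tensors agree modulo the Gauss constraint of the dynamical gauge field.
```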
Large deviations in the presence of cooperativity and slow dynamics
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-06-01
We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
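The underlying definitions are the standard ones of dynamical large-deviation theory (textbook forms, not the paper's specific models). For a time-integrated observable A over trajectories of duration T:

```latex
% Large-deviation principle for a dynamical observable A:
P\!\left(A/T = a\right) \asymp e^{-T\,I(a)}
% Rate function via Legendre-Fenchel transform of the scaled cumulant
% generating function; singularities of \lambda(s) are the "singularities
% in the formalism" referred to above:
\lambda(s) = \lim_{T\to\infty}\frac{1}{T}\,\ln\!\left\langle e^{-s A}\right\rangle ,
\qquad
I(a) = \sup_{s}\,\bigl[-s\,a - \lambda(s)\bigr]
```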
NASA Technical Reports Server (NTRS)
Fennelly, A. J.
1981-01-01
The TH epsilon mu formalism, used in analyzing equivalence principle experiments of metric and nonmetric gravity theories, is adapted to the description of the electroweak interaction using the Weinberg-Salam unified SU(2) x U(1) model. The use of the TH epsilon mu formalism is thereby extended to the weak interactions, showing how the gravitational field affects W_mu^(±) and Z_mu^(0) boson propagation and the rates of interactions mediated by them. The possibility of a similar extension to the strong interactions via SU(5) grand unified theories is briefly discussed. Also, using the effects of the potentials on the baryon and lepton wave functions, the effects of gravity on weak-interaction-mediated transitions in high-A atoms that are electromagnetically forbidden are derived. Three technologically feasible experiments to test the equivalence principle in the presence of the weak interactions are then briefly outlined: (1) K-capture by the Fe nucleus (counting the emitted X-ray); (2) forbidden absorption transitions in high-A atoms' vapor; and (3) counting the relative beta-decay rates in a suitable alpha-beta decay chain, assuming the strong interactions obey the equivalence principle.
Main factors in E-Learning for the Equivalency Education Program (E-LEEP)
NASA Astrophysics Data System (ADS)
Yel, M. B.; Sfenrianto
2018-03-01
There is a tremendous learning gap between formal education and non-formal education. E-Learning can facilitate non-formal education learners in improving the learning process. In this study, we present the main factors behind the E-Learning for the Equivalency Education Program (E-LEEP) initiative in Indonesia. Four main factors are proposed, namely: standardization, learning materials, learning process, and learners' characteristics. The factors support each other to achieve the learning process of E-LEEP in Indonesia. Although not yet proven, E-Learning for non-formal education should be developed following these main factors, because they can improve the quality of E-Learning for the Equivalency Education Program.
A Process Algebraic Approach to Software Architecture Design
NASA Astrophysics Data System (ADS)
Aldini, Alessandro; Bernardo, Marco; Corradini, Flavio
Process algebra is a formal tool for the specification and the verification of concurrent and distributed systems. It supports compositional modeling through a set of operators able to express concepts like sequential composition, alternative composition, and parallel composition of action-based descriptions. It also supports mathematical reasoning via a two-level semantics, which formalizes the behavior of a description by means of an abstract machine obtained from the application of structural operational rules and then introduces behavioral equivalences able to relate descriptions that are syntactically different. In this chapter, we present the typical behavioral operators and operational semantic rules for a process calculus in which no notion of time, probability, or priority is associated with actions. Then, we discuss the three most studied approaches to the definition of behavioral equivalences - bisimulation, testing, and trace - and we illustrate their congruence properties, sound and complete axiomatizations, modal logic characterizations, and verification algorithms. Finally, we show how these behavioral equivalences and some of their variants are related to each other on the basis of their discriminating power.
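As a concrete illustration of one of these equivalences (a generic sketch, not code from the chapter), strong bisimilarity on a finite labeled transition system can be computed by naive partition refinement; the classic pair a.(b + c) versus a.b + a.c shows two trace-equivalent but non-bisimilar processes:

```python
# Generic sketch: strong bisimulation classes of a finite labeled transition
# system via naive partition refinement (split blocks until stable).

def bisimulation_classes(states, trans):
    """states: iterable of names; trans: set of (source, label, target)."""
    partition = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        def signature(s):   # labels and target blocks reachable in one step
            return frozenset((a, block_of[t]) for (p, a, t) in trans if p == s)
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):   # no block split: fixed point
            return refined
        partition = refined

# a.(b + c) rooted at p0 versus a.b + a.c rooted at q0:
trans = {("p0", "a", "p1"), ("p1", "b", "p2"), ("p1", "c", "p3"),
         ("q0", "a", "q1"), ("q0", "a", "q2"),
         ("q1", "b", "q3"), ("q2", "c", "q4")}
states = {s for (p, _, t) in trans for s in (p, t)}
classes = bisimulation_classes(states, trans)
print(any({"p0", "q0"} <= b for b in classes))   # False: not bisimilar
```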
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Schiavina, Michele
2017-02-01
This note describes the restoration of time in one-dimensional parameterization-invariant (hence timeless) models, namely, the classically equivalent Jacobi action and gravity coupled to matter. It also serves as a timely introduction by examples to the classical and quantum BV-BFV formalism as well as to the AKSZ method.
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of the Szegedy model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
A Tool for Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John
2005-01-01
Absent a general method for mathematically sound, automated transformation of customer requirements into a formal model of the desired system, developers must resort to either manual application of formal methods or to system testing (either manual or automated). While formal methods have afforded numerous successes, they present serious issues, e.g., costs to gear up to apply them (time, expensive staff), and scalability and reproducibility when standards in the field are not settled. The testing path cannot be walked to the ultimate goal, because exhaustive testing is infeasible for all but trivial systems. So system verification remains problematic. System or requirements validation is similarly problematic. The alternatives available today depend on either having a formal model or pursuing enough testing to enable the customer to be certain that system behavior meets requirements. The testing alternative for non-trivial systems always leaves some system behaviors unconfirmed and therefore is not the answer. To ensure that a formal model is equivalent to the customer's requirements necessitates that the customer somehow fully understands the formal model, which is not realistic. The predominant view that provably correct system development depends on having a formal model of the system leads to a desire for a mathematically sound method to automate the transformation of customer requirements into a formal model. Such a method, an augmentation of requirements-based programming, will be briefly described in this paper, and a prototype tool to support it will be described. The method and tool enable both requirements validation and system verification for the class of systems whose behavior can be described as scenarios. An application of the tool to a prototype automated ground control system for a NASA mission is presented.
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
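The linear-system structure referred to is that of an absorbing Markov chain; a toy numerical sketch (invented transition numbers, not the paper's atmosphere model) shows the mechanics:

```python
# Toy sketch of the Markov-chain view: photon "states" are scattering
# configurations, escape/absorption are absorbing outcomes, and outcome
# probabilities solve a linear system instead of requiring Monte Carlo runs.
import numpy as np

# Q[i, j]: probability that a photon in transient state i scatters to state j.
Q = np.array([[0.2, 0.5],
              [0.4, 0.1]])
# R[i, k]: probability of leaving transient state i to absorbing outcome k
# (k = 0: escapes upward, i.e. diffuse reflection; k = 1: absorbed).
R = np.array([[0.2, 0.1],
              [0.3, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visit counts
B = N @ R                          # outcome probabilities per starting state
print(B, B.sum(axis=1))            # each row sums to 1
```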
Field-antifield and BFV formalisms for quadratic systems with open gauge algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nirov, K.S.; Razumov, A.V.
1992-09-20
In this paper the Lagrangian field-antifield (BV) and Hamiltonian (BFV) BRST formalisms for the general quadratic systems with open gauge algebra are considered. The equivalence between the Lagrangian and Hamiltonian formalisms is proven.
An alternative Biot's displacement formulation for porous materials.
Dazel, Olivier; Brouard, Bruno; Depollier, Claude; Griffiths, Stéphane
2007-06-01
This paper proposes an alternative displacement formulation of Biot's linear model for poroelastic materials. Its advantage is a simplification of the formalism without making any additional assumptions. The main difference between the method proposed in this paper and the original one is the choice of the generalized coordinates. In the present approach, the generalized coordinates are chosen in order to simplify the expression of the strain energy, which is expressed as the sum of two decoupled terms. Hence, new equations of motion are obtained whose elastic forces are decoupled. The simplification of the formalism is extended to Biot and Willis thought experiments, and simpler expressions of the parameters of the three Biot waves are also provided. A rigorous derivation of equivalent and limp models is then proposed. It is finally shown that, for the particular case of sound-absorbing materials, additional simplifications of the formalism can be obtained.
On the Equivalence of Formal Grammars and Machines.
ERIC Educational Resources Information Center
Lund, Bruce
1991-01-01
Explores concepts of formal language and automata theory underlying computational linguistics. A computational formalism is described known as a "logic grammar," with which computational systems process linguistic data, with examples in declarative and procedural semantics and definite clause grammars. (13 references) (CB)
NASA Astrophysics Data System (ADS)
Astorino, Maria Denise; Frezza, Fabrizio; Tedeschi, Nicola
2018-03-01
The analysis of the transmission and reflection spectra of stacked slot-based 2D periodic structures of arbitrary geometry and the ability to devise and control their electromagnetic responses have been a matter of extensive research for many decades. The purpose of this paper is to develop an equivalent Π circuit model based on the transmission-line theory and Floquet harmonic interactions, for broadband and short longitudinal period analysis. The proposed circuit model overcomes the limits of identical and symmetrical configurations imposed by the even/odd excitation approach, exploiting both the circuit topology of a single 2D periodic array of apertures and the ABCD matrix formalism. The transmission spectra obtained through the equivalent-circuit model have been validated by comparison with full-wave simulations carried out with a finite-element commercial electromagnetic solver. This allowed for a physical insight into the spectral and angular responses of multilayer devices with arbitrary aperture shapes, guaranteeing a noticeable saving of computational resources.
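The ABCD-matrix bookkeeping behind such a model can be sketched compactly; the sketch below is illustrative (a generic screen/spacer/screen cascade with invented element values), not the paper's Π-circuit:

```python
# Illustrative sketch: cascade two-port ABCD matrices for stacked periodic
# screens separated by spacers, then convert to a transmission coefficient.
import numpy as np

Z0 = 376.73                    # free-space wave impedance (ohm), normal incidence

def shunt(Y):                  # ABCD matrix of a shunt admittance (one screen)
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def spacer(beta_l, Zc=Z0):     # ABCD matrix of a spacer, electrical length beta_l
    return np.array([[np.cos(beta_l), 1j * Zc * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / Zc, np.cos(beta_l)]])

Y_screen = 1j * 0.005          # hypothetical equivalent screen admittance (S)
A, B, C, D = (shunt(Y_screen) @ spacer(np.pi / 4) @ shunt(Y_screen)).ravel()
S21 = 2.0 / (A + B / Z0 + C * Z0 + D)   # standard ABCD -> S21 conversion
print(abs(S21) ** 2)                    # transmitted power fraction
```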
Classroom Activities for Introducing Equivalence Relations
ERIC Educational Resources Information Center
Brandt, Jim
2013-01-01
Equivalence relations and partitions are two interconnected ideas that play important roles in advanced mathematics. While students encounter the informal notion of equivalence in many courses, the formal definition of an equivalence relation is typically introduced in a junior level transition-to-proof course. This paper reports the results of a…
NASA Astrophysics Data System (ADS)
Maneval, Daniel; Bouchard, Hugo; Ozell, Benoît; Després, Philippe
2018-01-01
The equivalent restricted stopping power formalism is introduced for proton mean energy loss calculations under the continuous slowing down approximation. The objective is the acceleration of Monte Carlo dose calculations by allowing larger steps while preserving accuracy. The fractional energy loss per step length ε was obtained with a secant method and a Gauss-Kronrod quadrature estimation of the integral equation relating the mean energy loss to the step length. The midpoint rule of the Newton-Cotes formulae was then used to solve this equation, allowing the creation of a lookup table linking ε to the equivalent restricted stopping power L_eq, used here as a key physical quantity. The mean energy loss for any step length was simply defined as the product of the step length with L_eq. Proton inelastic collisions with electrons were added to GPUMCD, a GPU-based Monte Carlo dose calculation code. The proton continuous slowing-down was modelled with the L_eq formalism. GPUMCD was compared to Geant4 in a validation study where ionization processes alone were activated and a voxelized geometry was used. The energy straggling was first switched off to validate the L_eq formalism alone. Dose differences between Geant4 and GPUMCD were smaller than 0.31% for the L_eq formalism. The mean error and the standard deviation were below 0.035% and 0.038%, respectively. 99.4 to 100% of GPUMCD dose points were consistent with a 0.3% dose tolerance. GPUMCD 80% falloff positions (R_80) matched Geant4's R_80 within 1 μm. With the energy straggling, dose differences were below 2.7% in the Bragg peak falloff and smaller than 0.83% elsewhere. The R_80 positions matched within 100 μm. The overall computation times to transport one million protons with GPUMCD were 31-173 ms. Under similar conditions, Geant4 computation times were 1.4-20 h. The L_eq formalism led to an intrinsic efficiency gain factor ranging between 30-630, increasing with the prescribed accuracy of simulations. The L_eq formalism allows larger steps, leading to O(constant) algorithmic time complexity. It significantly accelerates Monte Carlo proton transport while preserving accuracy. It therefore constitutes a promising variance reduction technique for computing proton dose distributions in a clinical context.
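The relation between step length, fractional energy loss and L_eq can be sketched numerically. The sketch below is illustrative only: the stopping power is a toy function, brentq stands in for the paper's secant method, and scipy's quad (an adaptive Gauss-Kronrod scheme) estimates the integral:

```python
# Sketch of the L_eq idea under the continuous slowing down approximation:
# find the fractional loss eps with  s = \int_{E(1-eps)}^{E} dE'/S(E'),
# then L_eq(E, s) = eps*E/s, so the mean loss per step is simply s * L_eq.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def S(E):                        # toy stopping power (MeV/cm), illustrative only
    return 12.0 / E + 0.1

def csda_length(E, eps):         # path length to lose a fraction eps of E
    val, _ = quad(lambda Ep: 1.0 / S(Ep), E * (1.0 - eps), E)
    return val

def eps_for_step(E, s):          # invert the integral relation for eps
    return brentq(lambda eps: csda_length(E, eps) - s, 1e-9, 0.999)

E, step = 100.0, 0.5             # MeV, cm (hypothetical values)
eps = eps_for_step(E, step)
L_eq = eps * E / step            # tabulated once, then reused for every step
print(f"eps={eps:.5f}  L_eq={L_eq:.4f} MeV/cm  loss={step * L_eq:.4f} MeV")
```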
NASA Astrophysics Data System (ADS)
Takahashi, Tatsuji; Gunji, Yukio-Pegio
2008-10-01
We pursue anticipation in the second person, or normative anticipation. As a first step, we clarify the three concepts of second person, internal measurement and asynchroneity by introducing the velocity of logic ν_l and the velocity of communication ν_c, in the context of social communication. After proving the anticipatory nature of rule-following, or language use in general, via Kripke's "rule-following paradox," we present a mathematical model expressing the internality essential to the second person, taking advantage of equivalences and differences in formal language theory. As a consequence, we show some advantages of negatively considered concepts and arguments by concretizing them into an elementary and explicit formal model. The time development of the model shows a self-organizing property which never results if we adopt a third-person stance.
Equivalence of quantum Boltzmann equation and Kubo formula for dc conductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Z.B.; Chen, L.Y.
1990-02-01
This paper presents a derivation of the quantum Boltzmann equation for linear dc transport with a correction term to Mahan-Hansch's equations and derives a formal solution to it. Based on this formal solution, the authors find that the electric conductivity can be expressed as the retarded current-current correlation. Therefore, the authors explicitly demonstrate the equivalence of the two most important theoretical methods: the quantum Boltzmann equation and the Kubo formula.
ERIC Educational Resources Information Center
Hilbig, Benjamin E.
2012-01-01
Extending the well-established negativity bias in human cognition to truth judgments, it was recently shown that negatively framed statistical statements are more likely to be considered true than formally equivalent statements framed positively. However, the underlying processes responsible for this effect are insufficiently understood.…
49 CFR Appendix E to Part 40 - SAP Equivalency Requirements for Certification Organizations
Code of Federal Regulations, 2011 CFR
2011-10-01
... formal education, in-service training, and professional development courses. Part of any professional counselor's development is participation in formal and non-formal education opportunities within the field... is important if the individual is to be considered a professional in the field of alcohol and drug...
Open-Closed Homotopy Algebras and Strong Homotopy Leibniz Pairs Through Koszul Operad Theory
NASA Astrophysics Data System (ADS)
Hoefel, Eduardo; Livernet, Muriel
2012-08-01
Open-closed homotopy algebras (OCHA) and strong homotopy Leibniz pairs (SHLP) were introduced by Kajiura and Stasheff in 2004. In an appendix to their paper, Markl observed that an SHLP is equivalent to an algebra over the minimal model of a certain operad, without showing that the operad is Koszul. In the present paper, we show that both OCHA and SHLP are algebras over the minimal model of the zeroth homology of two versions of the Swiss-cheese operad and prove that these two operads are Koszul. As an application, we show that the OCHA operad is non-formal as a 2-colored operad but is formal as an algebra in the category of 2-collections.
Rigorous simulations of a helical core fiber by the use of transformation optics formalism.
Napiorkowski, Maciej; Urbanczyk, Waclaw
2014-09-22
We report for the first time on rigorous numerical simulations of a helical-core fiber by using a full vectorial method based on the transformation optics formalism. We modeled the dependence of circular birefringence of the fundamental mode on the helix pitch and analyzed the effect of a birefringence increase caused by the mode displacement induced by a core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first order modes, including polarization and intensity distribution. Finally, we show that the use of the rigorous vectorial method allows better prediction of the confinement loss of the guided modes than approximate methods based on equivalent in-plane bending models.
Formalization, equivalence and generalization of basic resonance electrical circuits
NASA Astrophysics Data System (ADS)
Penev, Dimitar; Arnaudov, Dimitar; Hinov, Nikolay
2017-12-01
This work presents basic resonant circuits that are used in resonant energy converters. The following resonant circuits are considered: series; series with a parallel loaded capacitor; parallel; and parallel with a series loaded inductance. For the circuits under consideration, expressions are derived for the natural oscillation frequencies and for the equivalence of the active power delivered to the load. The mathematical expressions are plotted and verified using computer simulations. The results obtained are used in the model-based design of resonant energy converters with DC or AC output, which guarantees the output indicators of the power electronic devices.
Equivalency Programmes (EPs) for Promoting Lifelong Learning
ERIC Educational Resources Information Center
Haddad, Caroline, Ed.
2006-01-01
Equivalency programmes (EPs) refers to alternative education programmes that are equivalent to the formal education system in terms of curriculum and certification, policy support mechanisms, mode of delivery, staff training, and other support activities such as monitoring, evaluation and assessment. The development of EPs is potentially an…
NASA Technical Reports Server (NTRS)
Windley, P. J.
1991-01-01
In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, is one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
Assessing Formal Knowledge of Math Equivalence among Algebra and Pre-Algebra Students
ERIC Educational Resources Information Center
Fyfe, Emily R.; Matthews, Percival G.; Amsel, Eric; McEldoon, Katherine L.; McNeil, Nicole M.
2018-01-01
A central understanding in mathematics is knowledge of "math equivalence," the relation indicating that 2 quantities are equal and interchangeable. Decades of research have documented elementary-school (ages 7 to 11) children's (mis)understanding of math equivalence, and recent work has developed a construct map and comprehensive…
Extension of Liouville Formalism to Postinstability Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
A mathematical formalism has been developed for predicting the postinstability motions of a dynamic system governed by a system of nonlinear equations and subject to initial conditions. Previously, there was no general method for prediction and mathematical modeling of postinstability behaviors (e.g., chaos and turbulence) in such a system. The formalism of nonlinear dynamics does not afford means to discriminate between stable and unstable motions: an additional stability analysis is necessary for such discrimination. However, an additional stability analysis does not suggest any modifications of a mathematical model that would enable the model to describe postinstability motions efficiently. The most important type of instability that necessitates a postinstability description is associated with positive Lyapunov exponents. Such an instability leads to exponential growth of small errors in initial conditions or, equivalently, exponential divergence of neighboring trajectories. The development of the present formalism was undertaken in an effort to remove positive Lyapunov exponents. The means chosen to accomplish this is coupling of the governing dynamical equations with the corresponding Liouville equation that describes the evolution of the flow of error probability. The underlying idea is to suppress the divergences of different trajectories that correspond to different initial conditions, without affecting a target trajectory, which is one that starts with prescribed initial conditions.
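The Liouville equation referred to is the standard continuity equation for the probability density ρ(x, t) transported by the flow ẋ = F(x) (textbook form, not reproduced from the report):

```latex
% Evolution of the error-probability density carried by \dot{x} = F(x):
\frac{\partial \rho}{\partial t} + \nabla \cdot \left(\rho\, F\right) = 0
% Coupling F back to \rho is what suppresses the exponential divergence of
% neighboring trajectories (the positive Lyapunov exponents) while leaving
% the target trajectory unchanged.
```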
Black holes as quantum gravity condensates
NASA Astrophysics Data System (ADS)
Oriti, Daniele; Pranzetti, Daniele; Sindoni, Lorenzo
2018-03-01
We model spherically symmetric black holes within the group field theory formalism for quantum gravity via generalized condensate states, involving sums over arbitrarily refined graphs (dual to three-dimensional triangulations). The construction relies heavily on both the combinatorial tools of random tensor models and the quantum geometric data of loop quantum gravity, both part of the group field theory formalism. Armed with the detailed microscopic structure, we compute the entropy associated with the black hole horizon, which turns out to be equivalently the Boltzmann entropy of its microscopic degrees of freedom and the entanglement entropy between the inside and outside regions. We recover the area law under very general conditions, as well as the Bekenstein-Hawking formula. The result is also shown to be generically independent of any specific value of the Immirzi parameter.
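The formula recovered is the standard Bekenstein-Hawking entropy, quoted here in textbook form:

```latex
% Horizon entropy proportional to the horizon area A (area law):
S_{BH} = \frac{k_B\, c^{3} A}{4 G \hbar} = \frac{k_B\, A}{4\,\ell_P^{2}},
\qquad \ell_P^{2} = \frac{G\hbar}{c^{3}}
```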
Theory and experiment in gravitational physics
NASA Technical Reports Server (NTRS)
Will, C. M.
1981-01-01
New technological advances have made it feasible to conduct measurements with precision levels which are suitable for experimental tests of the theory of general relativity. This book has been designed to fill a new need for a complete treatment of techniques for analyzing gravitation theory and experience. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.
NASA Astrophysics Data System (ADS)
Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel
In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence among physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows the circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, it holds that such a numerical approach allows Josephson differential models to be solved computationally quickly. An empirical application regarding the study of the Josephson model completes the paper.
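The junction dynamics underlying such simulations is the standard RCSJ (resistively and capacitively shunted junction) equation for the gauge-invariant phase difference φ; each term maps naturally to a circuit branch, which is what the network-equivalence approach exploits (textbook form, not the paper's notation):

```latex
% RCSJ model: capacitive, resistive and Josephson branches in parallel,
% driven by a bias current I(t); chaos appears under suitable ac drive.
\frac{\hbar C}{2e}\,\ddot{\varphi} + \frac{\hbar}{2eR}\,\dot{\varphi}
  + I_c \sin\varphi = I(t)
```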
Higgs potential from derivative interactions
NASA Astrophysics Data System (ADS)
Quadri, A.
2017-06-01
A formulation of the linear σ model with derivative interactions is studied. The classical theory is on-shell equivalent to the σ model with the standard quartic Higgs potential. The mass of the scalar mode only appears in the quadratic part and not in the interaction vertices, unlike in the ordinary formulation of the theory. Renormalization of the model is discussed. A nonpower-counting renormalizable extension, obeying the defining functional identities of the theory, is presented. This extension is physically equivalent to the tree-level inclusion of a dimension-six effective operator ∂μ(Φ†Φ)∂μ(Φ†Φ). The resulting UV divergences are arranged in a perturbation series around the power-counting renormalizable theory. The application of the formalism to the Standard Model in the presence of the dimension-six operator ∂μ(Φ†Φ)∂μ(Φ†Φ) is discussed.
NASA Astrophysics Data System (ADS)
Das, Anusheela; Chaudhury, Srabanti
2015-11-01
Metal nanoparticles are heterogeneous catalysts and have a multitude of non-equivalent, catalytic sites on the nanoparticle surface. The product dissociation step in such reaction schemes can follow multiple pathways. Proposed here for the first time is a completely analytical theoretical framework, based on the first passage time distribution, that incorporates the effect of heterogeneity in nanoparticle catalysis explicitly by considering multiple, non-equivalent catalytic sites on the nanoparticle surface. Our results show that in nanoparticle catalysis, the effect of dynamic disorder is manifested even at limiting substrate concentrations in contrast to an enzyme that has only one well-defined active site.
Ensuring Cross-Cultural Equivalence in Translation of Research Consents and Clinical Documents
Lee, Cheng-Chih; Li, Denise; Arai, Shoshana; Puntillo, Kathleen
2010-01-01
The aim of this article is to describe a formal process used to translate research study materials from English into traditional Chinese characters. This process may be useful for translating documents for use by both research participants and clinical patients. A modified Brislin model was used as the systematic translation process. Four bilingual translators were involved, and a Flaherty 3-point scale was used to evaluate the translated documents. The linguistic discrepancies that arise in the process of ensuring cross-cultural congruency or equivalency between the two languages are presented to promote the development of patient-accessible cross-cultural documents. PMID:18948451
On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations
NASA Astrophysics Data System (ADS)
Batalin, I. A.; Tyutin, I. V.
The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation being treated perturbatively possesses a local formal solution.
On isometry anomalies in minimal 𝒩 = (0,1) and 𝒩 = (0,2) sigma models
NASA Astrophysics Data System (ADS)
Chen, Jin; Cui, Xiaoyi; Shifman, Mikhail; Vainshtein, Arkady
2016-09-01
The two-dimensional minimal supersymmetric sigma models with homogeneous target spaces G/H and chiral fermions of the same chirality are revisited. In particular, we look into the isometry anomalies in O(N) and CP(N - 1) models. These anomalies are generated by fermion loop diagrams which we explicitly calculate. In the case of O(N) sigma models the first Pontryagin class vanishes, so there is no global obstruction for the minimal 𝒩 = (0, 1) supersymmetrization of these models. We show that at the local level isometries in these models can be made anomaly free by specifying the counterterms explicitly. Thus, there are no obstructions to quantizing the minimal 𝒩 = (0, 1) models with the S^(N-1) = SO(N)/SO(N - 1) target space while preserving the isometries. This also includes CP(1) (equivalent to S^2), which is an exceptional case in the CP(N - 1) series. For other CP(N - 1) models, the isometry anomalies cannot be rescued even locally, which leads us to a discussion of the relation between the geometric and gauged formulations of the CP(N - 1) models in order to compare the origin of the different anomalies. A dual formalism of the O(N) model is also given, in order to show the consistency of our isometry anomaly analysis in different formalisms. The concrete counterterms to be added, however, will be formalism dependent.
Quantum simulation of disordered systems with cold atoms
NASA Astrophysics Data System (ADS)
Garreau, Jean-Claude
2017-01-01
This paper reviews the physics of quantum disorder in relation with a series of experiments using laser-cooled atoms exposed to "kicks" of a standing wave, realizing a paradigmatic model of quantum chaos, the kicked rotor. This dynamical system can be mapped onto a tight-binding Hamiltonian with pseudo-disorder, formally equivalent to the Anderson model of quantum disorder, with quantum chaos playing the role of disorder. This provides a very good quantum simulator for the Anderson physics.
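The paradigmatic model mentioned is the quantum kicked rotor, whose Hamiltonian in standard form is:

```latex
% Kicked rotor: free rotation plus periodic delta-kicks of strength K.
H = \frac{p^{2}}{2} + K \cos\theta \sum_{n \in \mathbb{Z}} \delta(t - n)
% Via the Fishman-Grempel-Prange construction, its Floquet eigenstates obey a
% 1D tight-binding (Anderson-like) equation whose on-site energies involve a
% tangent of the quadratic free-rotation phases and behave pseudo-randomly
% when the effective Planck constant is incommensurate with 4\pi.
```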
Geometrothermodynamic model for the evolution of the Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruber, Christine; Quevedo, Hernando, E-mail: christine.gruber@correo.nucleares.unam.mx, E-mail: quevedo@nucleares.unam.mx
Using the formalism of geometrothermodynamics to derive a fundamental thermodynamic equation, we construct a cosmological model in the framework of relativistic cosmology. In a first step, we describe a system without thermodynamic interaction, and show it to be equivalent to the standard ΛCDM paradigm. The second step includes thermodynamic interaction and produces a model consistent with the main features of inflation. With the proposed fundamental equation we are thus able to describe all the known epochs in the evolution of our Universe, starting from the inflationary phase.
Maximal aggregation of polynomial dynamical systems
Cardelli, Luca; Tschaikowski, Max
2017-01-01
Ordinary differential equations (ODEs) with polynomial derivatives are a fundamental tool for understanding the dynamics of systems across many branches of science, but our ability to gain mechanistic insight and effectively conduct numerical evaluations is critically hindered when dealing with large models. Here we propose an aggregation technique that rests on two notions of equivalence relating ODE variables whenever they have the same solution (backward criterion) or if a self-consistent system can be written for describing the evolution of sums of variables in the same equivalence class (forward criterion). A key feature of our proposal is to encode a polynomial ODE system into a finitary structure akin to a formal chemical reaction network. This enables the development of a discrete algorithm to efficiently compute the largest equivalence, building on approaches rooted in computer science to minimize basic models of computation through iterative partition refinements. The physical interpretability of the aggregation is shown on polynomial ODE systems for biochemical reaction networks, gene regulatory networks, and evolutionary game theory. PMID:28878023
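For the special case of a linear system ẋ = Ax, the forward criterion reduces to a simple matrix condition that can be checked directly (an illustrative special case; the paper's algorithm handles general polynomial ODEs by partition refinement):

```python
# Sketch: exact (forward) lumpability of a linear system x' = A x.
# With aggregation matrix V (one row per equivalence class), the sums y = V x
# satisfy a self-consistent system y' = A_bar y  iff  V A = A_bar V.
import numpy as np

def lump(A, V, tol=1e-10):
    A_bar = V @ A @ np.linalg.pinv(V)            # candidate aggregated matrix
    return np.allclose(V @ A, A_bar @ V, atol=tol), A_bar

# Toy system in which x2 and x3 play interchangeable roles:
A = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 1.0,  2.0, -3.0]])
V = np.array([[1.0, 0.0, 0.0],                   # class {x1}
              [0.0, 1.0, 1.0]])                  # class {x2, x3}
ok, A_bar = lump(A, V)
print(ok)      # True: the 3-variable system aggregates exactly to 2 variables
print(A_bar)   # [[-2.  1.]  [ 2. -1.]]
```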
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-12
... equivalent arrangements, as permitted by the ICLL and U.S. regulations, regarding hatch covers for hopper... Guard issued formal notification to the IMO of equivalent arrangements for hatch covers for certain... arrangements for hatch covers for certain manned, self-propelled open hopper dredges on November 12, 2009. We...
The equivalence of a human observer and an ideal observer in binary diagnostic tasks
NASA Astrophysics Data System (ADS)
He, Xin; Samuelson, Frank; Gallas, Brandon D.; Sahiner, Berkman; Myers, Kyle
2013-03-01
The Ideal Observer (IO) is "ideal" for given data populations. In the image perception process, as the raw images are degraded by factors such as display and eye optics, there is an equivalent IO (EIO). The EIO uses the statistical information that exits the perception/cognitive degradations as the data. We assume a human observer who received sufficient training, e.g., radiologists, and hypothesize that such a human observer can be modeled as if he is an EIO. To measure the likelihood ratio (LR) distributions of an EIO, we formalize experimental design principles that encourage rationality based on von Neumann and Morgenstern's (vNM) axioms. We present examples to show that many observer study design refinements, although motivated by empirical principles explicitly, implicitly encourage rationality. Our hypothesis is supported by a recent review paper on ROC curve convexity by Pesce, Metz, and Berbaum. We also provide additional evidence based on a collection of observer studies in medical imaging. EIO theory shows that the "sub-optimal" performance of a human observer can be mathematically formalized in the form of an IO, and measured through rationality encouragement.
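The IO invoked here is the standard Bayesian ideal observer of signal-detection theory, whose decision variable is the likelihood ratio (textbook definition):

```latex
% Ideal observer for a binary task: compare the likelihood ratio of data g
\Lambda(g) = \frac{p(g \mid H_{2})}{p(g \mid H_{1})}
% to a threshold; sweeping the threshold traces the ROC curve, which for an
% ideal (or equivalent ideal) observer is always convex (proper).
```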
On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry
NASA Astrophysics Data System (ADS)
Lu, Xin Yang
2018-04-01
In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth with elastic forces on vicinal surfaces in 2+1 dimensions, in the radial case and with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via Cauchy principal values. We will first derive a formally equivalent parabolic evolution equation (i.e., full equivalence when sufficient regularity is assumed), and the main aim is to prove existence, uniqueness and regularity of strong solutions.
Affordance Equivalences in Robotics: A Formalism
Andries, Mihai; Chavez-Garcia, Ricardo Omar; Chatila, Raja; Giusti, Alessandro; Gambardella, Luca Maria
2018-01-01
Automatic knowledge grounding is still an open problem in cognitive robotics. Recent research in developmental robotics suggests that a robot's interaction with its environment is a valuable source for collecting such knowledge about the effects of robot's actions. A useful concept for this process is that of an affordance, defined as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. This paper proposes a formalism for defining and identifying affordance equivalence. By comparing the elements of two affordances, we can identify equivalences between affordances, and thus acquire grounded knowledge for the robot. This is useful when changes occur in the set of actions or objects available to the robot, allowing to find alternative paths to reach goals. In the experimental validation phase we verify if the recorded interaction data is coherent with the identified affordance equivalences. This is done by querying a Bayesian Network that serves as container for the collected interaction data, and verifying that both affordances considered equivalent yield the same effect with a high probability. PMID:29937724
BRST Formalism for Systems with Higher Order Derivatives of Gauge Parameters
NASA Astrophysics Data System (ADS)
Nirov, Kh. S.
For a wide class of mechanical systems, invariant under gauge transformations with arbitrary higher order time derivatives of gauge parameters, the equivalence of Lagrangian and Hamiltonian BRST formalisms is proved. It is shown that the Ostrogradsky formalism establishes the natural rules to relate the BFV ghost canonical pairs with the ghosts and antighosts introduced by the Lagrangian approach. Explicit relation between corresponding gauge-fixing terms is obtained.
Ciesielski, Krzysztof Chris; Udupa, Jayaram K.
2011-01-01
In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well defined continuous counterpart MA, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models MA and MA′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model MA is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing the digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient based thresholding model M∇ is the asymptotic for the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with gradient based affinity A∇. We also argue that, in a sense, M∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014
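A minimal sketch of the gradient-based thresholding model M∇ (illustrative only, not the authors' implementation): the segmented object is the connected component of the seed inside the region where the gradient magnitude stays below a threshold θ.

```python
# Illustrative sketch of a gradient-thresholding segmentation model:
# keep the connected component of the seed within the low-gradient set.
import numpy as np
from scipy import ndimage

def gradient_threshold_segment(image, seed, theta):
    gy, gx = np.gradient(image.astype(float))
    low_grad = np.hypot(gx, gy) < theta    # where the image varies slowly
    labels, _ = ndimage.label(low_grad)    # connected components of that set
    return labels == labels[seed]          # the component containing the seed

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                    # toy image: bright square
mask = gradient_threshold_segment(img, seed=(32, 32), theta=0.25)
print(mask.sum(), "pixels segmented")
```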
A functional renormalization method for wave propagation in random media
NASA Astrophysics Data System (ADS)
Lamagna, Federico; Calzetta, Esteban
2017-08-01
We develop the exact renormalization group approach as a way to evaluate the effective speed of propagation of a scalar wave in a medium with random inhomogeneities. We use the Martin-Siggia-Rose formalism to translate the problem into a nonequilibrium field theory problem, and then consider a sequence of models with a progressively lower infrared cutoff; in the limit where the cutoff is removed we recover the problem of interest. As a test of the formalism, we compute the effective dielectric constant of a homogeneous medium interspersed with randomly located, interpenetrating bubbles. A simple approximation to the renormalization group equations turns out to be equivalent to a self-consistent two-loop evaluation of the effective dielectric constant.
Rhythmic Characteristics of Colloquial and Formal Tamil
ERIC Educational Resources Information Center
Keane, Elinor
2006-01-01
Application of recently developed rhythmic measures to passages of read speech in colloquial and formal Tamil revealed some significant differences between the two varieties, which are in diglossic distribution. Both were also distinguished from a set of control data from British English speakers reading an equivalent passage. The findings have…
Wess-Zumino and super Yang-Mills theories in D=4 integral superspace
NASA Astrophysics Data System (ADS)
Castellani, L.; Catenacci, R.; Grassi, P. A.
2018-05-01
We reconstruct the action of N=1, D=4 Wess-Zumino and N=1,2, D=4 super-Yang-Mills theories, using integral top forms on the supermanifold M^{(4|4)}. Choosing different Picture Changing Operators, we show the equivalence of their rheonomic and superspace actions. The corresponding supergeometry and integration theory are discussed in detail. This formalism is an efficient tool for building supersymmetric models in a geometrical framework.
University of California Conference on Statistical Mechanics (4th) Held March 26-28, 1990
1990-03-28
and S. Lago, Chem. Phys., Z, 5750 (1983); Shear Viscosity Calculation via Equilibrium Molecular Dynamics: Einsteinian vs. Green-Kubo Formalism, by Adel A... through the application of the Green-Kubo approach. Although the theoretical equivalence between both formalisms was demonstrated by Helfand [3], their... like equations and of different expressions based on the Green-Kubo formalism. In contrast to Hoheisel and Vogelsang's conclusions [2], we find that...
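The Einstein vs. Green-Kubo equivalence referenced in these fragments can be illustrated numerically. The sketch below uses self-diffusion of a free Langevin particle rather than shear viscosity, since the one-particle case keeps the code short; both routes should recover D = kT/γ. All parameters are illustrative, and this is not the proceedings' calculation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, steps, dt = 1000, 4000, 0.01      # particles, time steps, step size
    kT, gamma, m = 1.0, 1.0, 1.0         # units chosen so that D = kT/gamma = 1

    v = np.zeros(n)
    x = np.zeros(n)
    vs = np.empty((steps, n))
    xs = np.empty((steps, n))
    for t in range(steps):
        # Euler-Maruyama step of m dv = -gamma v dt + sqrt(2 gamma kT) dW
        v += -(gamma / m) * v * dt + np.sqrt(2 * gamma * kT * dt) / m * rng.standard_normal(n)
        x += v * dt
        vs[t], xs[t] = v, x

    # Einstein route: D from the long-time slope of the mean-squared displacement.
    msd = (xs ** 2).mean(axis=1)
    half = steps // 2
    D_einstein = (msd[-1] - msd[half]) / (2 * (steps - 1 - half) * dt)

    # Green-Kubo route: D as the time integral of the velocity autocorrelation.
    lags = 600
    vacf = np.array([(vs[: steps - lag] * vs[lag:]).mean() for lag in range(lags)])
    D_green_kubo = np.trapz(vacf, dx=dt)

    print(D_einstein, D_green_kubo)      # both should be close to kT/gamma = 1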
Why were Matrix Mechanics and Wave Mechanics considered equivalent?
NASA Astrophysics Data System (ADS)
Perovic, Slobodan
A recent rethinking of the early history of Quantum Mechanics deemed the late 1920s agreement on the equivalence of Matrix Mechanics and Wave Mechanics, prompted by Schrödinger's 1926 proof, a myth. Schrödinger supposedly failed to prove isomorphism, or even a weaker equivalence ("Schrödinger-equivalence") of the mathematical structures of the two theories; developments in the early 1930s, especially the work of the mathematician von Neumann, provided a sound proof of mathematical equivalence. The alleged agreement about the Copenhagen Interpretation, predicated to a large extent on this equivalence, was deemed a myth as well. In response, I argue that Schrödinger's proof concerned primarily a domain-specific ontological equivalence, rather than the isomorphism or a weaker mathematical equivalence. It stemmed initially from the agreement of the eigenvalues of Wave Mechanics and energy-states of Bohr's Model that was discovered and published by Schrödinger in his first and second communications of 1926. Schrödinger demonstrated in this proof that the laws of motion arrived at by the method of Matrix Mechanics are satisfied by assigning the auxiliary role to eigenfunctions in the derivation of matrices (while he only outlined the reversed derivation of eigenfunctions from Matrix Mechanics, which was necessary for the proof of both isomorphism and Schrödinger-equivalence of the two theories). This result was intended to demonstrate the domain-specific ontological equivalence of Matrix Mechanics and Wave Mechanics, with respect to the domain of Bohr's atom. And although the mathematical equivalence of the theories did not seem out of the reach of existing theories and methods, Schrödinger never intended to fully explore such a possibility in his proof paper. In a further development of Quantum Mechanics, Bohr's complementarity and the Copenhagen Interpretation captured a more substantial convergence of the subsequently revised (in light of the experimental results) Wave and Matrix Mechanics. I argue that both the equivalence and the Copenhagen Interpretation can be deemed myths if one predicates the philosophical and historical analysis on a narrow model of physical theory which disregards its historical context, and focuses exclusively on its formal aspects and the exploration of the logical models supposedly implicit in it.
Complex capacitance in the representation of modulus of the lithium niobate crystals
NASA Astrophysics Data System (ADS)
Alim, Mohammad A.; Batra, A. K.; Bhattacharjee, Sudip; Aggarwal, M. D.
2011-03-01
The lithium niobate (LiNbO3 or LN) single crystal is grown in-house. The ac small-signal electrical characterization is conducted over the temperature range 35 ≤ T ≤ 150 °C as a function of measurement frequency (10 ≤ f ≤ 10^6 Hz). Meaningful behavior is observed only in a narrow temperature range, 59 ≤ T ≤ 73 °C. When analyzed via complex-plane formalisms, these electrical data reveal a single semicircular relaxation both in the complex capacitance (C*) and in the modulus (M*) planes. The physical meaning of this kind of observation is obtained by identifying the relaxation type and then incorporating the respective equivalent circuit model. The simple non-blocking nature of the equivalent circuit model obtained via the M*-plane is established as the lumped relaxation is identified in the C*-plane. The eventual equivalent circuit model allows a non-blocking aspect for the LN crystal, attributed to the presence of an operative dc conduction process. This leakage dc conduction, identified via the C*-plane, is portrayed in the M*-plane, where the blocking nature is removed. The complementary interpretation between these two complex planes is presented.
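The non-blocking behavior described above can be reproduced qualitatively with the simplest leaky equivalent circuit. The Python sketch below is illustrative only (the R and C values are invented, not the crystal's): for a parallel R-C element, dc conduction makes Im C* diverge at low frequency in the C*-plane, while the M*-plane shows a single semicircular relaxation.

    import numpy as np

    R, C = 1.0e6, 1.0e-9                 # invented leakage resistance and capacitance
    f = np.logspace(1, 6, 400)           # 10 Hz to 10^6 Hz, matching the measurement span
    w = 2 * np.pi * f

    Y = 1j * w * C + 1.0 / R             # admittance of the parallel R-C element
    C_star = Y / (1j * w)                # complex capacitance: C - j/(wR)
    M_star = 1.0 / C_star                # modulus representation

    # Leakage dc conduction dominates C* at low frequency (Im C* ~ 1/w), while
    # M'' traces a single semicircular relaxation peaking at w = 1/(RC).
    i = np.argmax(M_star.imag)
    print(f[i], 1.0 / (2 * np.pi * R * C))   # peak frequency vs. expected 1/(2 pi R C)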
Multisymplectic unified formalism for Einstein-Hilbert gravity
NASA Astrophysics Data System (ADS)
Gaset, Jordi; Román-Roy, Narciso
2018-03-01
We present a covariant multisymplectic formulation for the Einstein-Hilbert model of general relativity. As it is described by a second-order singular Lagrangian, this is a gauge field theory with constraints. The use of the unified Lagrangian-Hamiltonian formalism is particularly interesting when applied to these kinds of theories, since it simplifies their treatment; in particular, the implementation of the constraint algorithm, the retrieval of the Lagrangian description, and the construction of the covariant Hamiltonian formalism. In order to apply this algorithm to the covariant field equations, they must be written in a suitable geometrical way, which consists of using integrable distributions, represented by multivector fields of a certain type. We apply all these tools to the Einstein-Hilbert model without and with energy-matter sources. We obtain and explain the geometrical and physical meaning of the Lagrangian constraints, and we construct the multimomentum (covariant) Hamiltonian formalisms in both cases. As a consequence of the gauge freedom and the constraint algorithm, we see how this model is equivalent to a first-order regular theory without gauge freedom. In the case of the presence of energy-matter sources, we show how some relevant geometrical and physical characteristics of the theory depend on the type of source. In all cases, we obtain explicitly multivector fields which are solutions to the gravitational field equations. Finally, a brief study of symmetries and conservation laws is done in this context.
Literacy Programs and Non-Formal Education of Bangladesh and India
ERIC Educational Resources Information Center
Rahman, Mohammad Saidur; Yasmin, Farzana; Begum, Monzil Ara; Ara, Jesmin; Nath, Tapan Kumar
2010-01-01
Both Bangladesh and India are expanding non-formal education (NFE) programs for unenrolled and drop-out children and adults (the 8-45 year cohort) to ensure a standard comparable with the primary curriculum, establish the equivalency of NFE with primary education and overall competency, and raise the qualification and training level of teachers for effective delivery…
R-matrix theory, formal Casimirs and the periodic Toda lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morosi, C.; Pizzocchero, L.
The nonunitary r-matrix theory and the associated bi- and tri-Hamiltonian schemes are considered. The language of Poisson pencils and of their formal Casimirs is applied in this framework to characterize the bi-Hamiltonian chains of integrals of motion, pointing out the role of the Schur polynomials in these constructions. This formalism is subsequently applied to the periodic Toda lattice. Some different algebraic settings and Lax formulations proposed in the literature for this system are analyzed in detail, and their full equivalence is exploited. In particular, the equivalence between the loop algebra approach and the method of differential-difference operators is illustrated; moreover, two alternative Lax formulations are considered, and appropriate reduction algorithms are found in both cases, allowing us to derive the multi-Hamiltonian formalism from r-matrix theory. The systems of integrals for the periodic Toda lattice known after Flaschka and Hénon, and their functional relations, are recovered through systematic application of the previously outlined schemes.
Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.
2013-01-01
This paper presents an approach to shaping an aircraft to equivalent-area-based objectives using the discrete adjoint approach. Equivalent areas can be obtained either using the reversed augmented Burgers equation or by direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of the sensitivities of equivalent-area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex-step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent-area cost functionals are discussed and further refined using ground-loudness-based objectives.
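The complex-step check mentioned above is a standard verification trick worth a short illustration. The sketch below is hedged: the function f is a generic stand-in, not the equivalent-area functional, and only the derivative formula Im f(x+ih)/h is the point.

    import numpy as np

    def f(x):
        """Generic smooth stand-in objective (not the equivalent-area functional)."""
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    def complex_step(f, x, h=1e-20):
        """df/dx ~= Im f(x + ih) / h, accurate to O(h^2) with no subtractive
        cancellation, so h can be taken extremely small."""
        return f(x + 1j * h).imag / h

    x0 = 1.5
    finite_diff = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # central difference, for contrast
    print(complex_step(f, x0), finite_diff)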
NASA Astrophysics Data System (ADS)
Noguere, Gilles; Archier, Pascal; Bouland, Olivier; Capote, Roberto; Jean, Cyrille De Saint; Kopecky, Stefan; Schillebeeckx, Peter; Sirakov, Ivan; Tamagno, Pierre
2017-09-01
A consistent description of the neutron cross sections from thermal energy up to the MeV region is challenging. One of the first steps consists in optimizing the optical model parameters using average resonance parameters, such as the neutron strength functions. These can be derived from a statistical analysis of the resolved resonance parameters, or calculated with the generalized form of the SPRT method using scattering matrix elements provided by optical model calculations. One of the difficulties is to establish the contributions of the direct and compound nucleus reactions. This problem was solved by using a slightly modified average R-matrix formula with an equivalent hard-sphere radius deduced from the phase shift originating from the potential. The performance of the proposed formalism is illustrated with results obtained for the 238U+n nuclear system.
A formal and data-based comparison of measures of motor-equivalent covariation.
Verrel, Julius
2011-09-15
Different analysis methods have been developed for assessing the motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR methods. However, the precise relationship between the notions of covariation in the two approaches has not yet been analyzed in detail. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible, and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analyses. Implications for the interpretation of UCM effects are discussed.
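The CR idea of comparing empirical task-level variability against decorrelated surrogates can be sketched in a few lines. The toy below invents a two-joint "pointing" task with compensating joint covariation; it is not the study's data or analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(1)
    trials = 500
    common = rng.standard_normal(trials)
    # Two joint angles with compensating (negative) covariation around a target.
    joints = np.column_stack([ common + 0.1 * rng.standard_normal(trials),
                              -common + 0.1 * rng.standard_normal(trials)])
    task = joints.sum(axis=1)            # 1-D task variable (e.g., endpoint position)

    def surrogate_task_var(joints, n=1000):
        """Mean task-level variance after independently shuffling each elemental
        variable across trials, destroying covariation but keeping marginals."""
        vs = []
        for _ in range(n):
            shuffled = np.column_stack([rng.permutation(col) for col in joints.T])
            vs.append(shuffled.sum(axis=1).var())
        return np.mean(vs)

    print(task.var(), surrogate_task_var(joints))
    # Empirical task variance far below the surrogate value indicates
    # motor-equivalent covariation among the elemental variables.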
Kapon, Shulamit
2014-11-01
This article presents an analysis of a scientific article written by Albert Einstein in 1946 for the general public that explains the equivalence of mass and energy and discusses the implications of this principle. It is argued that an intelligent popularization of many advanced ideas in physics requires more than the simple elimination of mathematical formalisms and complicated scientific conceptions. Rather, it is shown that Einstein developed an alternative argument for the general public that bypasses the core of the formal derivation of the equivalence of mass and energy to provide a sense of derivation based on the history of science and the nature of scientific inquiry. This alternative argument is supported and enhanced by a variety of explanatory devices orchestrated to coherently support and promote the reader's understanding. The discussion centers on comparisons to other scientific expositions written by Einstein for the general public.
Event-related potential correlates of emergent inference in human arbitrary relational learning.
Wang, Ting; Dymond, Simon
2013-01-01
Two experiments investigated the functional-anatomical correlates of cognition supporting untrained, emergent relational inference in a stimulus equivalence task. In Experiment 1, after learning a series of conditional relations involving words and pseudowords, participants performed a relatedness task during which EEG was recorded. Behavioural performance was faster and more accurate on untrained, indirectly related symmetry (i.e., learn AB and infer BA) and equivalence trials (i.e., learn AB and AC and infer CB) than on unrelated trials, regardless of whether or not a formal test for stimulus equivalence relations had been conducted. Consistent with previous results, event related potentials (ERPs) evoked by trained and emergent trials at parietal and occipital sites differed only for those participants who had not received a prior equivalence test. Experiment 2 further replicated and extended these behavioural and ERP findings using arbitrary symbols as stimuli and demonstrated time and frequency differences for trained and untrained relatedness trials. Overall, the findings demonstrate convincingly the ERP correlates of intra-experimentally established stimulus equivalence relations consisting entirely of arbitrary symbols and offer support for a contemporary cognitive-behavioural model of symbolic categorisation and relational inference.
Qualitative adaptation of child behaviour problem instruments in a developing-country setting.
Khan, B; Avan, B I
2014-07-08
A key barrier to epidemiological research on child behaviour problems in developing countries is the lack of culturally relevant, internationally recognized psychometric instruments. This paper proposes a model for the qualitative adaptation of psychometric instruments in developing-country settings and presents a case study of the adaptation of 3 internationally recognized instruments in Pakistan: the Child Behavior Checklist, the Youth Self-Report and the Teacher's Report Form. This model encompassed a systematic procedure with 6 distinct phases to minimize bias and ensure equivalence with the original instruments: selection, deliberation, alteration, feasibility, testing and formal approval. The process was conducted in collaboration with the instruments' developer. A multidisciplinary working group of experts identified equivalence issues and suggested modifications. Focus group discussions with informants highlighted comprehension issues. Subsequently modified instruments were thoroughly tested. Finally, the instruments' developer approval further validated the qualitative adaptation. The study proposes a rigorous and systematic model to effectively achieve cultural adaptation of psychometric instruments.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
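The eigendecomposition, Padé, and Taylor routes to the matrix exponential mentioned above are easy to compare numerically. In the hedged sketch below, the matrix A is a random stand-in, not a radiative transfer layer matrix.

    import math
    import numpy as np
    from scipy.linalg import eig, expm

    rng = np.random.default_rng(0)
    A = 0.3 * rng.standard_normal((6, 6))     # stand-in for an optically thin layer matrix

    # Eigendecomposition route: exp(A) = V diag(exp(lambda)) V^{-1}.
    lam, V = eig(A)
    expA_eig = (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

    # Pade route (scipy's expm) and a truncated Taylor series, as used for thin layers.
    expA_pade = expm(A)
    expA_taylor = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(12))

    print(np.abs(expA_eig - expA_pade).max(), np.abs(expA_taylor - expA_pade).max())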
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics.
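The prior-strength continuum has a compact numerical reading: the ridge estimate w = (X'X + λI)^{-1} X'y interpolates between ordinary regression (λ = 0) and, as λ → ∞, a weight vector proportional to X'y, whose sign pattern is a tallying-like rule. The sketch below uses synthetic data, not the paper's simulation environments.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(200, 5))        # directionally coded binary cues
    w_true = np.array([2.0, 1.0, 0.5, 0.25, 0.1])
    y = X @ w_true + rng.standard_normal(200)

    def ridge(X, y, lam):
        """MAP estimate under a zero-mean Gaussian prior of strength lam."""
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    for lam in [0.0, 10.0, 1e6]:
        w = ridge(X, y, lam)
        print(lam, np.round(w / np.abs(w).max(), 3))  # normalized weight profile

    # In the infinitely strong prior limit, w is proportional to X'y; with
    # directional cue coding, its sign pattern gives the tallying decision rule.
    print(np.sign(X.T @ y))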
Spekkens’ toy model in all dimensions and its relationship with stabiliser quantum mechanics
NASA Astrophysics Data System (ADS)
Catani, Lorenzo; E Browne, Dan
2017-07-01
Spekkens’ toy model is a non-contextual hidden variable model with an epistemic restriction, a constraint on what an observer can know about reality. The aim of the model, developed for continuous and discrete prime degrees of freedom, is to advocate the epistemic view of quantum theory, where quantum states are states of incomplete knowledge about a deeper underlying reality. Many aspects of quantum mechanics and protocols from quantum information can be reproduced in the model. In spite of its significance, a number of aspects of Spekkens’ model remained incomplete. Formal rules for the update of states after measurement had not been written down, and the theory had only been constructed for prime-dimensional and infinite dimensional systems. In this work, we remedy this, by deriving measurement update rules and extending the framework to derive models in all dimensions, both prime and non-prime. Stabiliser quantum mechanics (SQM) is a sub-theory of quantum mechanics with restricted states, transformations and measurements. First derived for the purpose of constructing error correcting codes, it now plays a role in many areas of quantum information theory. Previously, it had been shown that Spekkens’ model was operationally equivalent to SQM in the case of odd prime dimensions. Here, exploiting known results on Wigner functions, we extend this to show that Spekkens’ model is equivalent to SQM in all odd dimensions, prime and non-prime. This equivalence provides new technical tools for the study of technically difficult compound-dimensional SQM.
Solvent effects in time-dependent self-consistent field methods. I. Optical response calculations
Bjorgaard, J. A.; Kuzmenko, V.; Velizhanin, K. A.; ...
2015-01-22
In this study, we implement and examine three excited state solvent models in time-dependent self-consistent field methods using a consistent formalism which unambiguously shows their relationship. These are the linear response, state specific, and vertical excitation solvent models. Their effects on energies calculated with the equivalent of COSMO/CIS/AM1 are given for a set of test molecules with varying excited state charge transfer character. The resulting solvent effects are explained qualitatively using a dipole approximation. It is shown that the fundamental differences between these solvent models are reflected by the character of the calculated excitations.
Equivalent equations of motion for gravity and entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel
We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.
Multisymplectic Lagrangian and Hamiltonian Formalisms of Classical Field Theories
NASA Astrophysics Data System (ADS)
Román-Roy, Narciso
2009-11-01
This review paper is devoted to presenting the standard multisymplectic formulation for the geometric description of classical field theories, in both the regular and singular cases. First, the main features of the Lagrangian formalism are revisited and, second, the Hamiltonian formalism is constructed using Hamiltonian sections. In both cases, the variational principles leading to the Euler-Lagrange and the Hamilton-De Donder-Weyl equations, respectively, are stated, and these field equations are given in different but equivalent geometrical ways in each formalism. Finally, both are unified in a new formulation (developed in recent years), following the original ideas of Rusk and Skinner for mechanical systems.
On non-abelian T-duality and deformations of supercoset string sigma-models
NASA Astrophysics Data System (ADS)
Borsato, Riccardo; Wulff, Linus
2017-10-01
We elaborate on the class of deformed T-dual (DTD) models obtained by first adding a topological term to the action of a supercoset sigma model and then performing (non-abelian) T-duality on a subalgebra \\tilde{g} of the superisometry algebra. These models inherit the classical integrability of the parent one, and they include as special cases the so-called homogeneous Yang-Baxter sigma models as well as their non-abelian T-duals. Many properties of DTD models have simple algebraic interpretations. For example we show that their (non-abelian) T-duals — including certain deformations — are again in the same class, where \\tilde{g} gets enlarged or shrinks by adding or removing generators corresponding to the dualised isometries. Moreover, we show that Weyl invariance of these models is equivalent to \\tilde{g} being unimodular; when this property is not satisfied one can always remove one generator to obtain a unimodular \\tilde{g} , which is equivalent to (formal) T-duality. We also work out the target space superfields and, as a by-product, we prove the conjectured transformation law for Ramond-Ramond (RR) fields under bosonic non-abelian T-duality of supercosets, generalising it to cases involving also fermionic T-dualities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saygin, H.; Hebert, A.
The calculation of a dilution cross section σ̄_e is the most important step in the self-shielding formalism based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections based on the formalism of Riemann integration in the resolved energy domain is proposed. This new method is compared to the generalized Stamm'ler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is obtained using reference results obtained from a fine-group slowing-down code named CESCOL. It is shown that the proposed method leads to slightly better performance than the generalized Stamm'ler approach.
Dynamic Gate Product and Artifact Generation from System Models
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Delp, Christopher; Bindschadler, Duane; Sarrel, Marc; Wollaeger, Ryan; Lam, Doris
2011-01-01
Model Based Systems Engineering (MBSE) is gaining acceptance as a way to formalize systems engineering practice through the use of models. The traditional method of producing and managing a plethora of disjointed documents and presentations ("Power-Point Engineering") has proven both costly and limiting as a means to manage the complex and sophisticated specifications of modern space systems. We have developed a tool and method to produce sophisticated artifacts as views and by-products of integrated models, allowing us to minimize the practice of "Power-Point Engineering" in model-based projects and demonstrate the ability of MBSE to work within and supersede traditional engineering practices. This paper describes how we have created and successfully used model-based document generation techniques to extract paper artifacts from complex SysML and UML models in support of successful project reviews. Use of formal SysML and UML models for architecture and system design enables production of review documents, textual artifacts, and analyses that are consistent with one another and require virtually no labor-intensive maintenance across small-scale design changes and multiple authors. This effort thus enables approaches that focus more on rigorous engineering work and less on "PowerPoint engineering" and the production of paper-based documents or their "office-productivity" file equivalents.
Some aspects of multicomponent excess free energy models with subregular binaries
NASA Astrophysics Data System (ADS)
Cheng, Weiji; Ganguly, Jibamitra
1994-09-01
We have shown that two of the most commonly used multicomponent formulations of excess Gibbs free energy of mixing, those by WOHL (1946, 1953) and REDLICH and KISTER (1948), are formally equivalent if the binaries are constrained to have subregular properties, and also that other subregular multicomponent formulations developed in the mineralogical and geochemical literature are equivalent to, or higher order extensions of, these formulations. We have also presented a compact derivation of a multicomponent subregular solution leading to the same expression as derived by HELFFRICH and WOOD (1989). It is shown that Wohl's multicomponent formulation involves combination of binary excess free energies, which are calculated at compositions obtained by normal projection of the multicomponent composition onto the bounding binary joins, and is, thus, equivalent to the formulation developed by MUGGIANU et al. (1975). Finally, following the lead of HILLERT (1980), we have explored the limiting behavior of regular and subregular ternary solutions when a pair of components become energetically equivalent, and have, thus, derived an expression for calculating the ternary interaction parameter in a ternary solution from a knowledge of the properties of the bounding binaries, when one of these binaries is nearly ideal.
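The subregular equivalence noted above can be checked in a few lines. The sketch below compares a two-parameter Margules (Wohl-type) form with a two-term Redlich-Kister expansion under the identification W12 = A0 − A1, W21 = A0 + A1; the coefficient values are invented for illustration.

    import numpy as np

    A0, A1 = 12.0, -3.0                  # invented Redlich-Kister coefficients
    W12, W21 = A0 - A1, A0 + A1          # equivalent subregular (Margules) parameters

    x1 = np.linspace(0.0, 1.0, 101)
    x2 = 1.0 - x1
    G_margules = x1 * x2 * (W12 * x2 + W21 * x1)
    G_redlich_kister = x1 * x2 * (A0 + A1 * (x1 - x2))

    print(np.allclose(G_margules, G_redlich_kister))   # True: the two forms coincide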
Modified fluctuation-dissipation and Einstein relation at nonequilibrium steady states
NASA Astrophysics Data System (ADS)
Chaudhuri, Debasish; Chaudhuri, Abhishek
2012-02-01
Starting from the pioneering work of Agarwal [G. S. Agarwal, Z. Phys. 252, 25 (1972)], we present a unified derivation of a number of modified fluctuation-dissipation relations (MFDR) that relate the response to small perturbations around nonequilibrium steady states to steady-state correlations. Using this formalism we show the equivalence of velocity forms of MFDR derived using continuum Langevin and discrete master equation dynamics. The resulting additive correction to the Einstein relation is exemplified using a flashing ratchet model of molecular motors.
The equivalence of Darmois-Israel and distributional method for thin shells in general relativity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansouri, R.; Khorrami, M.
1996-11-01
A distributional method to solve Einstein's field equations for thin shells is formulated. The familiar field equations and jump conditions of the Darmois-Israel formalism are derived. A careful analysis of the Bianchi identities shows that, for the cases under consideration, they make sense as distributions and lead to the jump conditions of the Darmois-Israel formalism.
The generation of gravitational waves. 2: The post-linear formalism revisited
NASA Technical Reports Server (NTRS)
Crowley, R. J.; Thorne, K. S.
1975-01-01
Two different versions of the Green's function for the scalar wave equation in weakly curved spacetime (one due to DeWitt and DeWitt, the other to Thorne and Kovacs) are compared and contrasted; and their mathematical equivalence is demonstrated. The DeWitt-DeWitt Green's function is used to construct several alternative versions of the Thorne-Kovacs post-linear formalism for gravitational-wave generation. Finally it is shown that, in calculations of gravitational bremsstrahlung radiation, some of our versions of the post-linear formalism allow one to treat the interacting bodies as point masses, while others do not.
45 CFR 2400.66 - Completion of fellowship.
Code of Federal Regulations, 2013 CFR
2013-10-01
... has completed no fewer than 12 graduate semester hours or the equivalent of study of the Constitution, formally secured the master's degree, attended the Foundation's Summer Institute on the Constitution...
Formal development of a clock synchronization circuit
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
This talk presents the latest stage in the formal development of a fault-tolerant clock synchronization circuit. The development spans from a high-level specification of the required properties to a circuit realizing the core function of the system. An abstract description of an algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation system (DDD) developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized version was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.
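As a loose analogue of the stream reasoning described above (and emphatically not the PVS development itself), two stream definitions can be compared element-by-element in Python; a bounded check like this gives evidence for, but does not replace, a bisimulation proof.

    from itertools import islice

    def counter_a(init=0):
        """Stream equation: s(0) = init, s(t+1) = s(t) + 1."""
        s = init
        while True:
            yield s
            s += 1

    def counter_b(init=0):
        """An 'optimized' definition of the same stream: s(t) = init + t."""
        t = 0
        while True:
            yield init + t
            t += 1

    def bounded_equivalent(s1, s2, depth=1000):
        """Compare two streams up to `depth` elements: evidence for, not a
        proof of, the bisimulation a theorem prover would establish."""
        return all(a == b for a, b in islice(zip(s1, s2), depth))

    print(bounded_equivalent(counter_a(5), counter_b(5)))  # True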
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
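A minimal sketch of parallel analysis for the first principal component follows; the data are synthetic, and the 95th-percentile retention rule is one common convention rather than the paper's exact protocol.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 10
    # Synthetic data: one real component plus unit-variance noise.
    X = rng.standard_normal((n, 1)) @ rng.standard_normal((1, p)) + rng.standard_normal((n, p))

    def leading_eigenvalue(M):
        """Largest eigenvalue of the sample covariance matrix."""
        return np.linalg.eigvalsh(np.cov(M, rowvar=False)).max()

    obs = leading_eigenvalue(X)
    null = [leading_eigenvalue(rng.standard_normal((n, p))) for _ in range(200)]
    threshold = np.percentile(null, 95)     # retain the component if obs exceeds this
    print(obs, threshold, obs > threshold)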
Zhang, Wen-Ran
2003-01-01
Bipolar logic, bipolar sets, and equilibrium relations are proposed for bipolar cognitive mapping and visualization in online analytical processing (OLAP) and online analytical mining (OLAM). As cognitive models, cognitive maps (CMs) hold great potential for clustering and visualization. Due to the lack of a formal mathematical basis, however, CM-based OLAP and OLAM have not gained popularity. Compared with existing approaches, bipolar cognitive mapping has a number of advantages. First, bipolar CMs are formal logical models as well as cognitive models. Second, equilibrium relations (with polarized reflexivity, symmetry, and transitivity), as bipolar generalizations and fusions of equivalence relations, provide a theoretical basis for bipolar visualization and coordination. Third, an equilibrium relation or CM induces bipolar partitions that distinguish disjoint coalition subsets not involved in any conflict, disjoint coalition subsets involved in a conflict, disjoint conflict subsets, and disjoint harmony subsets. Finally, equilibrium energy analysis leads to harmony and stability measures for strategic decision and multiagent coordination. Thus, this work bridges a gap for CM-based clustering and visualization in OLAP and OLAM. Basic ideas are illustrated with example CMs in international relations.
The MFA ground states for the extended Bose-Hubbard model with a three-body constraint
NASA Astrophysics Data System (ADS)
Panov, Yu. D.; Moskvin, A. S.; Vasinovich, E. V.; Konev, V. V.
2018-05-01
We address the intensively studied extended bosonic Hubbard model (EBHM) with truncation of the on-site Hilbert space to the three lowest occupation states n = 0, 1, 2, within the framework of the S = 1 pseudospin formalism. A similar model was recently proposed to describe the charge degree of freedom in a model high-Tc cuprate with the on-site Hilbert space reduced to three effective valence centers, nominally Cu1+, Cu2+, and Cu3+. With small corrections the model becomes equivalent to a strongly anisotropic S = 1 quantum magnet in an external magnetic field. We have applied a generalized mean-field approach and the quantum Monte Carlo technique to the model 2D S = 1 system with two-particle transport to find the ground-state phase and its evolution under deviation from half-filling.
Using quantum process tomography to characterize decoherence in an analog electronic device
NASA Astrophysics Data System (ADS)
Ostrove, Corey; La Cour, Brian; Lanham, Andrew; Ott, Granville
The mathematical structure of a universal gate-based quantum computer can be emulated faithfully on a classical electronic device using analog signals to represent a multi-qubit state. We describe a prototype device capable of performing a programmable sequence of single-qubit and controlled two-qubit gate operations on a pair of voltage signals representing the real and imaginary parts of a two-qubit quantum state. Analog filters and true-RMS voltage measurements are used to perform unitary and measurement gate operations. We characterize the degradation of the represented quantum state with successive gate operations by formally performing quantum process tomography to estimate the equivalent decoherence channel. Experimental measurements indicate that the performance of the device may be accurately modeled as an equivalent quantum operation closely resembling a depolarizing channel with a fidelity of over 99%. This work was supported by the Office of Naval Research under Grant No. N00014-14-1-0323.
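The reported channel estimate can be mimicked with a few lines of linear algebra. In the hedged sketch below, the depolarization probability p = 0.02 is invented for illustration and is not the device's measured value.

    import numpy as np

    def depolarize(rho, p):
        """Depolarizing channel: rho -> (1 - p) rho + p I/d."""
        d = rho.shape[0]
        return (1 - p) * rho + p * np.eye(d) / d

    # Pure two-qubit Bell state (|00> + |11>)/sqrt(2) as a density matrix.
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    rho = np.outer(psi, psi)

    rho_out = depolarize(rho, p=0.02)            # invented depolarization probability
    fidelity = float(psi @ rho_out @ psi)        # <psi|rho|psi> for a pure reference
    print(fidelity)                              # 0.985 for p = 0.02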
50 CFR 697.25 - Adjustment to management measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... escape vent size, decreases in the lobster trap size, closed areas, closed seasons, landing limits, trip... equivalency for American lobster that are formally submitted to him/her in writing by the ASMFC. These...
50 CFR 697.25 - Adjustment to management measures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... escape vent size, decreases in the lobster trap size, closed areas, closed seasons, landing limits, trip... equivalency for American lobster that are formally submitted to him/her in writing by the ASMFC. These...
Gluon Bremsstrahlung in Weakly-Coupled Plasmas
NASA Astrophysics Data System (ADS)
Arnold, Peter
2009-11-01
I report on some theoretical progress concerning the calculation of gluon bremsstrahlung for very high energy particles crossing a weakly-coupled quark-gluon plasma. (i) I advertise that two of the several formalisms used to study this problem, the BDMPS-Zakharov formalism and the AMY formalism (the latter used only for infinite, uniform media), can be made equivalent when appropriately formulated. (ii) A standard technique to simplify calculations is to expand in inverse powers of logarithms ln(E/T). I give an example where such expansions are found to work well for ω/T≳10 where ω is the bremsstrahlung gluon energy. (iii) Finally, I report on perturbative calculations of q̂.
Okado, Ryohei; Nowaki, Aya; Matsuo, Jun-Ichi; Ishibashi, Hiroyuki
2012-01-01
A catalytic amount of tin(IV) chloride catalyzed the formal [4+2] cycloaddition reaction of di-tert-butyl 2-ethoxycyclobutane-1,1-dicarboxylate with ketones or aldehydes to give diethyl 6-ethoxydihydro-2H-pyran-3,3(4H)-dicarboxylates, whereas two equivalents of trimethylsilyl triflate promoted a tandem [4+2] cycloaddition and lactonization to afford 3-oxo-2,6-dioxabicyclo[2.2.2]octane-4-carboxylate esters.
ERIC Educational Resources Information Center
Hamalainen, Eila
This paper discusses subject-verb-complement (SVC) clauses and their renderings in English and Finnish. Comparisons are made on the level of surface structure and with regard to equivalence in the sense of one construction being an optimum translation of the other. The definition of congruence, i.e., formal similarity and equal number of…
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on introduced nonlocal torques which replace traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA-vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L1_0 atomic long-range order.
Generalized contexts and consistent histories in quantum mechanics
NASA Astrophysics Data System (ADS)
Losada, Marcelo; Laura, Roberto
2014-05-01
We analyze a restriction of the theory of consistent histories by imposing that a valid description of a physical system must include quantum histories which satisfy the consistency conditions for all states. We prove that these conditions are equivalent to imposing the compatibility conditions of our formalism of generalized contexts. Moreover, we show that the theory of consistent histories with the consistency conditions for all states and the formalism of generalized context are equally useful representing expressions which involve properties at different times.
Data-Driven Method to Estimate Nonlinear Chemical Equivalence.
Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A
2015-01-01
There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
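The concentration-dependence of the resulting equivalency factors is easy to demonstrate with two sigmoid curves: map a concentration of chemical A to the concentration of chemical B producing the same response by composing one curve with the inverse of the other. The Hill parameters below are invented, not fitted to data.

    import numpy as np

    def hill(c, ec50, n):
        """Sigmoid concentration-response curve (fraction responding)."""
        return c**n / (ec50**n + c**n)

    def hill_inverse(r, ec50, n):
        """Concentration producing response r under a Hill curve."""
        return ec50 * (r / (1.0 - r)) ** (1.0 / n)

    # Invented parameters for chemicals A and B.
    ec50_a, n_a = 1.0, 1.0
    ec50_b, n_b = 5.0, 2.0

    c_a = np.array([0.1, 0.5, 1.0, 2.0])
    c_b = hill_inverse(hill(c_a, ec50_a, n_a), ec50_b, n_b)
    print(c_b / c_a)   # the "equivalency factor" varies with concentration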
Cost-Benefit Arbitration Between Multiple Reinforcement-Learning Systems.
Kool, Wouter; Gershman, Samuel J; Cushman, Fiery A
2017-09-01
Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.
Composing Models of Geographic Physical Processes
NASA Astrophysics Data System (ADS)
Hofer, Barbara; Frank, Andrew U.
Processes are central to geographic information science, yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
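The PDE/difference-equation equivalence invoked above is the standard one; a hedged one-dimensional example (not the paper's process description language) discretizes the diffusion equation u_t = k u_xx into the explicit difference form an application specialist could enter graphically.

    import numpy as np

    k, dx, dt, steps = 1.0, 0.1, 0.004, 250   # satisfies stability: k*dt/dx**2 = 0.4 <= 0.5
    x = np.arange(0.0, 1.0 + dx, dx)
    u = np.exp(-((x - 0.5) ** 2) / 0.01)      # initial concentration bump

    for _ in range(steps):
        # Difference equation: u[i] += (k*dt/dx^2) * (u[i+1] - 2*u[i] + u[i-1])
        u[1:-1] += k * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0                     # fixed (Dirichlet) boundary values

    print(u.round(3))                          # the bump has diffused outward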
Toward a complete theory for predicting inclusive deuteron breakup away from stability
NASA Astrophysics Data System (ADS)
Potel, G.; Perdikakis, G.; Carlson, B. V.; Atkinson, M. C.; Dickhoff, W. H.; Escher, J. E.; Hussein, M. S.; Lei, J.; Li, W.; Macchiavelli, A. O.; Moro, A. M.; Nunes, F. M.; Pain, S. D.; Rotureau, J.
2017-09-01
We present an account of the current status of the theoretical treatment of inclusive (d,p) reactions in the breakup-fusion formalism, pointing to some applications and making the connection with current experimental capabilities. Three independent implementations of the reaction formalism have been recently developed, making use of different numerical strategies. The codes also originally relied on two different but equivalent representations, namely the prior (Udagawa-Tamura, UT) and the post (Ichimura-Austern-Vincent, IAV) representations. The different implementations have been benchmarked for the first time, and then applied to the Ca isotopic chain. The neutron-Ca propagator is described in the Dispersive Optical Model (DOM) framework, and the interplay between elastic breakup (EB) and non-elastic breakup (NEB) is studied for three Ca isotopes at two different bombarding energies. The accuracy of the description of different reaction observables is assessed by comparing with experimental data of (d,p) on 40,48Ca. We discuss the predictions of the model for the extreme case of an isotope (60Ca) currently unavailable experimentally, though possibly available in future facilities (nominally within production reach at FRIB). We explore the use of (d,p) reactions as surrogates for (n,γ) processes, by using the formalism to describe the compound nucleus formation in a (d,pγ) reaction as a function of excitation energy, spin, and parity. The subsequent decay is then computed within a Hauser-Feshbach formalism. Comparisons between the (d,pγ)- and (n,γ)-induced gamma decay spectra are discussed to inform efforts to infer neutron captures from (d,pγ) reactions. Finally, we identify areas of opportunity for future developments, and discuss a possible path toward a predictive reaction theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Zhi-Qiang; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu; Mewes, Jan-Michael
2015-11-28
The Marcus and Pekar partitions are common, alternative models to describe the non-equilibrium dielectric polarization response that accompanies instantaneous perturbation of a solute embedded in a dielectric continuum. Examples of such a perturbation include vertical electronic excitation and vertical ionization of a solution-phase molecule. Here, we provide a general derivation of the accompanying polarization response, for a quantum-mechanical solute described within the framework of a polarizable continuum model (PCM) of electrostatic solvation. Although the non-equilibrium free energy is formally equivalent within the two partitions, albeit partitioned differently into "fast" versus "slow" polarization contributions, discretization of the PCM integral equations fails to preserve certain symmetries contained in these equations (except in the case of the conductor-like models or when the solute cavity is spherical), leading to alternative, non-equivalent matrix equations. Unlike the total equilibrium solvation energy, however, which can differ dramatically between different formulations, we demonstrate that the equivalence of the Marcus and Pekar partitions for the non-equilibrium solvation correction is preserved to high accuracy. Differences in vertical excitation and ionization energies are <0.2 eV (and often <0.01 eV), even for systems specifically selected to afford a large polarization response. Numerical results therefore support the interchangeability of the Marcus and Pekar partitions, but also caution against relying too much on the fast PCM charges for interpretive value, as these charges differ greatly between the two partitions, especially in polar solvents.
Spin-dependent optimized effective potential formalism for open and closed systems
NASA Astrophysics Data System (ADS)
Rigamonti, S.; Horowitz, C. M.; Proetto, C. R.
2015-12-01
Orbital-based exchange (x) and correlation (c) energy functionals, leading to the optimized effective potential (OEP) formalism of density-functional theory (DFT), are gaining increasing importance in ground-state DFT, as applied to the calculation of the electronic structure of closed systems with a fixed number of particles, such as atoms and molecules. These types of functionals also prove to be extremely valuable for dealing with solid-state systems of reduced dimensionality, such as electrons trapped at the interface between two different semiconductors, or narrow metallic slabs. In both cases, the electrons form a quasi-two-dimensional electron gas, or Q2DEG. We provide here a general DFT-OEP formal scheme valid for Q2DEGs that are either isolated (closed) or in contact with a particle bath (open), and show that the two possible representations are equivalent, the choice of one or the other being essentially a question of convenience. Based on this equivalence, a calculation scheme is proposed which avoids the noninvertibility problem of the density response function for closed systems. We also consider the case of spontaneously spin-polarized Q2DEGs, and find that far from the region where the Q2DEG is localized, the exact exchange-only potential approaches two different, spin-dependent asymptotic limits. As an example, aside from these formal results, we also provide numerical results for a spin-polarized jellium slab, using the new OEP formalism for closed systems. The accuracy of the Krieger-Li-Iafrate approximation has also been tested for the same system, and found to be as good as it is for atoms and molecules.
Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry
NASA Astrophysics Data System (ADS)
Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît
2013-01-01
This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots on the combustor walls, in the vicinity of the injectors. These high-temperature regions disappear when the fuel stream equivalence ratio is modified. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table in which thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, an analytical flame wrinkling model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction, and the mixture fraction subgrid-scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces fairly well the temperature fields observed in the experiments. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.
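The β-FDF closure used in such tabulated-chemistry models has a compact numerical core: a flamelet quantity is filtered by integrating it against a beta distribution whose parameters follow from the filtered mixture fraction mean and its subgrid variance. The sketch below is a generic illustration with invented numbers, not the F-TACLES implementation.

    import numpy as np
    from scipy.stats import beta

    def beta_params(zbar, zvar):
        """Map the filtered mixture fraction mean and subgrid variance to
        Beta(a, b); requires 0 < zvar < zbar*(1 - zbar)."""
        g = zbar * (1.0 - zbar) / zvar - 1.0
        return zbar * g, (1.0 - zbar) * g

    def filtered(phi, zbar, zvar, n=400):
        """Filtered value of a flamelet quantity phi(z) under the beta-FDF."""
        a, b = beta_params(zbar, zvar)
        z = np.linspace(1e-6, 1.0 - 1e-6, n)
        w = beta.pdf(z, a, b)
        return np.trapz(phi(z) * w, z) / np.trapz(w, z)

    # Invented flamelet quantity peaking near an assumed stoichiometric z ~ 0.06.
    phi = lambda z: np.exp(-((z - 0.06) / 0.02) ** 2)
    print(filtered(phi, zbar=0.06, zvar=1e-4))   # subgrid mixing smears the peak below 1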
Theoretical and observational constraints on Tachyon Inflation
NASA Astrophysics Data System (ADS)
Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel; Hidalgo, Juan Carlos; Rigel Mora-Luna, Refugio
2018-03-01
We constrain several models in Tachyonic Inflation derived from the large-N formalism by considering theoretical aspects as well as the latest observational data. On the theoretical side, we assess the field range of our models by means of the excursion of the equivalent canonical field. On the observational side, we employ BK14+PLANCK+BAO data to perform a parameter estimation analysis as well as a Bayesian model selection to distinguish the most favoured models among all four classes presented here. We observe that the original potential V ∝ sech(T) is strongly disfavoured by observations with respect to a reference model with flat priors on the inflationary observables. This realisation of Tachyon inflation also presents a large field range, which may demand further quantum corrections. We also provide examples of potentials derived from the polynomial and the perturbative classes which are both statistically favoured and theoretically acceptable.
Kron-Branin modelling of ultra-short pulsed signal microelectrode
NASA Astrophysics Data System (ADS)
Xu, Zhifei; Ravelo, Blaise; Liu, Yang; Zhao, Lu; Delaroche, Fabien; Vurpillot, Francois
2018-06-01
An unconventional circuit model of a microelectrode for ultra-short signal propagation is developed. The proposed model is based on the Tensorial Analysis of Networks (TAN) using the Kron-Branin (KB) formalism. The systemic graph topology equivalent to the considered structure is established by taking the branch currents as the unknown variables. The TAN mathematical solution is determined after identification of the KB characteristic matrix. The TAN can integrate various physical parameters of the structure. As a proof of concept, via-hole-ended microelectrodes implemented on a Kapton substrate were designed, fabricated and tested. The KB model, simulation and measurement of the 0.1 MHz to 6 GHz S-parameters are in good agreement. In addition, time-domain analyses with nanosecond-duration pulse signals were carried out to predict the microelectrode signal integrity. The modelled microstrip electrode is typically integrated in atom probe tomography instruments. The proposed KB method is particularly beneficial with respect to computation speed and adaptability to various structures.
Scalar formalism for non-Abelian gauge theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hostler, L.C.
1986-09-01
The gauge field theory of an N-dimensional multiplet of spin-1/2 particles is investigated using the Klein-Gordon-type wave equation (Π·(1 + iσ)·Π + m²)Φ = 0, Π_μ ≡ ∂/∂(ix_μ) − eA_μ, investigated before by a number of authors, to describe the fermions. Here Φ is a 2×1 Pauli spinor, and σ represents a Lorentz spin tensor whose components σ_{μν} are ordinary 2×2 Pauli spin matrices. Feynman rules for the scalar formalism for non-Abelian gauge theory are derived starting from the conventional field theory of the multiplet and converting it to the new description. The equivalence of the new and the old formalism for arbitrary radiative processes is thereby established. The conversion to the scalar formalism is accomplished in a novel way by working in terms of the path-integral representation of the generating functional of the vacuum tau-functions, τ(2,1,…;3,…) ≡ ⟨0|T(Ψ_in(2) Ψ̄_in(1) ⋯ A_μ,in(3) ⋯ S)|0⟩, where Ψ_in is a Heisenberg operator belonging to a 4N×1 Dirac wave function of the multiplet. The Feynman rules obtained generalize earlier results for the Abelian case of quantum electrodynamics.
NASA Astrophysics Data System (ADS)
Tao, Guohua
2017-07-01
A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics equivalent to Hamilton dynamics or Heisenberg equation based on a new multistate Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or the single-state limits, respectively. In the general multistate formalism, nuclear dynamics is represented in terms of a set of individual state-specific trajectories, while in the active state trajectory (AST) approximation, only one single nuclear trajectory on the active state is propagated with its augmented images running on all other states. The AST approximation combines the advantages of consistent nuclear-coupled electronic dynamics in the MM model and the single nuclear trajectory in the trajectory surface hopping (TSH) treatment and therefore may provide a potential alternative to both Ehrenfest and TSH methods. The resulting algorithm features in a consistent description of coupled electronic-nuclear dynamics and excellent numerical stability. The implementation of the MST approach to several benchmark systems involving multiple nonadiabatic transitions and conical intersection shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The AST treatment also reproduces the exact results reasonably, sometimes even quantitatively well, with a better performance in the adiabatic representation.
Binary black hole spacetimes with a helical Killing vector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Christian
Binary black hole spacetimes with a helical Killing vector, which are discussed as an approximation for the early stage of a binary system, are studied in a projection formalism. In this setting the four-dimensional Einstein equations are equivalent to a three-dimensional gravitational theory with a SL(2,R)/SO(1,1) sigma model as the material source. The sigma model is determined by a complex Ernst equation. 2+1 decompositions of the three-metric are used to establish the field equations on the orbit space of the Killing vector. The two Killing horizons of spherical topology which characterize the black holes, the cylinder of light where the Killing vector changes from timelike to spacelike, and infinity are singular points of the equations. The horizon and the light cylinder are shown to be regular singularities, i.e., the metric functions can be expanded in a formal power series in their vicinity. The behavior of the metric at spatial infinity is studied in terms of formal series solutions to the linearized Einstein equations. It is shown that the spacetime is not asymptotically flat in the strong sense of having a smooth null infinity, under the assumption that the metric tends asymptotically to the Minkowski metric. In this case the metric functions have an oscillatory behavior in the radial coordinate in a nonaxisymmetric setting, and the asymptotic multipoles are not defined. The asymptotic behavior of the Weyl tensor near infinity shows that there is no smooth null infinity.
Insight on the proof of orientifold planar equivalence on the lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patella, Agostino
2006-08-01
In a recent paper, Armoni, Shifman, and Veneziano (ASV) gave a formal nonperturbative proof of planar equivalence between the bosonic sectors of SU(N) super Yang-Mills theory and of a gauge theory with a massless quark in the two-index antisymmetric representation. In the case of three colors, the latter theory is nothing but one-flavor QCD. I will give a lattice version of the ASV proof of orientifold planar equivalence. It will be clear that it holds only in the strong-coupling and large-mass phase. Therefore, numerical simulations are necessary to test the validity of orientifold planar equivalence in a physical region of the bare parameters on the lattice and to estimate the size of 1/N corrections.
Master of Public Health | Cancer Prevention Fellowship Program
One of the unique features of the CPFP is the opportunity to receive formal, academic training in public health. By pursuing an MPH or equivalent degree, fellows learn about the current role and historical context of cancer prevention in public health.
How Formal Dynamic Verification Tools Facilitate Novel Concurrency Visualizations
NASA Astrophysics Data System (ADS)
Aananthakrishnan, Sriram; Delisi, Michael; Vakkalanka, Sarvani; Vo, Anh; Gopalakrishnan, Ganesh; Kirby, Robert M.; Thakur, Rajeev
With the exploding scale of concurrency, presenting valuable pieces of information collected by formal verification tools intuitively and graphically can greatly enhance concurrent system debugging. Traditional MPI program debuggers present trace views of MPI program executions. Such views are redundant, often containing equivalent traces that permute independent MPI calls. In our ISP formal dynamic verifier for MPI programs, we present a collection of alternate views made possible by the use of formal dynamic verification. Some of ISP's views help pinpoint errors, some facilitate discerning errors by eliminating redundancy, while others help understand the program better by displaying concurrent event orderings that must be respected by all MPI implementations, in the form of completes-before graphs. In this paper, we describe ISP's graphical user interface (GUI) capabilities in all these areas, which are currently supported by a portable Java-based GUI, a Microsoft Visual Studio GUI, and an Eclipse-based GUI whose development is in progress.
Palatini versus metric formulation in higher-curvature gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borunda, Monica; Janssen, Bert; Bastero-Gil, Mar, E-mail: mborunda@ugr.es, E-mail: bjanssen@ugr.es, E-mail: mbg@ugr.es
2008-11-15
We compare the metric and the Palatini formalism to obtain the Einstein equations in the presence of higher-order curvature corrections that consist of contractions of the Riemann tensor, but not of its derivatives. We find that there is a class of theories for which the two formalisms are equivalent. This class contains the Palatini version of Lovelock theory, but also more Lagrangians that are not Lovelock, but respect certain symmetries. For the general case, we find that imposing the Levi-Civita connection as an ansatz, the Palatini formalism is contained within the metric formalism, in the sense that any solution of the former also appears as a solution of the latter, but not necessarily the other way around. Finally we give the conditions the solutions of the metric equations should satisfy in order to solve the Palatini equations.
A Formalism for Covariant Polarized Radiative Transport by Ray Tracing
NASA Astrophysics Data System (ADS)
Gammie, Charles F.; Leung, Po Kin
2012-06-01
We write down a covariant formalism for polarized radiative transfer appropriate for ray tracing through a turbulent plasma. The polarized radiation field is represented by the polarization tensor (coherency matrix) N^{αβ} ≡ ⟨a^α_k a^{*β}_k⟩, where a_k is a Fourier coefficient for the vector potential. Using Maxwell's equations, the Liouville-Vlasov equation, and the WKB approximation, we show that the transport equation in vacuo is k^μ ∇_μ N^{αβ} = 0. We show that this is equivalent to Broderick & Blandford's formalism based on invariant Stokes parameters and a rotation coefficient, and suggest a modification that may reduce truncation error in some situations. Finally, we write down several alternative approaches to integrating the transfer equation.
Noncommutative gauge theories and Kontsevich's formality theorem
NASA Astrophysics Data System (ADS)
Jurčo, B.; Schupp, P.; Wess, J.
2001-09-01
The equivalence of star products that arise from the background field with and without fluctuations, together with Kontsevich's formality theorem, allows an explicit construction of a map that relates ordinary gauge theory and noncommutative gauge theory (the Seiberg-Witten map). Using noncommutative extra dimensions, the construction is extended to noncommutative nonabelian gauge theory for arbitrary gauge groups; as a byproduct we obtain a "Mini Seiberg-Witten map" that explicitly relates ordinary abelian and nonabelian gauge fields. All constructions are also valid for a non-constant B-field, and even more generally for any Poisson tensor.
Sundar, Bhuvanesh; Hamilton, Alasdair C; Courtial, Johannes
2009-02-01
We derive a formal description of local light-ray rotation in terms of complex refractive indices. We show that Fermat's principle holds, and we derive an extended Snell's law. The change in the angle of a light ray with respect to the normal of a refractive index interface is described by the modulus of the refractive index ratio; the rotation around the interface normal is described by the argument of the refractive index ratio.
An astronomer's guide to period searching
NASA Astrophysics Data System (ADS)
Schwarzenberg-Czerny, A.
2003-03-01
We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content we divide the issues into several sections, consisting of: (ii) statistical and numerical aspects of model fitting, (iii) evaluation of fitted models as hypothesis testing, (iv) the role of orthogonal models in signal detection, (v) conditions for the equivalence of periodograms, and (vi) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
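The fitting-based view of period searching described above can be made concrete with a least-squares periodogram: at each trial period, fit a truncated Fourier series and score the reduction in residual variance. This is an illustrative sketch only; the trial-period grid, harmonic count, and toy data are assumptions, not the paper's recipe.

```python
import numpy as np

def chi2_periodogram(t, y, periods, nharm=2):
    """Least-squares power: fit a truncated Fourier series at each trial period.

    Power is the fractional reduction in residual sum of squares relative
    to a constant model, a classical Fisher-style test statistic.
    """
    y = y - y.mean()
    rss0 = np.sum(y**2)
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        ph = 2 * np.pi * t / p
        cols = [np.ones_like(t)]
        for k in range(1, nharm + 1):
            cols += [np.cos(k * ph), np.sin(k * ph)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power[i] = 1.0 - np.sum(resid**2) / rss0
    return power

# Unevenly sampled toy data with a 2.5-day period
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 300))
y = np.sin(2 * np.pi * t / 2.5) + 0.3 * rng.standard_normal(t.size)
periods = np.linspace(1.5, 4.0, 2000)
best = periods[np.argmax(chi2_periodogram(t, y, periods))]
```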
Twisting perturbed parafermions
NASA Astrophysics Data System (ADS)
Belitsky, A. V.
2017-07-01
The near-collinear expansion of scattering amplitudes in maximally supersymmetric Yang-Mills theory at strong coupling is governed by the dynamics of strings propagating on the five-sphere. The pentagon transitions in the operator product expansion which systematize the series get reformulated in terms of matrix elements of branch-point twist operators in the two-dimensional O(6) nonlinear sigma model. The facts that the latter is an asymptotically free field theory and that there exists no local realization of twist fields prevent one from explicitly calculating their scaling dimensions and operator product expansion coefficients. This complication is bypassed by making use of the equivalence of the sigma model to the infinite-level limit of WZNW models perturbed by current-current interactions, such that one can use conformal symmetry and conformal perturbation theory for systematic calculations. Presently, to set up the formalism, we consider the O(3) sigma model, which is reformulated as perturbed parafermions.
Localized excitations in hydrogen-bonded molecular crystals
NASA Astrophysics Data System (ADS)
Alexander, D. M.; Krumhansl, J. A.
1986-05-01
Localized excitations analogous to the small Holstein polaron, to localized modes in alkali halides, and to localized excitonic states, are postulated for a set of internal vibrational modes in crystalline acetanilide. The theoretical framework in which one can describe the characteristics of the ir and Raman spectroscopy peaks associated with these localized states is adequately provided by the Davydov model (formally equivalent to the Holstein polaron model). The possible low-lying excitations arising from this model are determined using a variational approach. Hence, the contribution to the spectral function due to each type of excitation can be calculated. The internal modes of chief concern here are the amide-I (CO stretch) and the N-H stretch modes for which we demonstrate consistency of the theoretical model with the available ir data. Past theoretical approaches will be discussed and reasons why one should prefer one description over another will be examined.
Unitarity check in gravitational Higgs mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berezhiani, Lasha; Mirbabayi, Mehrdad
2011-03-15
The effective field theory of massive gravity has long been formulated in a generally covariant way [N. Arkani-Hamed, H. Georgi, and M. D. Schwartz, Ann. Phys. (N.Y.) 305, 96 (2003).]. Using this formalism, it has been found recently that there exists a class of massive nonlinear theories that are free of the Boulware-Deser ghosts, at least in the decoupling limit [C. de Rham and G. Gabadadze, Phys. Rev. D 82, 044020 (2010).]. In this work we study other recently proposed models that go under the name of 'gravitational Higgs theories' [A. H. Chamseddine and V. Mukhanov, J. High Energy Phys. 08 (2010) 011.]. We show that these models, although seemingly different from the effective field theories of massive gravity, are in fact equivalent to them. Furthermore, based on the results obtained in the effective field theory approach, we conclude that the gravitational Higgs theories need the same adjustment of the Lagrangian to avoid the ghosts. We also show the equivalence between the noncovariant mode decomposition used in the Higgs theories, and the covariant Stueckelberg parametrization adopted in the effective field theories, thus proving that the presence or absence of the ghost is independent of the parametrization used in either theory.
Stochastic dark energy from inflationary quantum fluctuations
NASA Astrophysics Data System (ADS)
Glavan, Dražen; Prokopec, Tomislav; Starobinsky, Alexei A.
2018-05-01
We study the quantum backreaction from inflationary fluctuations of a very light, non-minimally coupled spectator scalar and show that it is a viable candidate for dark energy. The problem is solved by suitably adapting the formalism of stochastic inflation. This allows us to self-consistently account for the backreaction on the background expansion rate of the Universe where its effects are large. This framework is equivalent to that of semiclassical gravity in which matter vacuum fluctuations are included at the one loop level, but purely quantum gravitational fluctuations are neglected. Our results show that dark energy in our model can be characterized by a distinct effective equation of state parameter (as a function of redshift) which allows for testing of the model at the level of the background.
Thermal averages in a quantum point contact with a single coherent wave packet.
Heller, E J; Aidala, K E; LeRoy, B J; Bleszynski, A C; Kalben, A; Westervelt, R M; Maranowski, K D; Gossard, A C
2005-07-01
A novel formal equivalence between thermal averages of coherent properties (e.g., conductance) and time averages of a single wave packet arises for Fermi gases and certain geometries. In the case of one open channel in a quantum point contact (QPC), only one wave packet history, with the wave packet width equal to the thermal length, completely determines the thermally averaged conductance. The formal equivalence moreover allows very simple physical interpretations of interference features surviving under thermal averaging. Simply put, pieces of the thermal wave packet returning to the QPC along independent paths must arrive at the same time in order to interfere. Remarkably, one immediate result of this approach is that higher temperature leads to narrower wave packets and therefore better resolution of events in the time domain. In effect, experiments at 4.2 K are performing time-gated experiments at better than a gigahertz. Experiments involving thermally averaged ballistic conductance in 2DEGs are presented as an application of this picture.
2011-01-01
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
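A minimal sketch of the recasting idea, assuming a single saturable (Michaelis-Menten-type) rate: introducing the auxiliary variable z = Km + S turns the rate into a pure power-law (GMA) product, and the recast system reproduces the original trajectory. All parameter values are invented for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Original saturable model: dS/dt = inflow - Vmax*S/(Km + S)
Vmax, Km, inflow = 2.0, 0.5, 1.0

def sc_model(t, y):
    (s,) = y
    return [inflow - Vmax * s / (Km + s)]

# Recast GMA form: the auxiliary variable z = Km + S turns the saturable
# rate into the power-law product Vmax * S^1 * z^(-1); z inherits dz/dt = dS/dt.
def gma_model(t, y):
    s, z = y
    rate = Vmax * s * z**-1.0      # pure power-law (GMA) kinetic term
    ds = inflow - rate
    return [ds, ds]

t_eval = np.linspace(0, 10, 200)
opts = dict(t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_sc = solve_ivp(sc_model, (0, 10), [0.1], **opts)
sol_gma = solve_ivp(gma_model, (0, 10), [0.1, Km + 0.1], **opts)
assert np.allclose(sol_sc.y[0], sol_gma.y[0], atol=1e-6)
```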
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision-free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
NASA Astrophysics Data System (ADS)
Fitch, W. Tecumseh
2014-09-01
Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally, it requires some neural equivalent of a push-down stack. I dub this unusual human propensity "dendrophilia", and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems
2016-01-01
Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of “ODEs and formalized flow diagrams” as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler’s behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features. PMID:27270918
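A minimal sketch of the diagram-to-ODE rule described above: each edge draws a parcel from its source species and deposits it into its target at a state-dependent rate, and the right-hand side is assembled by summing over edges. The SIR example and rate constants are illustrative assumptions, not part of the OFFL paper.

```python
from scipy.integrate import solve_ivp

def make_ode(species, flows):
    """Translate a formalized flow diagram into an ODE right-hand side.

    flows: list of (source, target, rate_fn) edges; a None endpoint stands
    for a flow entering from or leaving to outside the modeled system.
    """
    def rhs(t, y):
        state = dict(zip(species, y))
        dydt = dict.fromkeys(species, 0.0)
        for src, dst, rate in flows:
            r = rate(state)
            if src is not None:
                dydt[src] -= r      # parcel drawn from the source species
            if dst is not None:
                dydt[dst] += r      # parcel deposited into the target
        return [dydt[s] for s in species]
    return rhs

# Example: SIR epidemic written as two edges (beta, gamma are assumed rates)
beta, gamma = 0.3, 0.1
species = ["S", "I", "R"]
flows = [("S", "I", lambda x: beta * x["S"] * x["I"]),   # infection
         ("I", "R", lambda x: gamma * x["I"])]           # recovery
sol = solve_ivp(make_ode(species, flows), (0, 160), [0.99, 0.01, 0.0])
```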
Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss.
Walsh, Matthew M; Gunzelmann, Glenn; Van Dongen, Hans P A
2017-12-01
Computational models have become common tools in psychology. They provide quantitative instantiations of theories that seek to explain the functioning of the human mind. In this paper, we focus on identifying deep theoretical similarities between two very different models. Both models are concerned with how fatigue from sleep loss impacts cognitive processing. The first is based on the diffusion model and posits that fatigue decreases the drift rate of the diffusion process. The second is based on the Adaptive Control of Thought - Rational (ACT-R) cognitive architecture and posits that fatigue decreases the utility of candidate actions leading to microlapses in cognitive processing. A biomathematical model of fatigue is used to control drift rate in the first account and utility in the second. We investigated the predicted response time distributions of these two integrated computational cognitive models for performance on a psychomotor vigilance test under conditions of total sleep deprivation, simulated shift work, and sustained sleep restriction. The models generated equivalent predictions of response time distributions with excellent goodness-of-fit to the human data. More importantly, although the accounts involve different modeling approaches and levels of abstraction, they represent the effects of fatigue in a functionally equivalent way: in both, fatigue decreases the signal-to-noise ratio in decision processes and decreases response inhibition. This convergence suggests that sleep loss impairs psychomotor vigilance performance through degradation of the quality of cognitive processing, which provides a foundation for systematic investigation of the effects of sleep loss on other aspects of cognition. Our findings illustrate the value of treating different modeling formalisms as vehicles for discovery.
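The drift-rate account can be illustrated with a direct simulation of first-passage times in a two-boundary diffusion process; lowering the drift parameter mimics fatigue. Parameter values are arbitrary, and the sketch is not either published model.

```python
import numpy as np

def diffusion_rts(drift, threshold=1.0, noise=1.0, dt=1e-3, n=2000, seed=0):
    """First-passage times of a symmetric two-boundary drift-diffusion process."""
    rng = np.random.default_rng(seed)
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t
    return rts

rested = diffusion_rts(drift=2.0)     # alert: high drift rate
fatigued = diffusion_rts(drift=0.8)   # fatigued: reduced drift rate
# A lower drift rate lengthens and right-skews the RT distribution,
# the degradation both fatigue accounts capture in equivalent ways.
print(np.median(rested), np.median(fatigued))
```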
Code of Federal Regulations, 2010 CFR
2010-07-01
... will consider a sample obtained using any of the applicable sampling methods specified in appendix I to... appendix I sampling methods are not being formally adopted by the Administrator, a person who desires to employ an alternative sampling method is not required to demonstrate the equivalency of his method under...
Fienup, Daniel M; Critchfield, Thomas S
2010-01-01
Computerized lessons that reflect stimulus equivalence principles were used to teach college students concepts related to inferential statistics and hypothesis decision making. Lesson 1 taught participants concepts related to inferential statistics, and Lesson 2 taught them to base hypothesis decisions on a scientific hypothesis and the direction of an effect. Lesson 3 taught the conditional influence of inferential statistics over decisions regarding the scientific and null hypotheses. Participants entered the study with low scores on the targeted skills and left the study demonstrating a high level of accuracy on these skills, which involved mastering more relations than were taught formally. This study illustrates the efficiency of equivalence-based instruction in establishing academic skills in sophisticated learners. PMID:21358904
Relationship between the nonlinear ferroelectric and liquid crystal models for microtubules
NASA Astrophysics Data System (ADS)
Satarić, M. V.; Tuszyński, J. A.
2003-01-01
Microtubules (MTs), which are the main components of the cytoskeleton, are important in a variety of cellular activities, but some physical properties underlying the most important features of their behavior still lack a satisfactory explanation. One of the essential enigmas regarding the energy balance in MTs is the hydrolysis of the exchangeable guanosine 5'-triphosphate bound to the β monomer of the molecule. The energy released in the hydrolysis process amounts to 6.25×10⁻²⁰ J and has been the subject of many attempts to answer the question of its utilization. Earlier, we put forward a hypothesis that this energy can cause a local conformational distortion of the dimer. This distortion should have a nonlinear character and could lead to the formation of a traveling kink soliton. In this paper we use the formalism of liquid crystal theory to consider the nonlinear dynamics of MTs. We demonstrate that this new model is formally equivalent to our earlier ferroelectric model, which was widely exploited in an attempt to elucidate some important dynamical activities in MTs. We also study the stability of kink solitons against small perturbations and their unusual mutual interactions, as well as their interactions with structural inhomogeneities of MTs. Our new approach based on the liquid crystal properties of microtubules has recently been corroborated by new insights gained from the electrostatic properties of tubulin and microtubules.
Studies 1. The Yugoslav Serbo-Croatian-English Contrastive Project.
ERIC Educational Resources Information Center
Filipovic, Rudolf, Ed.
The first volume in this series on Serbo-Croatian-English contrastive analysis contains four articles. They are: "Contrasting via Translation: Formal Correspondence vs. Translation Equivalence," by Vladimir Ivir; "Approach to Contrastive Analysis," by Leonardo Spalatin; and "The Choice of the Corpus for the Contrastive Analysis of Serbo-Croatian…
Theory of quark mixing matrix and invariant functions of mass matrices
NASA Astrophysics Data System (ADS)
Jarloskog, C.
1987-10-01
The following topics are discussed: the origin of the quark mixing matrix; a super elementary theory of flavor projection operators; equivalences and invariances; the commutator formalism and CP violation; CP conditions for any number of families; the angle between the quark mass matrices; and the application to the Fritzsch and Stech mass matrices.
34 CFR 668.148 - Additional criteria for the approval of certain tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... mean score and standard deviation for Spanish-speaking students with high school diplomas who have... speaking individuals who speak a language other than Spanish and who have a high school diploma. The sample... age of compulsory school attendance who completed U.S. high school equivalency programs, formal...
34 CFR 668.148 - Additional criteria for the approval of certain tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... mean score and standard deviation for Spanish-speaking students with high school diplomas who have... speaking individuals who speak a language other than Spanish and who have a high school diploma. The sample... age of compulsory school attendance who completed U.S. high school equivalency programs, formal...
34 CFR 668.148 - Additional criteria for the approval of certain tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... mean score and standard deviation for Spanish-speaking students with high school diplomas who have... speaking individuals who speak a language other than Spanish and who have a high school diploma. The sample... age of compulsory school attendance who completed U.S. high school equivalency programs, formal...
34 CFR 668.148 - Additional criteria for the approval of certain tests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... mean score and standard deviation for Spanish-speaking students with high school diplomas who have... speaking individuals who speak a language other than Spanish and who have a high school diploma. The sample... age of compulsory school attendance who completed U.S. high school equivalency programs, formal...
The Voronoi spatio-temporal data structure
NASA Astrophysics Data System (ADS)
Mioc, Darka
2002-04-01
Current GIS models cannot easily integrate the temporal dimension of spatial data. Indeed, current GISs do not support incremental (local) addition and deletion of spatial objects, and they cannot support the temporal evolution of spatial data. Spatio-temporal facilities would be very useful in many GIS applications: harvesting and forest planning, cadastre, urban and regional planning, and emergency planning. The spatio-temporal model that can overcome these problems is based on a topological model: the Voronoi data structure. Voronoi diagrams are irregular tessellations of space that adapt to spatial objects, and they are therefore a synthesis of the raster and vector spatial data models. The main advantage of the Voronoi data structure is its local and sequential map updates, which allow us to automatically record each event and map update performed within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for the addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define complex operations. This resulted in a new formal model for spatio-temporal change representation, where each update is uniquely characterized by the numbers of newly created and inactivated Voronoi regions. This is used for the extension of the model towards the hierarchical Voronoi data structure. In this model, spatio-temporal changes induced by map updates are preserved in a hierarchical data structure that combines events and the corresponding changes in topology. This hierarchical Voronoi data structure has an implicit time ordering of events visible through changes in topology, and it is equivalent to an event structure that can support temporal data without precise temporal information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and the visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.
Linear network representation of multistate models of transport.
Sandblom, J; Ring, A; Eisenman, G
1982-01-01
By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. With such a formalism the state diagram of a multioccupancy, multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425
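The algebraic convenience of a network representation can be conveyed by solving a small multistate model's steady state with linear algebra; the three-state cycle and its rates below are invented for the example and are not from the paper.

```python
import numpy as np

# States of a toy 3-state carrier cycle; k[i, j] is the rate i -> j.
# An external driving force enters as an asymmetry between forward and
# backward rates, playing the role of an emf in the equivalent network.
k = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])

# Master-equation generator M with dp/dt = M p, solved with sum(p) = 1.
M = k.T - np.diag(k.sum(axis=1))
A = np.vstack([M, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Net steady-state flux through one edge of the cycle:
J = p[0] * k[0, 1] - p[1] * k[1, 0]
```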
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
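A sketch of the central computation, assuming a two-state model with a sinusoidally modulated rate: the scaled cumulant generating function is obtained as the maximal Floquet exponent of the tilted generator, i.e., the logarithm of the dominant eigenvalue of the one-period propagator. The rate protocol is an invented example.

```python
import numpy as np
from scipy.linalg import expm

def scgf(s, T=1.0, nsteps=100):
    """SCGF of the integrated current in a two-state time-periodic model,
    via the maximal Floquet exponent of the tilted generator."""
    dt = T / nsteps
    U = np.eye(2)
    for n in range(nsteps):
        t = (n + 0.5) * dt
        kp = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # rate 0 -> 1 (assumed)
        km = 1.0                                      # rate 1 -> 0 (assumed)
        # Tilted generator: jumps 0->1 count +1, jumps 1->0 count -1.
        L = np.array([[-kp, km * np.exp(-s)],
                      [kp * np.exp(s), -km]])
        U = expm(L * dt) @ U
    # SCGF = log of the dominant eigenvalue of the one-period propagator, / T
    return np.log(np.max(np.abs(np.linalg.eigvals(U)))) / T

# scgf(0) vanishes; the derivative at s = 0 gives the mean current per period.
print(scgf(0.0), (scgf(1e-4) - scgf(-1e-4)) / 2e-4)
```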
Community-level cohesion without cooperation.
Tikhonov, Mikhail
2016-06-16
Recent work draws attention to community-community encounters ('coalescence') as likely an important factor shaping natural ecosystems. This work builds on MacArthur's classic model of competitive coexistence to investigate such community-level competition in a minimal theoretical setting. It is shown that the ability of a species to survive a coalescence event is best predicted by a community-level 'fitness' of its native community rather than the intrinsic performance of the species itself. The model presented here allows formalizing a macroscopic perspective whereby a community harboring organisms at varying abundances becomes equivalent to a single organism expressing genes at different levels. While most natural communities do not satisfy the strict criteria of multicellularity developed by multi-level selection theory, the effective cohesion described here is a generic consequence of resource partitioning, requires no cooperative interactions, and can be expected to be widespread in microbial ecosystems.
European Train Control System: A Case Study in Formal Verification
NASA Astrophysics Data System (ADS)
Platzer, André; Quesel, Jan-David
Complex physical systems have several degrees of freedom. They only work correctly when their control parameters obey corresponding constraints. Based on the informal specification of the European Train Control System (ETCS), we design a controller for its cooperation protocol. For its free parameters, we successively identify constraints that are required to ensure collision freedom. We formally prove the parameter constraints to be sharp by characterizing them equivalently in terms of reachability properties of the hybrid system dynamics. Using our deductive verification tool KeYmaera, we formally verify controllability, safety, liveness, and reactivity properties of the ETCS protocol that entail collision freedom. We prove that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics. We verify that safety is preserved when a PI controlled speed supervision is used.
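The flavor of the derived parameter constraints can be conveyed by the classical braking-distance bound; this is a simplified stand-in for illustration, not the verified KeYmaera model, and the margin terms of the real protocol are omitted.

```python
# A train at speed v with maximum braking deceleration b may keep moving
# only while its distance d to the end of its movement authority satisfies
# d > v^2 / (2*b); otherwise the controller must brake.
def may_accelerate(v: float, d: float, b: float) -> bool:
    return d > v * v / (2.0 * b)

assert may_accelerate(v=30.0, d=500.0, b=1.0)       # 450 m needed to stop
assert not may_accelerate(v=30.0, d=400.0, b=1.0)   # too close: must brake
```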
Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe
NASA Astrophysics Data System (ADS)
Essén, Hanno
2014-08-01
Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth rank tensor interaction, contain long range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner it is found that these long range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher order additions to the Lagrangians make the formalism consistent with the equivalence principle.
NASA Astrophysics Data System (ADS)
Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em
2017-01-01
Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
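The auxiliary-variable construction (as in the LE-AUX model) can be illustrated for a single exponential memory mode: one Ornstein-Uhlenbeck auxiliary variable coupled to the momentum reproduces a GLE with kernel K(t) = c² exp(−t/τ) and fluctuation-dissipation-consistent noise. The kernel form, parameter values, and integrator are assumptions of this sketch, not the paper's star-polymer construction.

```python
import numpy as np

# Markovian embedding of a GLE with one exponential memory mode.
m, c, tau, kT, dt, nsteps = 1.0, 1.0, 0.5, 1.0, 1e-3, 100000
rng = np.random.default_rng(1)

v, z = 0.0, 0.0
vs = np.empty(nsteps)
for n in range(nsteps):
    # Coupled Markovian system (Euler-Maruyama, for illustration only):
    # the auxiliary variable z feeds back on the momentum, while its own
    # OU dynamics carries the memory and the FDT-consistent noise.
    v += (c * z / m) * dt
    z += (-z / tau - c * v) * dt \
         + np.sqrt(2.0 * kT / tau) * np.sqrt(dt) * rng.standard_normal()
    vs[n] = v

# The velocity autocorrelation of vs decays non-exponentially, the
# hallmark of the underlying non-Markovian dynamics.
```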
Learning in Context: Preparing Latino Workers for Careers and Continuing Education
ERIC Educational Resources Information Center
Moore, Elizabeth; Oppenheim, Emma
2010-01-01
Adult education services, including education for those lacking basic literacy and numeracy, preparation for the high school equivalency diploma, and English-as-a-second-language courses, play a crucial role in bridging the basic skills gap for Latinos and other workers with limited formal education and training. With recent policy and program…
Procedures for Decomposing a Redox Reaction into Half-Reaction
ERIC Educational Resources Information Center
Fishtik, Ilie; Berka, Ladislav H.
2005-01-01
A simple algorithm for a complete enumeration of the possible ways a redox reaction (RR) might be uniquely decomposed into half-reactions (HRs) using the response reactions (RERs) formalism is presented. A complete enumeration of the possible ways a RR may be decomposed into HRs is equivalent to a complete enumeration of stoichiometrically…
Hamilton-Jacobi theory in multisymplectic classical field theories
NASA Astrophysics Data System (ADS)
de León, Manuel; Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso; Vilariño, Silvia
2017-09-01
The geometric framework for the Hamilton-Jacobi theory developed in the studies of Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 3(7), 1417-1458 (2006)], Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 13(2), 1650017 (2015)], and de León et al. [Variations, Geometry and Physics (Nova Science Publishers, New York, 2009)] is extended for multisymplectic first-order classical field theories. The Hamilton-Jacobi problem is stated for the Lagrangian and the Hamiltonian formalisms of these theories as a particular case of a more general problem, and the classical Hamilton-Jacobi equation for field theories is recovered from this geometrical setting. Particular and complete solutions to these problems are defined and characterized in several equivalent ways in both formalisms, and the equivalence between them is proved. The use of distributions in jet bundles that represent the solutions to the field equations is the fundamental tool in this formulation. Some examples are analyzed and, in particular, the Hamilton-Jacobi equation for non-autonomous mechanical systems is obtained as a special case of our results.
Herskind, Carsten; Griebel, Jürgen; Kraus-Tiefenbacher, Uta; Wenz, Frederik
2008-12-01
Accelerated partial breast radiotherapy with low-energy photons from a miniature X-ray machine is undergoing a randomized clinical trial (Targeted Intra-operative Radiation Therapy [TARGIT]) in a selected subgroup of patients treated with breast-conserving surgery. The steep radial dose gradient implies reduced tumor cell control with increasing depth in the tumor bed. The purpose was to compare the expected risk of local recurrence in this nonuniform radiation field with that after conventional external beam radiotherapy. The relative biologic effectiveness of low-energy photons was modeled using the linear-quadratic formalism including repair of sublethal lesions during protracted irradiation. Doses of 50-kV X-rays (Intrabeam) were converted to equivalent fractionated doses, EQD2, as function of depth in the tumor bed. The probability of local control was estimated using a logistic dose-response relationship fitted to clinical data from fractionated radiotherapy. The model calculations show that, for a cohort of patients, the increase in local control in the high-dose region near the applicator partly compensates the reduction of local control at greater distances. Thus a "sphere of equivalence" exists within which the risk of recurrence is equal to that after external fractionated radiotherapy. The spatial distribution of recurrences inside this sphere will be different from that after conventional radiotherapy. A novel target volume concept is presented here. The incidence of recurrences arising in the tumor bed around the excised tumor will test the validity of this concept and the efficacy of the treatment. Recurrences elsewhere will have implications for the rationale of TARGIT.
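The depth dependence that motivates the "sphere of equivalence" can be sketched by converting a steep single-fraction dose profile to EQD2 with the basic LQ relation; repair during protracted irradiation, which the full model includes, is omitted, and the falloff exponent and parameter values are purely illustrative.

```python
import numpy as np

alpha_beta = 10.0        # Gy, assumed tumor alpha/beta ratio
d_surf, r0 = 20.0, 1.0   # Gy at the applicator surface of radius r0 (cm)

depth = np.array([0.0, 0.5, 1.0, 1.5])          # cm into the tumor bed
dose = d_surf * (r0 / (r0 + depth)) ** 3        # assumed radial falloff

# Single-fraction LQ conversion: EQD2 = D * (D + a/b) / (2 + a/b)
eqd2 = dose * (dose + alpha_beta) / (2.0 + alpha_beta)
for z, d, e in zip(depth, dose, eqd2):
    # High EQD2 near the applicator offsets the rapid fall at depth.
    print(f"depth {z:.1f} cm: dose {d:5.1f} Gy, EQD2 {e:5.1f} Gy")
```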
Phase space quantum mechanics - Direct
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasiri, S.; Sobouti, Y.; Taati, F.
2006-09-15
The conventional approach to quantum mechanics in phase space (q,p) is to take the operator-based quantum mechanics of Schroedinger, or an equivalent, and assign a c-number function in phase space to it. We propose to begin with a higher level of abstraction, in which the independence and the symmetric roles of q and p are maintained throughout, and at once arrive at phase space state functions. Upon reduction to the q- or p-space the proposed formalism gives the conventional quantum mechanics, however, with a definite rule for ordering of factors of noncommuting observables. Further conceptual and practical merits of the formalism are demonstrated throughout the text.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, F; Chofor, N; Schoenfeld, A
2016-06-15
Purpose: In the steep dose gradients in the vicinity of a radiation source and due to the properties of the changing photon spectra, dose measurements in brachytherapy usually have large uncertainties. Working group DIN 6803-3 is presently discussing recommendations for practical brachytherapy dosimetry, incorporating recent theoretical developments in the description of brachytherapy radiation fields as well as new detectors and phantom materials. The goal is to prepare methods and instruments to verify dose calculation algorithms and for clinical dose verification with reduced uncertainties. Methods: After analysis of the distance-dependent spectral changes of the radiation field surrounding brachytherapy sources, the energy-dependent response of typical brachytherapy detectors was examined with Monte Carlo simulations. A dosimetric formalism was developed allowing the correction of their energy dependence as a function of source distance for a Co-60 calibrated detector. Water-equivalent phantom materials were examined with Monte Carlo calculations for their influence on brachytherapy photon spectra and for their water equivalence in terms of generating equivalent distributions of photon spectra and absorbed dose to water. Results: The energy dependence of a detector in the vicinity of a brachytherapy source can be described by defining an energy correction factor kQ for brachytherapy in the same manner as in existing dosimetry protocols, incorporating volume averaging and radiation field distortion by the detector. Solid phantom materials were identified which allow precise positioning of a detector together with small, correctable deviations from absorbed dose to water. Recommendations for the selection of detectors and phantom materials are being developed for different measurements in brachytherapy. Conclusion: The introduction of kQ for brachytherapy sources may allow more systematic and comparable dose measurements. In principle, the corrections can be verified or even determined by measurement in a water phantom and comparison with dose distributions calculated using the TG-43 dosimetry formalism. The project is supported by DIN Deutsches Institut fuer Normung.
Augmented superfield approach to gauge-invariant massive 2-form theory
NASA Astrophysics Data System (ADS)
Kumar, R.; Krishna, S.
2017-06-01
We discuss the complete sets of the off-shell nilpotent (i.e. s^2_{(a)b} = 0) and absolutely anticommuting (i.e. s_b s_{ab} + s_{ab} s_b = 0) Becchi-Rouet-Stora-Tyutin (BRST) (s_b) and anti-BRST (s_{ab}) symmetries for the (3+1)-dimensional (4D) gauge-invariant massive 2-form theory within the framework of an augmented superfield approach to the BRST formalism. In this formalism, we obtain the coupled (but equivalent) Lagrangian densities which respect both BRST and anti-BRST symmetries on the constrained hypersurface defined by the Curci-Ferrari type conditions. The absolute anticommutativity property of the (anti-) BRST transformations (and corresponding generators) is ensured by the existence of the Curci-Ferrari type conditions which emerge very naturally in this formalism. Furthermore, the gauge-invariant restriction plays a decisive role in deriving the proper (anti-) BRST transformations for the Stückelberg-like vector field.
On the relationship between kinetic and fluid formalisms for convection in the inner magnetosphere
NASA Astrophysics Data System (ADS)
Song, Yang; Sazykin, Stanislav; Wolf, Richard A.
2008-08-01
In the inner magnetosphere, the plasma flows are mostly slow compared to thermal or Alfvén speeds, but the convection is far from the ideal magnetohydrodynamic regime since the gradient/curvature drifts become significant. Both kinetic (Wolf, 1983) and two-fluid (Peymirat and Fontaine, 1994; Heinemann, 1999) formalisms have been used to describe the plasma dynamics, but it is not fully understood how they relate to each other. We explore the relations among kinetic, fluid, and the recently developed "average" (Liu, 2006) models in an attempt to find the simplest yet realistic way to describe the convection. First, we prove analytically that the model of Liu (2006), when closed with the assumption of a Maxwellian distribution, is equivalent to the fluid model of Heinemann (1999). Second, we analyze the transport of both one-dimensional and two-dimensional Gaussian-shaped blobs of hot plasma. For the kinetic case, it is known that the time evolution of such a blob is a gradual spreading in time. For the fluid case, Heinemann and Wolf (2001a, 2001b) showed that in a one-dimensional idealized case the blob separates into two blobs drifting at different speeds. We present a fully nonlinear solution of this case, confirming this behavior but demonstrating what appears to be a shocklike steepening of the faster-drifting secondary blob. A new, more realistic two-dimensional example using the dipole geometry with a uniform electric field confirms the one-dimensional solutions. Implications for numerical simulations of magnetospheric dynamics are discussed.
Simulating Thin Sheets: Buckling, Wrinkling, Folding and Growth
NASA Astrophysics Data System (ADS)
Vetter, Roman; Stoop, Norbert; Wittel, Falk K.; Herrmann, Hans J.
2014-03-01
Numerical simulations of thin sheets undergoing large deformations are computationally challenging. Depending on the scenario, they may spontaneously buckle, wrinkle, fold, or crumple. Nature's thin tissues often experience significant anisotropic growth, which can act as the driving force for such instabilities. We use a recently developed finite element model to simulate the rich variety of nonlinear responses of Kirchhoff-Love sheets. The model uses subdivision surface shape functions in order to guarantee convergence of the method, and to allow a finite element description of anisotropically growing sheets in the classical Rayleigh-Ritz formalism. We illustrate the great potential in this approach by simulating the inflation of airbags, the buckling of a stretched cylinder, as well as the formation and scaling of wrinkles at free boundaries of growing sheets. Finally, we compare the folding of spatially confined sheets subject to growth and shrinking confinement to find that the two processes are equivalent.
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
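For orientation, the maximum entropy construction referred to here takes the standard form (our notation)

\[ p^*(x) = \frac{1}{Z}\, p_0(x)\, e^{-\sum_k \lambda_k f_k(x)}, \qquad Z = \int p_0(x)\, e^{-\sum_k \lambda_k f_k(x)}\, dx, \]

where p_0 is the unbiased (force-field) distribution and the multipliers \( \lambda_k \) must be adjusted, typically iteratively, so that the averages \( \langle f_k \rangle \) match the experimental data; the paper's result is that restrained-ensemble simulations, which instead apply harmonic restraints to replica-averaged observables, reproduce this distribution in the appropriate limit.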
Convergence of discrete Aubry–Mather model in the continuous limit
NASA Astrophysics Data System (ADS)
Su, Xifeng; Thieullen, Philippe
2018-05-01
We develop two approximation schemes for solving the cell equation and the discounted cell equation using Aubry–Mather–Fathi theory. The Hamiltonian is supposed to be Tonelli, time-independent and periodic in space. By the Legendre transform, this is equivalent to finding a fixed point of a nonlinear operator, called the Lax–Oleinik operator, which may be discounted or not. By discretizing in time, we are led to solve an additive eigenvalue problem involving a discrete Lax–Oleinik operator. We show how to approximate the effective Hamiltonian and some weak KAM solutions by letting the time step in the discrete model tend to zero. We also obtain a selected discrete weak KAM solution as in Davini et al (2016 Invent. Math. 206 29–55), and show that it converges to a particular solution of the cell equation. In order to unify the two settings, continuous and discrete, we develop a more general formalism of short-range interactions.
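As a rough numerical illustration of the discounted scheme described above, the sketch below iterates a discounted discrete Lax–Oleinik operator for the mechanical Hamiltonian H(x, p) = p²/2 + cos(2πx) on the torus (grid size, time step, discount rate, and sign conventions are our own illustrative choices, not the paper's):

    import numpy as np

    # Discounted discrete Lax-Oleinik iteration for H(x,p) = p^2/2 + cos(2*pi*x),
    # i.e. Lagrangian L(x,v) = v^2/2 - cos(2*pi*x), on a periodic grid.
    N, dt, delta = 200, 0.05, 0.05
    x = np.linspace(0.0, 1.0, N, endpoint=False)

    # signed periodic displacement d[i, j] = x_i - x_j on the torus
    d = (x[:, None] - x[None, :] + 0.5) % 1.0 - 0.5
    # one-step cost of moving from y = x_j to x_i at constant speed
    cost = dt * (0.5 * (d / dt) ** 2 - np.cos(2 * np.pi * x)[None, :])

    u = np.zeros(N)
    for _ in range(20000):
        u_new = np.min(np.exp(-delta * dt) * u[None, :] + cost, axis=1)
        if np.max(np.abs(u_new - u)) < 1e-10:
            break
        u = u_new

    # For the discounted cell equation, delta * u_delta -> -c as delta -> 0,
    # where c is the critical value; here c = max V = 1.
    print("estimated critical value:", -delta * u.min())

For this potential the estimate approaches the known critical value (the effective Hamiltonian at zero momentum, max V = 1) as the discount rate and the discretization steps are refined.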
Poincaré-MacMillan Equations of Motion for a Nonlinear Nonholonomic Dynamical System
NASA Astrophysics Data System (ADS)
Amjad, Hussain; Syed Tauseef, Mohyud-Din; Ahmet, Yildirim
2012-03-01
MacMillan's equations are extended to Poincaré's formalism, and MacMillan's equations for nonlinear nonholonomic systems are obtained in terms of Poincaré parameters. The equivalence of the results obtained here with other forms of equations of motion is demonstrated. An illustrative example of the theory is provided as well.
Using Rule-Based Computer Programming to Unify Communication Rules Research.
ERIC Educational Resources Information Center
Sanford, David L.; Roach, J. W.
This paper proposes the use of a rule-based computer programming language as a standard for the expression of rules, arguing that the adoption of a standard would enable researchers to communicate about rules in a consistent and significant way. Focusing on the formal equivalence of artificial intelligence (AI) programming to different types of…
Aspects of AdS/CFT: Conformal Deformations and the Goldstone Equivalence Theorem
NASA Astrophysics Data System (ADS)
Cantrell, Sean Andrew
The AdS/CFT correspondence provides a map from the states of theories situated in AdSd+1 to those in dual conformal theories in a d-dimensional space. The correspondence can be used to establish certain universal properties of some theories in one space by examining the behavior of general objects in the other. In this thesis, we develop various formal aspects of AdS/CFT. Conformal deformations manifest in the AdS/CFT correspondence as boundary conditions on the AdS field. Heretofore, double-trace deformations have been the primary focus in this context. To better understand multitrace deformations, we revisit the relationship between the generating AdS partition function for a free bulk theory and the boundary CFT partition function subject to arbitrary conformal deformations. The procedure leads us to a formalism that constructs bulk fields from boundary operators. We independently replicate the holographic RG flow narrative and go on to interpret the brane used to regulate the AdS theory as a renormalization scale. The scale dependence of the dilatation spectrum of a boundary theory in the presence of general deformations can thus be understood on the AdS side using this formalism. The Goldstone equivalence theorem allows one to relate scattering amplitudes of massive gauge fields to those of scalar fields in the limit of large scattering energies. We generalize this theorem under the framework of the AdS/CFT correspondence. First, we obtain an expression of the equivalence theorem in terms of correlation functions of creation and annihilation operators by using an AdS wave function approach to the AdS/CFT dictionary. It is shown that the divergence of the non-conserved conformal current dual to the bulk gauge field is approximately primary when computing correlators for theories in which the masses of all the exchanged particles are sufficiently large. The results are then generalized to higher spin fields. We then go on to generalize the theorem using conformal blocks in two- and four-dimensional CFTs. We show that when the scaling dimensions of the exchanged operators are large compared to both their spins and the dimension of the current, the conformal blocks satisfy an equivalence theorem.
Very narrow band model calculations of atmospheric fluxes and cooling rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, L.S.; Berk, A.; Acharya, P.K.
1996-10-15
A new very narrow band model (VNBM) approach has been developed and incorporated into the MODTRAN atmospheric transmittance-radiance code. The VNBM includes a computational spectral resolution of 1 cm⁻¹, a single-line Voigt equivalent width formalism that is based on the Rodgers-Williams approximation and accounts for the finite spectral width of the interval, explicit consideration of line tails, a statistical line overlap correction, a new sublayer integration approach that treats the effect of the sublayer temperature gradient on the path radiance, and the Curtis-Godson (CG) approximation for inhomogeneous paths. A modified procedure for determining the line density parameter 1/d is introduced, which reduces its magnitude. This results in a partial correction of the VNBM tendency to overestimate the interval equivalent widths. The standard two-parameter CG approximation is used for H₂O and CO₂, while the Goody three-parameter CG approximation is used for O₃. Atmospheric flux and cooling rate predictions using a research version of MODTRAN, MODR, are presented for H₂O (with and without the continuum), CO₂, and O₃ for several model atmospheres. The effect of doubling the CO₂ concentration is also considered. These calculations are compared to line-by-line (LBL) model calculations using the AER, GLA, GFDL, and GISS codes. The MODR predictions fall within the spread of the LBL results. The effects of decreasing the band model spectral resolution are illustrated using CO₂ cooling rate and flux calculations. 36 refs., 18 figs., 1 tab.
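For context, the single-line equivalent width that such band models are built on has the familiar limits (standard results, not the specific Rodgers-Williams interpolation used in the paper)

\[ W \approx S\,u \quad \text{(weak line)}, \qquad W \approx 2\sqrt{S\,u\,\alpha_L} \quad \text{(strong Lorentz line)}, \]

where S is the line strength, u the absorber amount, and \( \alpha_L \) the Lorentz half-width; the Rodgers-Williams approximation bridges these limits for a Voigt profile, and the VNBM further restricts W to the finite 1 cm⁻¹ interval and corrects for line tails and overlap.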
Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.
2009-12-01
Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
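The Gaussian equivalence is easy to check numerically. The sketch below (our own minimal example, not the authors' code) simulates a bivariate VAR(1), estimates Granger causality F from Y to X by regression, and estimates transfer entropy from conditional variances of the empirical covariance matrix; for Gaussian variables the two satisfy F = 2T:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 100_000
    a, b, c = 0.5, 0.7, 0.2
    X, Y = np.zeros(T), np.zeros(T)
    for t in range(1, T):  # bivariate VAR(1) with a Y -> X coupling
        Y[t] = b * Y[t - 1] + rng.standard_normal()
        X[t] = a * X[t - 1] + c * Y[t - 1] + rng.standard_normal()

    Xt, X1, Y1 = X[1:], X[:-1], Y[:-1]

    def resid_var(target, *regressors):  # residual variance of a least-squares fit
        A = np.column_stack(regressors)
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)

    # Granger causality Y -> X: log ratio of restricted to full residual variance
    F = np.log(resid_var(Xt, X1) / resid_var(Xt, X1, Y1))

    # Gaussian transfer entropy from conditional variances (Schur complements)
    S = np.cov(np.column_stack([Xt, X1, Y1]).T)
    def cond_var(S, i, J):
        return S[i, i] - S[i, J] @ np.linalg.solve(S[np.ix_(J, J)], S[J, i])
    TE = 0.5 * np.log(cond_var(S, 0, [1]) / cond_var(S, 0, [1, 2]))

    print(F, 2 * TE)  # numerically equal, illustrating F = 2 * TE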
NASA Technical Reports Server (NTRS)
Tsuge, S.; Sagara, K.
1978-01-01
The indeterminacy inherent to the formal extension of Arrhenius' law to reactions in turbulent flows is shown to be surmountable in the case of a binary exchange reaction with a sufficiently high activation energy. A preliminary calculation predicts that the turbulent reaction rate is invariant in the Arrhenius form except for an equivalently lowered activation energy. This is a reflection of turbulence-augmented molecular vigor, and causes an appreciable increase in the reaction rate. A similarity to the tunnel effect in quantum mechanics is indicated. The anomaly associated with the mild ignition of oxy-hydrogen mixtures is discussed in this light.
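The "equivalently lowered activation energy" can be motivated by a simple cumulant estimate (a textbook-style illustration assuming small Gaussian temperature fluctuations, not the authors' kinetic derivation):

\[ \langle k \rangle = A \left\langle e^{-E_a/RT} \right\rangle \approx A\, e^{-E_a/R\bar T} \exp\!\left( \frac{E_a^2 \langle T'^2 \rangle}{2 R^2 \bar T^4} \right) = A\, e^{-E_{\mathrm{eff}}/R\bar T}, \qquad E_{\mathrm{eff}} = E_a\!\left( 1 - \frac{E_a}{2R\bar T}\,\frac{\langle T'^2 \rangle}{\bar T^2} \right), \]

so for sufficiently high activation energies even modest temperature fluctuations appreciably increase the mean rate, in line with the abstract's "turbulence-augmented molecular vigor".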
Formally verifying Ada programs which use real number types
NASA Technical Reports Server (NTRS)
Sutherland, David
1986-01-01
Formal verification is applied to programs which use real number arithmetic operations (mathematical programs). Formal verification of a program P consists of creating a mathematical model of P, stating the desired properties of P in a formal logical language, and proving that the mathematical model has the desired properties using a formal proof calculus. The development and verification of the mathematical model are discussed.
Semi-automated software service integration in virtual organisations
NASA Astrophysics Data System (ADS)
Afsarmanesh, Hamideh; Sargolzaei, Mahdi; Shadi, Mahdieh
2015-08-01
To enhance their business opportunities, organisations involved in many service industries are increasingly active in the pursuit of both online provision of their business services (BSs) and collaboration with others. Collaborative Networks (CNs) in the service industry sector, however, face many challenges related to sharing and integration of their collections of provided BSs and the corresponding software services. The topic of service interoperability, for which this article introduces a framework, is therefore gaining momentum in research on supporting CNs. The framework contributes to the generation of formal, machine-readable specifications for business processes, aimed at providing their unambiguous definitions, as needed for developing their equivalent software services. It provides a model and implementation architecture for discovery and composition of shared services, to support the semi-automated development of integrated value-added services. In support of service discovery, a main contribution of this research is the formal representation of services' behaviour and the application of desired service behaviour specified by users for automated matchmaking with other existing services. Furthermore, to support service integration, mechanisms are developed for automated selection of the most suitable service(s) according to a number of service quality aspects. Two scenario cases are presented, which exemplify several specific features related to service discovery and service integration.
A Belief-based Trust Model for Dynamic Service Selection
NASA Astrophysics Data System (ADS)
Ali, Ali Shaikh; Rana, Omer F.
Provision of services across institutional boundaries has become an active research area. Many such services encode access to computational and data resources (comprising single machines to computational clusters). Such services can also be informational, and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers can intelligently decide between which services to select. If interaction between service providers and users is automated, it is necessary for these service clients to be able to automatically choose between a set of equivalent (or similar) services. In such a scenario trust serves as a benchmark to differentiate between service providers. One might therefore prioritize potential cooperative partners based on the established trust. Although many approaches to trust between online communities exist in the literature, the exact nature of trust for multi-institutional service sharing remains undefined. Therefore, the concept of trust suffers from an imperfect understanding, a plethora of definitions, and informal use in the literature. We present a formalism for describing trust within multi-institutional service sharing, and provide an implementation of it, enabling an agent to make trust-based decisions. We evaluate our formalism through simulation.
Two space scatterer formalism calculation of bulk parameters of thunderclouds
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.
1994-01-01
In a previous study, we used a modified two-space scatterer formalism of Twersky to establish, for a cloud modeled as a statistically homogeneous distribution of spherical water droplets, the dispersion relations that determine its bulk propagation numbers and bulk indexes of refraction in terms of the vector equivalent scattering amplitude and the dyadic scattering amplitude of the single water droplet in isolation. The results were specialized to the forward direction of scattering while demanding that the scatterers preserve the incident polarization. We apply this approach to obtain specific numerical values for the macroscopic parameters of the cloud. We work with a cloud of density ρ = 100 cm⁻³, a wavelength λ = 0.7774 microns, and spherical water droplets of common radius α = 10 microns. In addition, the scattering medium is divided into three parts: the medium outside the cloud, moist air (the medium inside the cloud but outside the droplets), and the medium inside the spherical water droplets. The results of this report are applicable to a cloud of any geometry, since the boundary does not interfere with the calculations. It is also important to note the plane-wave nature of the incident wave in the moist atmosphere.
The Value of Information for Populations in Varying Environments
NASA Astrophysics Data System (ADS)
Rivoire, Olivier; Leibler, Stanislas
2011-04-01
The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
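The financial bound alluded to here is the classical side-information result for log-optimal growth (Kelly betting, in the form given by Cover and Thomas; notation ours):

\[ \Delta\Lambda = \Lambda_{\text{with cue}} - \Lambda_{\text{without cue}} \;\le\; I(E; C), \]

where \( \Lambda \) is the long-run growth rate of the population (or portfolio), E the environmental state, and C the cue available to individuals; the paper's point is that individual-level stochasticity, irrelevant in finance, can break this bound.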
NASA Astrophysics Data System (ADS)
Nielsen, N. K.; Quaade, U. J.
1995-07-01
The physical phase space of the relativistic top, as defined by Hansson and Regge, is expressed in terms of canonical coordinates of the Poincaré group manifold. The system is described in the Hamiltonian formalism by the mass-shell condition and constraints that reduce the number of spin degrees of freedom. The constraints are second class and are modified into a set of first class constraints by adding combinations of gauge-fixing functions. The Batalin-Fradkin-Vilkovisky method is then applied to quantize the system in the path integral formalism in Hamiltonian form. It is finally shown that different gauge choices produce different equivalent forms of the constraints.
Providing solid angle formalism for skyshine calculations.
Gossman, Michael S; Pahikkala, A Jussi; Rising, Mary B; McGinley, Patton H
2010-08-17
We detail, derive and correct the technical use of the solid angle variable identified in formal guidance that relates skyshine calculations to dose-equivalent rate. We further recommend it for use with all National Council on Radiation Protection and Measurements (NCRP), Institute of Physics and Engineering in Medicine (IPEM) and similar reports. In general, for beams of identical width that subtend different areas, the analytical pyramidal solution is 1.27 times greater than a misapplied analytical conical solution for all field sizes up to 40 × 40 cm², within a maximum deviation of ± 1.0%. We therefore recommend determining exact results with the analytical pyramidal solution for square beams and the analytical conical solution for circular beams.
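The factor of 1.27 is consistent with elementary geometry. Using the standard exact expressions (the identification with the paper's beam configurations is ours),

\[ \Omega_{\text{cone}} = 2\pi (1 - \cos\theta), \qquad \Omega_{\text{pyramid}} = 4 \arcsin(\sin\alpha\,\sin\beta), \]

a square field of side s and a circular field of diameter s at distance d ≫ s subtend solid angles in the ratio of their areas,

\[ \frac{\Omega_{\text{pyramid}}}{\Omega_{\text{cone}}} \approx \frac{s^2/d^2}{\pi s^2/4d^2} = \frac{4}{\pi} \approx 1.27. \]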
A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model
NASA Astrophysics Data System (ADS)
Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna
2018-05-01
In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017 Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each one of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the level of readership of the article is aimed at advanced undergraduates.
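A minimal sketch of the kind of bookkeeping the toy model permits, assuming each subsystem consists of independent two-state units (the spin analogy mentioned in the abstract; parameters are illustrative):

    import numpy as np
    from math import comb

    M = 50                               # number of two-state units
    E = np.arange(M + 1)                 # energy = number of excited units (bounded above)
    omega = np.array([comb(M, e) for e in E], dtype=float)

    S_B = np.log(omega)                  # Boltzmann ("surface") entropy
    S_G = np.log(np.cumsum(omega))       # Gibbs ("volume") entropy

    # finite-difference inverse temperatures, in units with k_B = 1, quantum = 1
    with np.errstate(divide="ignore"):
        T_B = 1.0 / np.gradient(S_B, E.astype(float))
        T_G = 1.0 / np.gradient(S_G, E.astype(float))
    # T_B changes sign at E = M/2 because the spectrum is bounded above,
    # while T_G remains positive throughout.

The temperatures here are crude centered-difference estimates; the qualitative contrast between the Boltzmann and Gibbs temperatures is the point.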
Formal Methods in Air Traffic Management: The Case of Unmanned Aircraft Systems
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.
2015-01-01
As the technological and operational capabilities of unmanned aircraft systems (UAS) continue to grow, so too does the need to introduce these systems into civil airspace. Unmanned Aircraft Systems Integration in the National Airspace System is a NASA research project that addresses the integration of civil UAS into non-segregated airspace operations. One of the major challenges of this integration is the lack of an onboard pilot to comply with the legal requirement that pilots see and avoid other aircraft. The need to provide an equivalent to this requirement for UAS has motivated the development of a detect and avoid (DAA) capability to provide the appropriate situational awareness and maneuver guidance in avoiding and remaining well clear of traffic aircraft. Formal methods have played a fundamental role in the development of this capability. This talk reports on the formal methods work conducted under NASA's Safe Autonomous System Operations project in support of the development of DAA for UAS. This work includes specification of low-level and high-level functional requirements, formal verification of algorithms, and rigorous validation of software implementations. The talk also discusses technical challenges in formal methods research in the context of the development and safety analysis of advanced air traffic management concepts.
Constraining the physical state by symmetries
NASA Astrophysics Data System (ADS)
Fatibene, L.; Ferraris, M.; Magnano, G.
2017-03-01
After reviewing the hole argument and its relations with the initial value problem and general covariance, we shall discuss how much freedom one has to define the physical state in a generally covariant field theory (with or without internal gauge symmetries). Our analysis relies on Cauchy problems, thus it is restricted to globally hyperbolic spacetimes. We shall show that in generally covariant theories on a compact space (as well as for internal gauge symmetries on any spacetime) one has no freedom and one is forced to declare as physically equivalent two configurations which differ by a global spacetime diffeomorphism (or by an internal gauge transformation), as is usually prescribed. On the contrary, when space is not compact, the result does not hold true and one may have different options to define physically equivalent configurations, still preserving determinism. For this scenario to be effective, the group G of formal transformations needs to be a subgroup of dynamical symmetries (otherwise field equations, which are written in terms of configurations, would not induce equations for the physical state classes) and it must contain the group D→ generated by Cauchy transformations (otherwise the equations induced on physical state classes would not be well posed, either). We argue that it is exactly because of this double inclusion that the hole argument in its initial value problem formulation is more powerful than in its boundary formulation. In the boundary formulation of the hole argument one still has that the group G of formal transformations is a subgroup of dynamical symmetries, but there is no evidence for it to contain a particular non-trivial subgroup. In this paper we shall show that this scenario is exactly implemented in generally covariant theories. In the last section we shall show it to be implemented in gauge theories as well.

Norton also argued (see [1]) that the definition of physical state is something to be discussed in physics and is not something which can be settled by a purely mathematical argument. This position is certainly plausible and agreeable. However, we shall here argue that some constraints on the definition of physical state can in fact be put on a mathematical stance (the ones which go back to Einstein and Hilbert, about the well-posedness of Cauchy problems). A physical state is hence defined as an equivalence class of configurations for which dynamics is well posed, i.e. its evolution is deterministically singled out by initial conditions. It also defines what the physical observables are, i.e., by definition, the quantities which depend on the equivalence classes, but not on the specific representative configurations. Equivalently, physical observables are defined as quantities which are invariant with respect to formal transformations.

A detailed analysis of these issues shows an unexpected structure of cases which is not yet clarified in general. What is clear is that assuming, as usually done, that the physical state of a generally covariant theory is to be identified with equivalence classes of configurations modulo spacetime diffeomorphisms is a fair assumption, yet a choice which is sometimes forced by mathematics (in particular by determinism in the form of the Cauchy theorem on globally hyperbolic spacetimes with a compact space) but sometimes is one of many possible choices which, in those cases, we agree should be addressed from a physical stance. We shall argue that sometimes one can find subclasses of diffeomorphisms (i.e. the group generated by Cauchy-compatible transformations, below denoted by D→) which play a distinctive role in the discussion and which, to the best of our knowledge, have not properly been taken into account in standard frameworks.

Let us start, for the sake of simplicity, by restricting to generally covariant theories. Gauge theories will be briefly discussed in the conclusions, since most of what we shall do easily applies to those cases as well; see [20] for the general framework and notation. In a generally covariant theory one has a huge group of symmetries S containing the (lift to the configuration bundle of the) spacetime diffeomorphisms. The group of spacetime diffeomorphisms will be denoted by Diff(M). In particular, the subgroup of spacetime diffeomorphisms which can be connected by a flow to the identity id_M will be denoted by Diff_e(M). Any element Φ ∈ Diff_e(M) can be obtained by evaluating a 1-parameter subgroup Φ_s at s = 1, i.e. Φ = Φ_1. The 1-parameter subgroup Φ_s is also called a flow of diffeomorphisms. The standard attitude is to assume that in a generally covariant theory any two configurations of fields differing by any spacetime diffeomorphism represent the same physical state. In other words, if σ is a section of the configuration bundle and Φ*σ = σ′ is its image through a diffeomorphism (in Diff_e(M) or in Diff(M), depending on the case), then both σ and σ′ represent the same physical state of the system.

Let us call formal transformations the group G of transformations which fix the physical state or, equivalently, define the physical states as the orbits of the group G. As a matter of fact, defining the group of formal transformations is equivalent to defining the physical state: either one defines the physical state as the orbits of the action of formal transformations, or one defines formal transformations as the transformations acting on configurations by mapping one representative of the physical state into another representative of the same physical state, i.e. fixing the physical states. In order for this to make sense one needs a formal transformation Φ to be a symmetry of the system (as it is in generally covariant theories), since if σ is a solution, then σ′ must of course be a solution as well. In other words, any formal transformation (acting on configurations but leaving the physical state unchanged) must be a symmetry, and the symmetry group is an upper bound to the group of formal transformations, i.e. one must have G ⊂ S. We shall show that there is a lower bound (which will be denoted by D→ and which is generated by Cauchy transformations) for the group G of formal transformations as well, i.e. one must have D→ ⊂ G ⊂ S.

We shall argue that when D→ ⊊ S = Diff_e(M) one has a certain freedom in setting G between its lower and upper bounds. In these cases one has different options to set the group D→ ⊂ G ⊂ S, and each different assumption about what G is in fact defines a different theory with the same dynamics but a different interpretation of what the physical state is and what can in principle be observed; see [21]. We shall also discuss topological conditions on M for which this freedom is nullified and one is forced to set G = Diff_e(M), as usually done in the literature. On the other hand, we can discuss the motion of particles in spacetime on a physical stance and show that it is reasonable to assume that the physical state is described by worldline trajectories, parameterisations being irrelevant. The two viewpoints come (quite independently) to the same conclusion, which is a good thing. We shall also show a counterexample, a globally hyperbolic spacetime M = R × R with a non-compact space Σ ≡ R, in which the situation is different from the compact-space case. As a consequence, the usual assumption of identifying configurations which differ by a diffeomorphism is a legitimate though in general unmotivated choice. When describing a system one should be aware of which assumptions come from mathematical constraints and which assumptions are made on a physical stance. When setting up a generally covariant theory one should first study whether the group D→ is a strict subgroup of Diff_e(M). If it is, one should characterise the possible subgroups D→ ⊂ G ⊂ S. Then one should declare which one of such groups G is elected as the group of formal transformations. Different choices lead to different theories with an equivalent dynamics but different observables.
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and the generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent, but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of existing Lagrangian methods.
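The reversal rule being tested can be stated compactly: reverse the advective velocity and keep the diffusion step unchanged, since diffusion is the self-adjoint part. A minimal particle-tracking sketch under uniform flow, where the forward and backward travel-time distributions should coincide (an illustrative 1D setting of our own, not the study's 2D domains):

    import numpy as np

    rng = np.random.default_rng(1)
    D, dt, v, L, n = 0.1, 0.01, 1.0, 10.0, 20_000

    def travel_times(sign):
        """Random-walk particle tracking from x = 0 to a plane at distance L."""
        t, x = np.zeros(n), np.zeros(n)
        alive = np.ones(n, dtype=bool)
        while alive.any():
            step = sign * v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
            x[alive] += step
            t[alive] += dt
            alive &= (sign * x) < L
        return t

    tf = travel_times(+1)   # forward in time
    tb = travel_times(-1)   # backward: velocity reversed, diffusion unchanged
    print(tf.mean(), tb.mean())  # distributions coincide for uniform flow

In heterogeneous velocity fields, as the abstract notes, this correspondence is only approximate, and the nonlinear reaction step breaks it except under uniform flow.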
ERIC Educational Resources Information Center
Guttman, Cynthia
Developed in the early 1980s, the Hill Areas Education project provides basic education to children and adults of Thailand's six ethnic minority groups, who live in the remote mountainous region of northern Thailand. The project delivers a locally relevant curriculum, equivalent to the six compulsory grades of the formal education system; promotes…
Master of Public Health | Cancer Prevention Fellowship Program
One of the unique features of the CPFP is the opportunity to receive formal, academic training in public health. By pursuing an MPH or equivalent degree, fellows learn about the current role and historical context of cancer prevention in public health. The MPH provides individuals with a strong foundation in the population health sciences, particularly the quantitative sciences of epidemiology and biostatistics.
NASA Astrophysics Data System (ADS)
Bhattacharya, Krishnakanta; Das, Ashmita; Majhi, Bibhas Ranjan
2018-06-01
We revisit the thermodynamic aspects of the scalar-tensor theory of gravity in the Jordan and in the Einstein frame. Examining the missing links of this theory carefully, we establish the thermodynamic descriptions from the conserved currents and potentials by following both the Noether and the Abbott-Deser-Tekin (ADT) formalisms. With the help of the conserved Noether current and potential, we define the thermodynamic quantities, which we show to be conformally invariant. Moreover, the defined quantities are shown to fit nicely into the laws of (the first and the second) black hole thermodynamics formulated by Wald's method. We extend the study of the conformal equivalence of the physical quantities in these two frames by following the ADT formalism. Our further study reveals that there is a connection between the ADT and the Noether conserved quantities, which signifies that the ADT approach provides an equivalent thermodynamic description in the two frames, as obtained in the Noether prescription. Our whole analysis is very general, as the conserved Noether and ADT currents and potentials are formulated off-shell and the analysis is free of any prior assumption or boundary condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, M., E-mail: mcaruso@ugr.es; Fanchiotti, H.; Canal, C.A. Garcia
An equivalence between the Schroedinger dynamics of a quantum system with a finite number of basis states and a classical dynamics is presented. The equivalence is an isomorphism that connects in a univocal way both dynamical systems. We treat the particular case of neutral kaons and find a class of electric networks uniquely related to the kaon system, finding the complete map between the matrix elements of the effective Hamiltonian of kaons and those elements of the classical dynamics of the networks. As a consequence, the relevant ε parameter that measures CP violation in the kaon system is completely determined in terms of network parameters. Highlights: We provide a formal equivalence between classical and quantum dynamics. We make use of the decomplexification concept. Neutral kaon systems can be represented by electric circuits. CP symmetry violation can be taken into account by non-reciprocity. Non-reciprocity is represented by gyrators.
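The decomplexification mentioned in the highlights is the standard realification of a complex linear flow (generic form in our notation; the kaon effective Hamiltonian is non-Hermitian, but the mapping is identical):

\[ i\dot\psi = H\psi, \quad \psi = u + iv, \quad H = H_R + iH_I \;\Longrightarrow\; \frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} H_I & H_R \\ -H_R & H_I \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}, \]

a real linear system of twice the dimension, which is what a network of circuit elements (including, per the highlights, non-reciprocal gyrators) can realize.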
Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.
Natarajan, Shreedhar; Jakobsson, Eric
2009-06-12
A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identifications. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mappings between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.
Intrinsic character of Stokes matrices
NASA Astrophysics Data System (ADS)
Gagnon, Jean-François; Rousseau, Christiane
2017-02-01
Two germs of linear analytic differential systems x^(k+1) Y′ = A(x) Y with a non-resonant irregular singularity are analytically equivalent if and only if they have the same eigenvalues and equivalent collections of Stokes matrices. The Stokes matrices are the transition matrices between sectors on which the system is analytically equivalent to its formal normal form. Each sector contains exactly one separating ray for each pair of eigenvalues. A rotation in x allows supposing that R⁺ lies in the intersection of two sectors. Reordering of the coordinates of Y allows ordering the real parts of the eigenvalues, thus yielding triangular Stokes matrices. However, the choice of the rotation in x is not canonical. In this paper we establish how the collection of Stokes matrices depends on this rotation, and hence on a chosen order of the projection of the eigenvalues on a line through the origin.
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The present results indicate the following: we develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes; we show that processes at distance zero are bisimilar; we describe a decision procedure to compute the distance between two processes; we show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance; and we introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.
Factorization of standard model cross sections at ultrahigh energy
NASA Astrophysics Data System (ADS)
Chien, Yang-Ting; Li, Hsiang-nan
2018-03-01
The factorization theorem for organizing multiple electroweak boson emissions at future colliders with energy far above the electroweak scale is formulated. Taking the inclusive muon-pair production in electron-positron collisions as an example, we argue that the summation over isospins is demanded for constructing the universal distributions of leptons and gauge bosons in an electron. These parton distributions are shown to have the same infrared structure in the phases of broken and unbroken electroweak symmetry, an observation consistent with the Goldstone equivalence theorem. The electroweak factorization of processes involving protons is sketched, with an emphasis on the subtlety of the scalar distributions. This formalism, in which electroweak shower effects are handled from the viewpoint of the factorization theorem for the first time, is an adequate framework for collider physics at ultrahigh energy.
Solving ordinary differential equations by electrical analogy: a multidisciplinary teaching tool
NASA Astrophysics Data System (ADS)
Sanchez Perez, J. F.; Conesa, M.; Alhama, I.
2016-11-01
Ordinary differential equations are the mathematical formulation for a great variety of problems in science and engineering, and frequently two different problems are equivalent from a mathematical point of view when they are formulated by the same equations. Students acquire the knowledge of how to solve these equations (at least some types of them) using protocols and strict algorithms of mathematical calculation, without thinking about the meaning of the equation. The aim of this work is for students to learn to design network models or circuits: with simple knowledge of them, students can establish the association between electric circuits and differential equations and their equivalences, from a formal point of view, which allows them to connect knowledge of two disciplines and promotes the use of this interdisciplinary approach to address complex problems. They thereby learn to use a multidisciplinary tool that allows them to solve these kinds of equations, even in the first year of an engineering degree, whatever the order, grade or type of non-linearity. This methodology has been implemented in numerous final degree projects in engineering and science, e.g., chemical engineering, building engineering, industrial engineering, mechanical engineering, architecture, etc. Applications are presented to illustrate the subject of this manuscript.
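A minimal sketch of the analogy being taught, for a damped driven oscillator: m x″ + c x′ + k x = F(t) maps onto the series RLC equation L q″ + R q′ + q/C = V(t) under m↔L, c↔R, k↔1/C, F↔V (parameter values and forcing below are our own illustration):

    import numpy as np
    from scipy.integrate import solve_ivp

    # mechanical problem: m x'' + c x' + k x = F(t)
    m, c, k = 1.0, 0.5, 4.0
    F = lambda t: np.sin(2.0 * t)

    # electrical analogue: L q'' + R q' + q / C = V(t)
    L, R, Cinv = m, c, k        # L <-> m, R <-> c, 1/C <-> k, V <-> F

    def rhs(t, y):
        q, i = y                # charge ~ displacement, current ~ velocity
        return [i, (F(t) - R * i - Cinv * q) / L]

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], dense_output=True)
    print(sol.sol(np.linspace(0.0, 20.0, 5))[0])   # q(t), i.e. x(t), at sample times

The same network, built or simulated, solves either problem, which is the pedagogical point of the electrical analogy.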
Relationship between the work of teachers in nonformal settings and in schools
NASA Astrophysics Data System (ADS)
Yoloye, E. Ayotunde
1987-09-01
Formal and nonformal education differ more in strategies and administration than in content. This however is not to say that there are not some important distinctions in the nature of content of what is usually done under formal education especially when we are dealing with particular professions and vocations. Recent efforts to integrate formal and nonformal education especially since publication of the report of the International Commission on the Development of Education have highlighted certain challenges in defining the role of the teacher especially in the nonformal sector and in the operation of an integrated system. Examples of efforts at integration may be found in the community schools in many Eastern African countries, the mosque schools or `maktabs' in Pakistan, the N.A.E.P. in India, the Vocational Skills Improvement unit in Nigeria and the various extramural and extension programmes of tertiary institutions. Among the major implications for a new orientation of teachers are (1) the issue of mobility of individuals between the formal and nonformal systems, (2) the issue of integrating the administration of formal and nonformal education, (3) the issue of appropriate strategies for teacher training and (4) the issue of creating new cadres of teachers besides those currently trained in conventional teachers' colleges. Among the embedded challenges is that of evolving new assessment procedures and establishment of equivalences between practical experience and formal academic instruction. The educational system as a whole still has a considerable way to go in meeting these challenges.
2015-11-01
Various other modeling frameworks, such as I/O Automata, Kahn Process Networks, Petri-nets, and Multi-dimensional SDF, are also used for designing such systems. [Flattened comparison table; recoverable entries: a graphical, formal framework described as ideally suited to modeling DSP applications; Petri Nets (graphical, formal), used for modeling distributed systems; I/O Automata (formal).]
Valuing Precaution in Climate Change Policy Analysis (Invited)
NASA Astrophysics Data System (ADS)
Howarth, R. B.
2010-12-01
The U.N. Framework Convention on Climate Change calls for stabilizing greenhouse gas concentrations to prevent “dangerous anthropogenic interference” (DAI) with the global environment. This treaty language emphasizes a precautionary approach to climate change policy in a setting characterized by substantial uncertainty regarding the timing, magnitude, and impacts of climate change. In the economics of climate change, however, analysts often work with deterministic models that assign best-guess values to parameters that are highly uncertain. Such models support a “policy ramp” approach in which only limited steps should be taken to reduce the future growth of greenhouse gas emissions. This presentation will explore how uncertainties related to (a) climate sensitivity and (b) climate-change damages can be satisfactorily addressed in a coupled model of climate-economy dynamics. In this model, capping greenhouse gas concentrations at ~450 ppm of carbon dioxide equivalent provides substantial net benefits by reducing the risk of low-probability, catastrophic impacts. This result formalizes the intuition embodied in the DAI criterion in a manner consistent with rational decision-making under uncertainty.
MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Bernard, D; Templeton, A
2015-06-15
Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue. CNR = 4.1 and CNR = 29.1 for the 10 MU and 5000 MU images (equivalent to 0.5 and 250 second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive approach, employing multiple simulation techniques and experiments, is taken to demonstrate the feasibility of a novel scatter imaging modality for the necessary real-time image guidance.
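For reference, the contrast-to-noise ratio quoted in the results is conventionally defined as (standard imaging definition; the identification of regions is ours)

\[ \mathrm{CNR} = \frac{\left| \bar S_{\text{tumor}} - \bar S_{\text{lung}} \right|}{\sigma_{\text{lung}}}, \]

where \( \bar S \) are mean pixel values in the tumor and surrounding-lung regions and \( \sigma \) is the background noise; longer acquisitions (more MU) accumulate more scattered photons, reducing relative noise and raising the CNR, consistent with the quoted 10 MU versus 5000 MU images.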
Structural model for fluctuations in financial markets
NASA Astrophysics Data System (ADS)
Anand, Kartik; Khedair, Jonathan; Kühn, Reimer
2018-05-01
In this paper we provide a comprehensive analysis of a structural model for the dynamics of prices of assets traded in a market which takes the form of an interacting generalization of the geometric Brownian motion model. It is formally equivalent to a model describing the stochastic dynamics of a system of analog neurons, which is expected to exhibit glassy properties and thus many metastable states in a large portion of its parameter space. We perform a generating functional analysis, introducing a slow driving of the dynamics to mimic the effect of slowly varying macroeconomic conditions. Distributions of asset returns over various time separations are evaluated analytically and are found to be fat-tailed in a manner broadly in line with empirical observations. Our model also allows us to identify collective, interaction-mediated properties of pricing distributions and it predicts pricing distributions which are significantly broader than their noninteracting counterparts, if interactions between prices in the model contain a ferromagnetic bias. Using simulations, we are able to substantiate one of the main hypotheses underlying the original modeling, viz., that the phenomenon of volatility clustering can be rationalized in terms of an interplay between the dynamics within metastable states and the dynamics of occasional transitions between them.
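A minimal simulation sketch of an interacting generalization of geometric Brownian motion of the type described (the coupling through a sigmoidal response, its scale, and all parameters are our own guesses for illustration, not the authors' specification):

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(2)
    N, T, dt = 100, 5000, 0.01
    mu, sigma, g = 0.0, 0.2, 0.3
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random couplings between log prices
    np.fill_diagonal(J, 0.0)

    u = np.zeros(N)              # u_i = log price of asset i
    rets = []
    for _ in range(T):
        drift = mu + J @ np.tanh(u)          # interaction-mediated drift
        du = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        u += du
        rets.append(du.copy())
    rets = np.asarray(rets)

    # kurtosis of pooled returns (Gaussian value is 3; larger values indicate fat tails)
    print(kurtosis(rets.ravel(), fisher=False))

Adding a uniform positive shift to J plays the role of the ferromagnetic bias discussed in the abstract.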
Super-Quantum Mechanics in the Integral Form Formalism
NASA Astrophysics Data System (ADS)
Castellani, L.; Catenacci, R.; Grassi, P. A.
2018-05-01
We reformulate Super Quantum Mechanics in the context of integral forms. This framework allows one to interpolate between different actions for the same theory, connected by different choices of Picture Changing Operators (PCO). In this way we retrieve component and superspace actions, and prove their equivalence. The PCO are closed integral forms, and can be interpreted as super Poincaré duals of bosonic submanifolds embedded into a supermanifold. We use them to construct Lagrangians that are top integral forms, and therefore can be integrated on the whole supermanifold. The D = 1, N = 1 and the D = 1, N = 2 cases are studied, in a flat and in a curved supermanifold. In this formalism we also consider coupling with gauge fields, the Hilbert space of quantum states, and observables.
Simple formalism for efficient derivatives and multi-determinant expansions in quantum Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Claudia, E-mail: c.filippi@utwente.nl; Assaraf, Roland, E-mail: assaraf@lct.jussieu.fr; Moroni, Saverio, E-mail: moroni@democritos.it
2016-05-21
We present a simple and general formalism to compute efficiently the derivatives of a multi-determinant Jastrow-Slater wave function, the local energy, the interatomic forces, and similar quantities needed in quantum Monte Carlo. Through a straightforward manipulation of matrices evaluated on the occupied and virtual orbitals, we obtain an efficiency equivalent to algorithmic differentiation in the computation of the interatomic forces and the optimization of the orbital parameters. Furthermore, for a large multi-determinant expansion, the significant computational gain afforded by a recently introduced table method is here extended to the local value of any one-body operator and to its derivatives, in both all-electron and pseudopotential calculations.
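The matrix manipulations referred to here ultimately rest on two standard determinant identities (background facts, not the paper's specific table method):

\[ \frac{\partial}{\partial\theta} \ln\det A = \mathrm{tr}\!\left( A^{-1} \frac{\partial A}{\partial\theta} \right), \qquad \frac{\det(A + \mathbf{u}\mathbf{v}^{\mathsf T})}{\det A} = 1 + \mathbf{v}^{\mathsf T} A^{-1} \mathbf{u}, \]

namely Jacobi's formula and the matrix determinant lemma; evaluating such traces and ratios on the occupied and virtual orbitals, rather than recomputing determinants, is what makes derivatives of large multi-determinant expansions affordable.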
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.
2009-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.
Extended Theories of Gravitation. Observation Protocols and Experimental Tests
NASA Astrophysics Data System (ADS)
Fatibene, Lorenzo; Ferraris, Marco; Francaviglia, Mauro; Magnano, Guido
2013-09-01
Within the framework of extended theories of gravitation we shall discuss physical equivalences among different formalisms and classical tests. As suggested by the Ehlers-Pirani-Schild framework, conformal invariance will be preserved and its effect on observational protocols discussed. Accordingly, we shall review standard tests, showing how Palatini f(R)-theories naturally pass solar system tests. Observation protocols will be discussed in this wider framework.
Wheeled Pro(p)file of Batalin-Vilkovisky Formalism
NASA Astrophysics Data System (ADS)
Merkulov, S. A.
2010-05-01
Using a technique of wheeled props we establish a correspondence between the homotopy theory of unimodular Lie 1-bialgebras and the famous Batalin-Vilkovisky formalism. Solutions of the so-called quantum master equation satisfying certain boundary conditions are proven to be in 1-1 correspondence with representations of a wheeled dg prop which, on the one hand, is isomorphic to the cobar construction of the prop of unimodular Lie 1-bialgebras and, on the other hand, is quasi-isomorphic to the dg wheeled prop of unimodular Poisson structures. These results allow us to apply properadic methods for computing formulae for a homotopy transfer of a unimodular Lie 1-bialgebra structure on an arbitrary complex to the associated quantum master function on its cohomology. It is proven that in the category of quantum BV manifolds associated with the homotopy theory of unimodular Lie 1-bialgebras quasi-isomorphisms are equivalence relations. It is shown that Losev-Mnev’s BF theory for unimodular Lie algebras can be naturally extended to the case of unimodular Lie 1-bialgebras (and, eventually, to the case of unimodular Poisson structures). Using a finite-dimensional version of the Batalin-Vilkovisky quantization formalism it is rigorously proven that the Feynman integrals computing the effective action of this new BF theory describe precisely homotopy transfer formulae obtained within the wheeled properadic approach to the quantum master equation. Quantum corrections (which are present in our BF model to all orders of the Planck constant) correspond precisely to what are often called “higher Massey products” in the homological algebra.
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling...input-output contract satisfaction and absence of null pointer dereferences. Domain specific languages (DSLs) drive both implementation and formal verification
Beware the tail that wags the dog: informal and formal models in biology
Gunawardena, Jeremy
2014-01-01
Informal models have always been used in biology to guide thinking and devise experiments. In recent years, formal mathematical models have also been widely introduced. It is sometimes suggested that formal models are inherently superior to informal ones and that biology should develop along the lines of physics or economics by replacing the latter with the former. Here I suggest to the contrary that progress in biology requires a better integration of the formal with the informal. PMID:25368417
Remanent and induced contributions of the Earth's magnetization
NASA Astrophysics Data System (ADS)
Vervelidou, Foteini; Lesur, Vincent; Thébault, Erwan; Dyment, Jérôme; Holschneider, Matthias
2016-04-01
Inverting the magnetic field of crustal origin for the magnetization distribution that generates it suffers from non-uniqueness. The reason for this is the so-called annihilators, i.e. structures that produce no visible magnetic field outside the sources. Gubbins et al. (2011) use the complex vector spherical harmonics notation in order to separate the Vertically Integrated Magnetization (VIM) distribution into the parts that do and do not contribute to the magnetic field measured in source-free regions. We use their formalism and convert a crustal SH model based on the WDMAM into a model for the equivalent magnetization. However, we extend their formalism and assume that the magnetization is confined within a layer of finite thickness, with a different thickness for the oceanic crust than for the continental one. It is well known that the large scales of the crustal field are entirely masked by the Earth's main field. Therefore, we complement the WDMAM-based magnetization map (SH degrees 16 to 800) with the magnetization map for the long wavelengths (SH degrees 1-15) that was recently derived by Vervelidou and Thébault (2015) from a series of regional statistical analyses of the World Digital Magnetic Anomaly Map. Finally, we propose a tentative separation of this magnetization map into induced and remanent contributions on a regional scale. We do so based on the direction of the core magnetic field. We discuss the implications of these results in terms of the tectonic history of the Earth.
Stability of Viscous St. Venant Roll Waves: From Onset to Infinite Froude Number Limit
NASA Astrophysics Data System (ADS)
Barker, Blake; Johnson, Mathew A.; Noble, Pascal; Rodrigues, L. Miguel; Zumbrun, Kevin
2017-02-01
We study the spectral stability of roll wave solutions of the viscous St. Venant equations modeling inclined shallow water flow, both at onset in the small Froude number or "weakly unstable" limit F → 2^+ and for general values of the Froude number F, including the limit F → +∞. In the former, F → 2^+, limit, the shallow water equations are formally approximated by a Korteweg-de Vries/Kuramoto-Sivashinsky (KdV-KS) equation that is a singular perturbation of the standard Korteweg-de Vries (KdV) equation modeling horizontal shallow water flow. Our main analytical result is to rigorously validate this formal limit, showing that stability as F → 2^+ is equivalent to stability of the corresponding KdV-KS waves in the KdV limit. Together with recent results obtained for KdV-KS by Johnson-Noble-Rodrigues-Zumbrun and Barker, this gives not only the first rigorous verification of stability for any single viscous St. Venant roll wave, but a complete classification of stability in the weakly unstable limit. In the remainder of the paper, we investigate numerically and analytically the evolution of the stability diagram as the Froude number increases to infinity. Notably, we find a transition at around F = 2.3 from weakly unstable to different, large-F behavior, with stability determined by simple power-law relations. The latter stability criteria are potentially useful in hydraulic engineering applications, for which typically 2.5 ≤ F ≤ 6.0.
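For readers unfamiliar with the model, the viscous St. Venant equations for inclined shallow water flow read, in one common nondimensionalization (conventions vary across the literature),

    \partial_t h + \partial_x(hu) = 0,
    \qquad
    \partial_t(hu) + \partial_x\Big(hu^2 + \frac{h^2}{2F^2}\Big) = h - u^2 + \nu\,\partial_x(h\,\partial_x u),

where h is the fluid height, u the downslope velocity, F the Froude number and ν the viscosity; roll waves are the periodic traveling-wave solutions whose stability is classified above.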
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
The Endpoint Hypothesis: A Topological-Cognitive Assessment of Geographic Scale Movement Patterns
NASA Astrophysics Data System (ADS)
Klippel, Alexander; Li, Rui
Movement patterns of individual entities at the geographic scale are becoming a prominent research focus in spatial sciences. One pertinent question is how cognitive and formal characterizations of movement patterns relate. In other words, are (mostly qualitative) formal characterizations cognitively adequate? This article experimentally evaluates movement patterns that can be characterized as paths through a conceptual neighborhood graph, that is, two extended spatial entities changing their topological relationship gradually. The central questions addressed are: (a) Do humans naturally use topology to create cognitive equivalence classes, that is, is topology the basis for categorizing movement patterns spatially? (b) Are ‘all’ topological relations equally salient? And (c) does language influence categorization? The first two questions are addressed using a modification of the endpoint hypothesis, which states that movement patterns are distinguished by the topological relation they end in. The third question addresses whether language has an influence on the classification of movement patterns, that is, whether there is a difference between linguistic and non-linguistic category construction. In contrast to our previous findings, we were able to document the importance of topology for conceptualizing movement patterns but also reveal differences in the cognitive saliency of topological relations. The latter aspect calls for a weighted conceptual neighborhood graph to model human conceptualization processes in a cognitively adequate manner.
Formal modeling and analysis of ER-α associated Biological Regulatory Network in breast cancer.
Khalid, Samra; Hanif, Rumeza; Tareen, Samar H K; Siddiqa, Amnah; Bibi, Zurah; Ahmad, Jamil
2016-01-01
Breast cancer (BC) is one of the leading causes of death among females worldwide. The increasing incidence of BC is due to various genetic and environmental changes which lead to the disruption of cellular signaling network(s). It is a complex disease in which several interlinking signaling cascades play a crucial role in establishing a complex regulatory network. The logical modeling approach of René Thomas has been applied to analyze the behavior of the estrogen receptor-alpha (ER-α) associated Biological Regulatory Network (BRN) for a small part of the complex events that lead to BC metastasis. A discrete model was constructed using the kinetic logic formalism, and its set of logical parameters was obtained using the model checking technique implemented in the SMBioNet software, consistent with biological observations. The discrete model was further enriched with continuous dynamics by converting it into an equivalent Petri Net (PN) to analyze the logical parameters of the involved entities. In-silico discrete and continuous modeling of the ER-α associated signaling network involved in BC provides detailed information about behaviors and gene-gene interactions. The dynamics of the discrete model revealed imperative behaviors represented as cyclic paths and trajectories leading to pathogenic states such as metastasis. Results suggest that the increased expressions of the receptors ER-α, IGF-1R and EGFR slow down the activity of tumor suppressor genes (TSGs) such as BRCA1, p53 and Mdm2, which can lead to metastasis. Therefore, IGF-1R and EGFR are considered important inhibitory targets to control metastasis in BC. In-silico approaches allow us to increase our understanding of the functional properties of living organisms. This work opens new avenues of investigation of multiple inhibitory targets (ER-α, IGF-1R and EGFR) for wet lab experiments, as well as providing valuable insights into the treatment of cancers such as BC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: Marica.Pelanti@ens.f; Bouchut, Francois, E-mail: francois.bouchut@univ-mlv.f; Mangeney, Anne, E-mail: mangeney@ipgp.jussieu.f
2011-02-01
We present a Riemann solver derived by a relaxation technique for classical single-phase shallow flow equations and for a two-phase shallow flow model describing a mixture of solid granular material and fluid. Our primary interest is the numerical approximation of this two-phase solid/fluid model, whose complexity poses numerical difficulties that cannot be efficiently addressed by existing solvers. In particular, we are concerned with ensuring a robust treatment of dry bed states. The relaxation system used by the proposed solver is formulated by introducing auxiliary variables that replace the momenta in the spatial gradients of the original model systems. The resulting relaxation solver is related to the Roe solver in that its Riemann solution for the flow height and relaxation variables is formally computed as Roe's Riemann solution. The relaxation solver has the advantage of a certain degree of freedom in the specification of the wave structure through the choice of the relaxation parameters. This flexibility can be exploited to robustly handle vacuum states, a well-known difficulty of the standard Roe method, while maintaining Roe's low diffusivity. For the single-phase model, positivity of the flow height is rigorously preserved. For the two-phase model, positivity of the volume fractions in general is not ensured, and a suitable restriction on the CFL number might be needed. Nonetheless, numerical experiments suggest that the proposed two-phase flow solver efficiently models wet/dry fronts and vacuum formation for a large range of flow conditions. As a corollary of our study, we show that for the single-phase shallow flow equations the relaxation solver is formally equivalent to the VFRoe solver with conservative variables of Gallouet and Masella [T. Gallouet, J.-M. Masella, Un schéma de Godunov approché, C. R. Acad. Sci. Paris, Série I, 323 (1996) 77-84]. The relaxation interpretation allows establishing positivity conditions for this VFRoe method.
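The relaxation solver above is specific to the cited paper, but the dry-state difficulty it addresses is generic. The following minimal sketch, assuming the single-phase shallow water equations and textbook HLL wave-speed estimates (not the authors' relaxation scheme; all names are illustrative), shows where dry beds enter a Riemann-solver-based flux:

    import numpy as np

    g = 9.81  # gravitational acceleration

    def hll_flux(hL, uL, hR, uR, h_dry=1e-8):
        """HLL numerical flux for 1D shallow water with a crude dry-bed guard."""
        # Treat near-empty cells as exactly dry: zero depth and velocity.
        if hL < h_dry: hL, uL = 0.0, 0.0
        if hR < h_dry: hR, uR = 0.0, 0.0
        if hL == 0.0 and hR == 0.0:
            return np.zeros(2)
        # Physical flux F(U) = (hu, hu^2 + g h^2/2) on each side.
        FL = np.array([hL*uL, hL*uL**2 + 0.5*g*hL**2])
        FR = np.array([hR*uR, hR*uR**2 + 0.5*g*hR**2])
        cL, cR = np.sqrt(g*hL), np.sqrt(g*hR)
        # Wave-speed estimates; a dry front uses the vacuum Riemann solution
        # speeds u -/+ 2*sqrt(g h) of the wet neighbour.
        if hL == 0.0:
            sL, sR = uR - 2.0*cR, uR + cR
        elif hR == 0.0:
            sL, sR = uL - cL, uL + 2.0*cL
        else:
            sL = min(uL - cL, uR - cR)
            sR = max(uL + cL, uR + cR)
        if sL >= 0.0:
            return FL
        if sR <= 0.0:
            return FR
        UL, UR = np.array([hL, hL*uL]), np.array([hR, hR*uR])
        return (sR*FL - sL*FR + sL*sR*(UR - UL)) / (sR - sL)

With these wave speeds the intermediate HLL state keeps a non-negative height, which is the generic analogue of the positivity property the abstract establishes for its relaxation solver.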
Quantum-chemical modeling of smectite clays
NASA Technical Reports Server (NTRS)
Aronowitz, S.; Coyne, L.; Lawless, J.; Rishpon, J.
1982-01-01
A self-consistent charge extended Hueckel program is used in modeling isomorphic substitution of Al(3+) by Na(+), K(+), Mg(2+), Fe(2+), and Fe(3+) in the octahedral layer of a dioctahedral smectite clay, such as montmorillonite. Upon comparison of the energies involved in the isomorphic substitution, it is found that the order for successful substitution is as follows: Al(3+), Fe(3+), Mg(2+), Fe(2+), Na(+), which is equivalent to Ca(2+), and then K(+). This ordering is found to be consistent with experimental observation. The calculations also make it possible to determine the possible penetration of metal ions into the clay's 2:1 crystalline layer. For the cases studied, this type of penetration can occur at elevated temperatures into regions where isomorphic substitution has occurred with metal ions that bear a formal charge of less than 3+. The computed behavior of the electronic structure in the presence of isomorphic substitution is found to be similar to behavior associated with semiconductors.
Mobile computing initiatives within pharmacy education.
Cain, Jeff; Bird, Eleanora R; Jones, Mikael
2008-08-15
To identify mobile computing initiatives within pharmacy education, including how devices are obtained, supported, and utilized within the curriculum. An 18-item questionnaire was developed and delivered to academic affairs deans (or closest equivalent) of 98 colleges and schools of pharmacy. Fifty-four colleges and schools completed the questionnaire for a 55% completion rate. Thirteen of those schools have implemented mobile computing requirements for students. Twenty schools reported they were likely to formally consider implementing a mobile computing initiative within 5 years. Numerous models of mobile computing initiatives exist in terms of device obtainment, technical support, infrastructure, and utilization within the curriculum. Respondents identified flexibility in teaching and learning as the most positive aspect of the initiatives and computer-aided distraction as the most negative. Numerous factors should be taken into consideration when deciding if and how a mobile computing requirement should be implemented.
Non-LTE gallium abundance in HgMn stars
NASA Astrophysics Data System (ADS)
Zboril, M.; Berrington, K. A.
2001-07-01
We present, for the first time, non-LTE gallium equivalent widths for the most prominent gallium transitions identified in real spectra of a (hot) mercury-manganese star. The common feature of the departure coefficients is that they decrease near the stellar surface; the collision rates are dominant in many cases, and the non-LTE equivalent widths are generally smaller. In particular, the abundance difference derived from UV and visual lines is reduced. The photoionization cross sections were computed by means of the standard R-matrix formalism. The gallium cross-sections are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/373/987
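As a reminder of the quantity being computed (standard definitions, not notation specific to this paper): the equivalent width of a spectral line is

    W_\lambda = \int \Big( 1 - \frac{F_\lambda}{F_c} \Big)\, d\lambda,

with F_λ the observed flux and F_c the interpolated continuum, while the departure coefficients b_i = n_i / n_i^{LTE} measure how non-LTE level populations deviate from their LTE values; the decrease of b_i near the surface reported above is what changes the line strengths relative to LTE.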
Protecting United States Interests in Antarctica
1988-04-01
highest continental elevation(7). Because its average annual (water equivalent) precipitation amounts to only a few inches(8), Antarctica is ... the Antarctic environment and ecosystem(89). These recommendations are the only formal policymaking mechanism of the Antarctic Treaty regime. The ... Ocean, and Saudi Arabia provides financial support for ongoing iceberg towing studies. NATIONAL SCIENCE FOUNDATION, UNITED STATES ANTARCTIC RESEARCH
1987-06-01
described the state of maturity of software engineering as being equivalent to the state of maturity of Civil Engineering before Pythagoras invented the ... formal verification languages, theorem provers, or secure configuration management tools would have to be maintained and used in the PDSS Center to
Supporting research readiness in social enterprise health services.
Wright, Nat M J; Hearty, Philippa; Harris, Linda; Burnell, Andrew; Pender, Sue; Oxnard, Chris; Charlesworth, George
2017-09-13
Health-based social enterprises are spun out of the NHS, yet continue to provide NHS-funded services. With the spin-out, however, formal processes for research governance were lost. Patients have a right to take part in research, regardless of where they access healthcare. This paper discusses the barriers to social enterprises undertaking applied health research and makes recommendations to address the need for equivalence of governance processes with NHS trusts.
A dual Lewis base activation strategy for enantioselective carbene-catalyzed annulations.
Izquierdo, Javier; Orue, Ane; Scheidt, Karl A
2013-07-24
A dual activation strategy integrating N-heterocyclic carbene (NHC) catalysis and a second Lewis base has been developed. NHC-bound homoenolate equivalents derived from α,β-unsaturated aldehydes combine with transient reactive o-quinone methides in an enantioselective formal [4 + 3] fashion to access 2-benzoxopinones. The overall approach provides a general blueprint for the integration of carbene catalysis with additional Lewis base activation modes.
Geometric interpretation of vertex operator algebras.
Huang, Y Z
1991-01-01
In this paper, Vafa's approach to the formulation of conformal field theories is combined with the formal calculus developed in Frenkel, Lepowsky, and Meurman's work on the vertex operator construction of the Monster to give a geometric definition of vertex operator algebras. The main result announced is the equivalence between this definition and the algebraic one in the sense that the categories determined by these definitions are isomorphic. PMID:11607240
Sachs' free data in real connection variables
NASA Astrophysics Data System (ADS)
De Paoli, Elena; Speziale, Simone
2017-11-01
We discuss the Hamiltonian dynamics of general relativity with real connection variables on a null foliation, and use the Newman-Penrose formalism to shed light on the geometric meaning of the various constraints. We identify the equivalent of Sachs' constraint-free initial data as projections of connection components related to null rotations, i.e. the translational part of the ISO(2) group stabilising the internal null direction soldered to the hypersurface. A pair of second-class constraints reduces these connection components to the shear of a null geodesic congruence, thus establishing equivalence with the second-order formalism, which we show in detail at the level of symplectic potentials. A special feature of the first-order formulation is that Sachs' propagating equations for the shear, away from the initial hypersurface, are turned into tertiary constraints; their role is to preserve the relation between connection and shear under retarded time evolution. The conversion of wave-like propagating equations into constraints is possible thanks to an algebraic Bianchi identity; the same one that allows one to describe the radiative data at future null infinity in terms of a shear of a (non-geodesic) asymptotic null vector field in the physical spacetime. Finally, we compute the modification to the spin coefficients and the null congruence in the presence of torsion.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we tend to assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. The first objective of linear regression is to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs in "Y" when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
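The Y = a + bx recipe is easy to make concrete; here is a minimal sketch (variable names and the toy data are invented for illustration) that fits the line by ordinary least squares and reports R²:

    import numpy as np

    def fit_line(x, y):
        """Ordinary least squares for Y = a + b*x; returns (a, b, R^2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
        a = y.mean() - b * x.mean()          # intercept: Y when X equals 0
        y_hat = a + b * x                    # predicted values
        ss_res = np.sum((y - y_hat)**2)      # residual sum of squares
        ss_tot = np.sum((y - y.mean())**2)   # total sum of squares
        return a, b, 1.0 - ss_res / ss_tot   # R^2 = 1 - SS_res/SS_tot

    # Toy example: systolic blood pressure (mmHg) against age (years).
    age = [35, 42, 50, 58, 66]
    sbp = [118, 121, 129, 136, 142]
    print(fit_line(age, sbp))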
Architecture and inherent robustness of a bacterial cell-cycle control system.
Shen, Xiling; Collier, Justine; Dill, David; Shapiro, Lucy; Horowitz, Mark; McAdams, Harley H
2008-08-12
A closed-loop control system drives progression of the coupled stalked and swarmer cell cycles of the bacterium Caulobacter crescentus in a near-mechanical step-like fashion. The cell-cycle control has a cyclical genetic circuit composed of four regulatory proteins with tight coupling to processive chromosome replication and cell division subsystems. We report a hybrid simulation of the coupled cell-cycle control system, including asymmetric cell division and responses to external starvation signals, that replicates mRNA and protein concentration patterns and is consistent with observed mutant phenotypes. An asynchronous sequential digital circuit model equivalent to the validated simulation model was created. Formal model-checking analysis of the digital circuit showed that the cell-cycle control is robust to intrinsic stochastic variations in reaction rates and nutrient supply, and that it reliably stops and restarts to accommodate nutrient starvation. Model checking also showed that mechanisms involving methylation-state changes in regulatory promoter regions during DNA replication increase the robustness of the cell-cycle control. The hybrid cell-cycle simulation implementation is inherently extensible and provides a promising approach for development of whole-cell behavioral models that can replicate the observed functionality of the cell and its responses to changing environmental conditions.
Capillary wave Hamiltonian for the Landau-Ginzburg-Wilson density functional
NASA Astrophysics Data System (ADS)
Chacón, Enrique; Tarazona, Pedro
2016-06-01
We study the link between the density functional (DF) formalism and the capillary wave theory (CWT) for liquid surfaces, focused on the Landau-Ginzburg-Wilson (LGW) model, or square gradient DF expansion, with a symmetric double parabola free energy, which has been extensively used in theoretical studies of this problem. We show the equivalence between the non-local DF results of Parry and coworkers and the direct evaluation of the mean square fluctuations of the intrinsic surface, as is done in the intrinsic sampling method for computer simulations. The definition of effective wave-vector dependent surface tensions is reviewed and we obtain new proposals for the LGW model. The surface weight proposed by Blokhuis and the surface mode analysis proposed by Stecki provide consistent and optimal effective definitions for the extended CWT Hamiltonian associated to the DF model. A non-local, or coarse-grained, definition of the intrinsic surface provides the missing element to get the mesoscopic surface Hamiltonian from the molecular DF description, as had been proposed a long time ago by Dietrich and coworkers.
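In generic capillary wave theory (standard form, up to Fourier-convention factors; the paper's proposals concern the precise definition of σ_eff), the extended Hamiltonian for the intrinsic surface ξ is quadratic in its Fourier modes,

    H_{cw}[\xi] = \frac{A}{2} \sum_{q} \sigma_{\mathrm{eff}}(q)\, q^2\, |\xi_q|^2,

so that equipartition relates the mean square fluctuations to the wave-vector dependent surface tension via ⟨|ξ_q|²⟩ ∝ k_B T / (σ_eff(q) q²); the Blokhuis and Stecki prescriptions mentioned above are competing ways of extracting σ_eff(q) from the DF model.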
From Classical to Quantum: New Canonical Tools for the Dynamics of Gravity
NASA Astrophysics Data System (ADS)
Höhn, P. A.
2012-05-01
In a gravitational context, canonical methods offer an intuitive picture of the dynamics and simplify an identification of the degrees of freedom. Nevertheless, extracting dynamical information from background independent approaches to quantum gravity is a highly non-trivial challenge. In this thesis, the conundrum of (quantum) gravitational dynamics is approached from two different directions by means of new canonical tools. This thesis is accordingly divided into two parts: In the first part, a general canonical formalism for discrete systems featuring a variational action principle is developed which is equivalent to the covariant formulation following directly from the action. This formalism can handle evolving phase spaces and is thus appropriate for describing evolving lattices. Attention will be devoted to a characterization of the constraints, symmetries and degrees of freedom appearing in such discrete systems which, in the case of evolving phase spaces, is time step dependent. The advantage of this formalism is that it does not depend on the particular discretization and, hence, is suitable for coarse graining procedures. This formalism is applicable to discrete mechanics, lattice field theories and discrete gravity models---underlying some approaches to quantum gravity---and, furthermore, may prove useful for numerical implementations. For concreteness, these new tools are employed to formulate Regge Calculus canonically as a theory of the dynamics of discrete hypersurfaces in discrete spacetimes, thereby removing a longstanding obstacle to connecting covariant simplicial gravity models with canonical frameworks. This result is interesting in view of several background independent approaches to quantum gravity. In addition, perturbative expansions around symmetric background solutions of Regge Calculus are studied up to second order. Background gauge modes generically become propagating at second order as a consequence of a symmetry breaking. In the second part of this thesis, the paradigm of relational dynamics is considered. Dynamical observables in gravity are relational. Unfortunately, their construction and evaluation is notoriously difficult, especially in the quantum theory. An effective canonical framework is devised which permits one to evaluate the semiclassical relational dynamics of constrained quantum systems by sidestepping technical problems associated with explicit constructions of physical Hilbert spaces. This effective approach is well-geared for addressing the concept of relational evolution in general quantum cosmological models since it (i) allows one to depart from idealized relational `clock references’ and, instead, to employ generic degrees of freedom as imperfect relational `clocks’, (ii) enables one to systematically switch between different such `clocks’ and (iii) yields a consistent (temporally) local time evolution with transient observables so long as semiclassicality holds. These techniques are illustrated by toy models and, finally, are applied to a non-integrable cosmological model. It is argued that relational evolution is generically only a transient and semiclassical phenomenon.
Interpreting Quantum Logic as a Pragmatic Structure
NASA Astrophysics Data System (ADS)
Garola, Claudio
2017-12-01
Many scholars maintain that the language of quantum mechanics introduces a quantum notion of truth which is formalized by (standard, sharp) quantum logic and is incompatible with the classical (Tarskian) notion of truth. We show that quantum logic can be identified (up to an equivalence relation) with a fragment of a pragmatic language LGP of assertive formulas, which are justified or unjustified rather than true or false. Quantum logic can then be interpreted as an algebraic structure that formalizes properties of the notion of empirical justification according to quantum mechanics rather than properties of a quantum notion of truth. This conclusion agrees with a general integrationist perspective that interprets nonstandard logics as theories of metalinguistic notions different from truth, thus avoiding incompatibility with classical notions and preserving the globality of logic.
Electroweak splitting functions and high energy showering
NASA Astrophysics Data System (ADS)
Chen, Junmou; Han, Tao; Tweedie, Brock
2017-11-01
We derive the electroweak (EW) collinear splitting functions for the Standard Model, including the massive fermions, gauge bosons and the Higgs boson. We first present the splitting functions in the limit of unbroken SU(2)_L × U(1)_Y and discuss their general features in the collinear and soft-collinear regimes. These are the leading contributions at a splitting scale (k_T) far above the EW scale (v). We then systematically incorporate EW symmetry breaking (EWSB), which leads to the emergence of additional "ultra-collinear" splitting phenomena and naive violations of the Goldstone-boson Equivalence Theorem. We suggest a particularly convenient choice of non-covariant gauge (dubbed "Goldstone Equivalence Gauge") that disentangles the effects of Goldstone bosons and gauge fields in the presence of EWSB, and allows trivial book-keeping of leading power corrections in v/k_T. We implement a comprehensive, practical EW showering scheme based on these splitting functions using a Sudakov evolution formalism. Novel features in the implementation include a complete accounting of ultra-collinear effects, matching between shower and decay, kinematic back-reaction corrections in multi-stage showers, and mixed-state evolution of neutral bosons (γ/Z/h) using density matrices. We employ the EW showering formalism to study a number of important physical processes at O(1-10 TeV) energies. They include (a) electroweak partons in the initial state as the basis for vector-boson-fusion; (b) the emergence of "weak jets" such as those initiated by transverse gauge bosons, with individual splitting probabilities as large as O(35%); (c) EW showers initiated by top quarks, including Higgs bosons in the final state; (d) the occurrence of O(1) interference effects within EW showers involving the neutral bosons; and (e) EW corrections to new physics processes, as illustrated by production of a heavy vector boson (W′) and the subsequent showering of its decay products.
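For context, a Sudakov evolution formalism of the kind described organizes a shower around the no-splitting probability (schematic textbook form; the paper's EW version adds masses, ultra-collinear terms and mixed-state effects):

    \Delta_i(t_0, t) = \exp\left[ -\sum_{jk} \int_{t_0}^{t} \frac{dt'}{t'} \int dz\, \frac{\alpha(t')}{2\pi}\, P_{i\to jk}(z) \right],

where t is the evolution scale, z the momentum fraction, and P_{i→jk}(z) the splitting functions derived in the paper; splittings are generated by sampling the scale at which Δ_i falls below a uniform random number.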
Alonso, Beatriz; Ocejo, Marta; Carrillo, Luisa; Vicario, Jose L; Reyes, Efraim; Uria, Uxue
2013-01-18
We have developed an efficient protocol for carrying out the stereocontrolled formal conjugate addition of hydroxycarbonyl anion equivalents to α,β-unsaturated carboxylic acid derivatives using (S,S)-(+)-pseudoephedrine as chiral auxiliary, making use of the synthetic equivalence between the heteroaryl moieties and the carboxylate group. This protocol has been applied as key step in the enantioselective synthesis of 3-substituted pyrrolidines in which, after removing the chiral auxiliary, the heteroaryl moiety is converted into a carboxylate group followed by reduction and double nucleophilic displacement. Alternatively, the access to the same type of heterocyclic scaffold but with opposite absolute configuration has also been accomplished by making use of the regio- and diastereoselective conjugate addition of organolithium reagents to α,β,γ,δ-unsaturated amides derived from the same chiral auxiliary followed by chiral auxiliary removal, ozonolysis, and reductive amination/intramolecular nucleophilic displacement sequence.
Experiences Using Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1996-01-01
This paper describes three cases studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
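The comparison step is easy to picture in code. A minimal sketch, assuming a numeric oracle extracted from the formal model (in the paper this role is played by PVSio evaluating the PVS model; the names and tolerance here are illustrative):

    import random

    def compare(model_fn, impl_fn, n_cases=1000, tol=1e-9, seed=42):
        """Run model and implementation on random inputs; collect mismatches."""
        rng = random.Random(seed)
        failures = []
        for _ in range(n_cases):
            x = rng.uniform(-1e3, 1e3)   # random test input
            expected = model_fn(x)       # output of the formal model (oracle)
            actual = impl_fn(x)          # output of the software under test
            if abs(expected - actual) > tol:
                failures.append((x, expected, actual))
        return failures

    # Toy stand-ins for a formal model and a floating-point implementation.
    model = lambda x: x * x
    impl = lambda x: x ** 2
    assert compare(model, impl) == []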
Modeling formalisms in Systems Biology
2011-01-01
Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
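Of the formalisms surveyed, Boolean networks are the quickest to demonstrate. A minimal synchronous-update sketch (the three-gene network is invented for illustration, not taken from the review):

    from itertools import product

    # Each gene's next state is a Boolean function of the current state.
    rules = {
        "A": lambda s: s["A"] or s["B"],   # A self-sustains and is boosted by B
        "B": lambda s: s["A"],             # A activates B
        "C": lambda s: not s["B"],         # B represses C
    }

    def step(state):
        """Synchronous update: every gene reads the same current state."""
        return {g: f(state) for g, f in rules.items()}

    # Exhaustively enumerate the state space and report the steady states.
    for bits in product([False, True], repeat=3):
        state = dict(zip("ABC", bits))
        if step(state) == state:
            print("fixed point:", state)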
Kwan, Christine Ml; Napoles, Anna M; Chou, Jeyling; Seligman, Hilary K
2015-02-01
To develop a conceptually equivalent Chinese-language translation of the eighteen-item US Household Food Security Survey Module. In the current qualitative study, we (i) highlight methodological challenges which arise in developing survey instruments that will be used to make comparisons across language groups and (ii) describe the development of a Chinese-language translation of the US Household Food Security Survey Module, called the San Francisco Chinese Food Security Module. Community sites in San Francisco, CA, USA. We conducted cognitive interviews with twenty-two community members recruited from community sites hosting food pantries and with five professionals recruited from clinical settings. Development of conceptually equivalent surveys can be difficult. We highlight challenges related to dialect, education, literacy (e.g. preferences for more or less formal phrasing), English words and phrases for which there is no Chinese language equivalent (e.g. 'balanced meals' and 'eat less than you felt you should') and response formats. We selected final translations to maximize: (i) consistency of the Chinese translation with the intent of the English version; (ii) clarity; and (iii) similarities in understanding across dialects and literacy levels. Survey translation is essential for conducting research in many communities. The challenges encountered illustrate how literal translations can affect the conceptual equivalence of survey items across languages. Cognitive interview methods should be routinely used for survey translation when such non-equivalence is suspected, such as in surveys addressing highly culturally bound behaviours such as diet and eating behaviours. Literally translated surveys lacking conceptual equivalence may magnify or obscure important health inequalities.
NASA Astrophysics Data System (ADS)
Barreto, A. B.; Pucheu, M. L.; Romero, C.
2018-02-01
We consider scalar–tensor theories of gravity defined in Weyl integrable space-time and show that for time-lapse extended Robertson–Walker metrics in the ADM formalism a class of Weyl transformations corresponding to change of frames induce canonical transformations between different representations of the phase space. In this context, we discuss the physical equivalence of two distinct Weyl frames at the classical level.
Patient group directions to enable hospital nurses to supply medicines.
Dimond, Brigit
Sarah is a nurse specialist in accident and emergency (A&E) nursing. She is told by the consultant that as a nurse specialist she is able to supply various ointments and pain killers for patients who are only seen by the clinical nurse specialist in A&E. She feels that this is the equivalent of prescribing and considers that there should be a more formal procedure. What is the law?
On Gammelgaard's Formula for a Star Product with Separation of Variables
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2013-08-01
We show that Gammelgaard's formula expressing a star product with separation of variables on a pseudo-Kähler manifold in terms of directed graphs without cycles is equivalent to an inversion formula for an operator on a formal Fock space. We prove this inversion formula directly and thus offer an alternative approach to Gammelgaard's formula which gives more insight into the question why the directed graphs in his formula have no cycles.
The Evolution of Cooperation in the Finitely Repeated Prisoner’s Dilemma
1989-09-01
biological evolutionary game theory, mathematical ecology (the replicator dynamics are formally equivalent to the Lotka–Volterra dynamics), and ... repeated prisoner’s dilemma. Under the dynamics considered, if there is convergence to a limit (in general there need not be), then that limit must ... of time. It will be noted also that this same behavior can create computation problems, making it imprudent in general to try to infer limiting
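The parenthetical equivalence is the standard one: the replicator dynamics for n strategies with payoff matrix A,

    \dot{x}_i = x_i\left[ (Ax)_i - x^{\top} A x \right], \qquad i = 1, \dots, n,

can be transformed by a change of variables into a Lotka–Volterra system for n − 1 species (a classical result due to Hofbauer), which is why tools from mathematical ecology carry over to evolutionary game theory.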
CTD² Publication Guidelines | Office of Cancer Genomics
The Cancer Target Discovery and Development (CTD2) Network is a “community resource project” supported by the National Cancer Institute’s Office of Cancer Genomics. Members of the Network release data to the broader research community by depositing data into NCI-supported or public databases. Data deposition is NOT equivalent to publishing in a peer-reviewed journal. Unless there is a manuscript associated with a dataset, the Network considers data to be formally unpublished.
NASA Astrophysics Data System (ADS)
Derakhshani, Maaneli
In this thesis, we consider the implications of solving the quantum measurement problem for the Newtonian description of semiclassical gravity. First we review the formalism of the Newtonian description of semiclassical gravity based on standard quantum mechanics---the Schroedinger-Newton theory---and two well-established predictions that come out of it, namely, gravitational 'cat states' and gravitationally-induced wavepacket collapse. Then we review three quantum theories with 'primitive ontologies' that are well known to solve the measurement problem---Schroedinger's many worlds theory, the GRW collapse theory with matter density ontology, and Nelson's stochastic mechanics. We extend the formalisms of these three quantum theories to Newtonian models of semiclassical gravity and evaluate their implications for gravitational cat states and gravitational wavepacket collapse. We find that (1) Newtonian semiclassical gravity based on Schroedinger's many worlds theory is mathematically equivalent to the Schroedinger-Newton theory and makes the same predictions; (2) Newtonian semiclassical gravity based on the GRW theory differs from Schroedinger-Newton only in the use of a stochastic collapse law, but this law allows it to suppress gravitational cat states so as not to be in contradiction with experiment, while allowing for gravitational wavepacket collapse to happen as well; (3) Newtonian semiclassical gravity based on Nelson's stochastic mechanics differs significantly from Schroedinger-Newton, and predicts neither gravitational cat states nor gravitational wavepacket collapse. Considering that gravitational cat states are experimentally ruled out, but gravitational wavepacket collapse is testable in the near future, this implies that only the latter two are viable theories of Newtonian semiclassical gravity and that they can be experimentally tested against each other in future molecular interferometry experiments that are anticipated to be capable of testing the gravitational wavepacket collapse prediction.
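For reference, the Schroedinger-Newton equation that anchors these comparisons is the standard single-particle nonlinear equation

    i\hbar\,\partial_t \psi(\mathbf{r},t) = \left[ -\frac{\hbar^2}{2m}\nabla^2 - G m^2 \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3r' \right] \psi(\mathbf{r},t),

in which the wavefunction sources its own Newtonian potential; the primitive-ontology variants discussed above differ in what matter density is taken to gravitate.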
A normal tissue dose response model of dynamic repair processes.
Alber, Markus; Belka, Claus
2006-01-07
A model is presented for serial, critical element complication mechanisms for irradiated volumes from length scales of a few millimetres up to the entire organ. The central element of the model is the description of radiation complication as the failure of a dynamic repair process. The nature of the repair process is seen as reestablishing the structural organization of the tissue, rather than mere replenishment of lost cells. The interactions between the cells, such as migration, involved in the repair process are assumed to have finite ranges, which limits the repair capacity and is the defining property of a finite-sized reconstruction unit. Since the details of the repair processes are largely unknown, the development aims to make the most general assumptions about them. The model employs analogies and methods from thermodynamics and statistical physics. An explicit analytical form of the dose response of the reconstruction unit for total, partial and inhomogeneous irradiation is derived. The use of the model is demonstrated with data from animal spinal cord experiments and clinical data about heart, lung and rectum. The three-parameter model lends a new perspective to the equivalent uniform dose formalism and the established serial and parallel complication models. Its implications for dose optimization are discussed.
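The equivalent uniform dose formalism referred to here is usually written in the generalized (Niemierko-style) form

    \mathrm{EUD} = \left( \sum_i v_i\, D_i^{a} \right)^{1/a},

where v_i is the fractional organ volume receiving dose D_i and the exponent a interpolates between parallel-like (a near 1) and serial-like (a large) complication behaviour; the model above offers a mechanistic reinterpretation of this phenomenological construction.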
Effective action for stochastic partial differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, David (Centro de Astrobiologia, INTA, Carretera Ajalvir, Km. 4, 28850 Torrejon, Madrid); Molina-Paris, Carmen
Stochastic partial differential equations (SPDEs) are the basic tool for modeling systems where noise is important. SPDEs are used for models of turbulence, pattern formation, and the structural development of the universe itself. It is reasonably well known that certain SPDEs can be manipulated to be equivalent to (nonquantum) field theories that nevertheless exhibit deep and important relationships with quantum field theory. In this paper we systematically extend these ideas: We set up a functional integral formalism and demonstrate how to extract all the one-loop physics for an arbitrary SPDE subject to arbitrary Gaussian noise. It is extremely important to realize that Gaussian noise does not imply that the field variables undergo Gaussian fluctuations, and that these nonquantum field theories are fully interacting. The limitation to one loop is not as serious as might be supposed: Experience with quantum field theories (QFTs) has taught us that one-loop physics is often quite adequate to give a good description of the salient issues. The limitation to one loop does, however, offer marked technical advantages: Because at one loop almost any field theory can be rendered finite using zeta function technology, we can sidestep the complications inherent in the Martin-Siggia-Rose formalism (the SPDE analog of the Becchi-Rouet-Stora-Tyutin formalism used in QFT) and instead focus attention on a minimalist approach that uses only the physical fields (this "direct approach" is the SPDE analog of canonical quantization using physical fields). After setting up the general formalism for the characteristic functional (partition function), we show how to define the effective action to all loops, and then focus on the one-loop effective action and its specialization to constant fields: the effective potential. The physical interpretation of the effective action and effective potential for SPDEs is addressed and we show that key features carry over from QFT to the case of SPDEs. An important result is that the amplitude of the two-point function governing the noise acts as the loop-counting parameter and is the analog of Planck's constant (ℏ) in this SPDE context. We derive a general expression for the one-loop effective potential of an arbitrary SPDE subject to translation-invariant Gaussian noise, and compare this with the one-loop potential for QFT. (c) 1999 The American Physical Society.
Trap Model for Clogging and Unclogging in Granular Hopper Flows.
Nicolas, Alexandre; Garcimartín, Ángel; Zuriguel, Iker
2018-05-11
Granular flows through narrow outlets may be interrupted by the formation of arches or vaults that clog the exit. These clogs may be destroyed by vibrations. A feature which remains elusive is the broad distribution p(τ) of clog lifetimes τ measured under constant vibrations. Here, we propose a simple model for arch breaking, in which the vibrations are formally equivalent to thermal fluctuations in a Langevin equation; the rupture of an arch corresponds to the escape from an energy trap. We infer the distribution of trap depths from experiments made in two-dimensional hoppers. Using this distribution, we show that the model captures the empirically observed heavy tails in p(τ). These heavy tails flatten at large τ, consistently with experimental observations under weak vibrations. But, here, we find that this flattening is systematic, which casts doubt on the ability of gentle vibrations to restore a finite outflow forever. The trap model also replicates recent results on the effect of increasing gravity on the statistics of clog formation in a static silo. Therefore, the proposed framework points to a common physical underpinning to the processes of clogging and unclogging, despite their different statistics.
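The heavy tail has a one-line origin in trap models of this kind (a generic argument with illustrative symbols, not the paper's fitted distribution): if trap depths E are exponentially distributed, ρ(E) ∝ e^{−E/E₀}, and escape is activated, τ ∝ e^{E/Γ} with Γ an effective vibration 'temperature', then changing variables gives

    p(\tau) \propto \tau^{-(1 + \Gamma/E_0)},

a power law whose mean diverges whenever Γ < E₀, i.e. for sufficiently weak vibrations, consistent with the doubt expressed above about gentle vibrations restoring a finite outflow forever.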
Initiating Formal Requirements Specifications with Object-Oriented Models
NASA Technical Reports Server (NTRS)
Ampo, Yoko; Lutz, Robyn R.
1994-01-01
This paper reports results of an investigation into the suitability of object-oriented models as an initial step in developing formal specifications. The requirements for two critical system-level software modules were used as target applications. It was found that creating object-oriented diagrams prior to formally specifying the requirements enhanced the accuracy of the initial formal specifications and reduced the effort required to produce them. However, the formal specifications incorporated some information not found in the object-oriented diagrams, such as higher-level strategy or goals of the software.
NASA Astrophysics Data System (ADS)
Kocia, Lucas; Love, Peter
2017-12-01
We show that qubit stabilizer states can be represented by non-negative quasiprobability distributions associated with a Wigner-Weyl-Moyal formalism where Clifford gates are positive state-independent maps. This is accomplished by generalizing the Wigner-Weyl-Moyal formalism to three generators instead of two—producing an exterior, or Grassmann, algebra—which results in Clifford group gates for qubits that act as a permutation on the finite Weyl phase space points naturally associated with stabilizer states. As a result, a non-negative probability distribution can be associated with each stabilizer state's three-generator Wigner function, and these distributions evolve deterministically to one another under Clifford gates. This corresponds to a hidden variable theory that is noncontextual and local for qubit Clifford gates while Clifford (Pauli) measurements have a context-dependent representation. Equivalently, we show that qubit Clifford gates can be expressed as propagators within the three-generator Wigner-Weyl-Moyal formalism whose semiclassical expansion is truncated at order ℏ^0 with a finite number of terms. The T gate, which extends the Clifford gate set to one capable of universal quantum computation, requires a semiclassical expansion of the propagator to order ℏ^1. We compare this approach to previous quasiprobability descriptions of qubits that relied on the two-generator Wigner-Weyl-Moyal formalism and find that the two-generator Weyl symbols of stabilizer states result in a description of evolution under Clifford gates that is state-dependent, in contrast to the three-generator formalism. We have thus extended Wigner non-negative quasiprobability distributions from the odd d-dimensional case to d = 2 qubits, which describe the noncontextuality of Clifford gates and contextuality of Pauli measurements on qubit stabilizer states.
Testing For Measurement Invariance of Attachment Across Chinese and American Adolescent Samples.
Ren, Ling; Zhao, Jihong Solomon; He, Ni Phil; Marshall, Ineke Haen; Zhang, Hongwei; Zhao, Ruohui; Jin, Cheng
2016-06-01
Adolescent attachment to formal and informal institutions has emerged as a major focus of criminological theories since the publication of Hirschi's work in 1969. This study attempts to examine the psychometric equivalence of the factorial structure of attachment measures across nations reflecting Western and Eastern cultures. Twelve manifest variables are used tapping the concepts of adolescent attachment to parents, school, and neighborhood. Confirmatory factor analysis is used to conduct invariance test across approximately 3,000 Chinese and U.S. adolescents. Results provide strong support for a three-factor model; the multigroup invariance tests reveal mixed results. While the family attachment measure appears invariant between the two samples, significant differences in the coefficients of the factor loadings are detected in the school attachment and neighborhood attachment measures. The results of regression analyses lend support to the predictive validity of three types of attachment. Finally, the limitations of the study are discussed. © The Author(s) 2015.
Thermal quantum time-correlation functions from classical-like dynamics
NASA Astrophysics Data System (ADS)
Hele, Timothy J. H.
2017-07-01
Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0^+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James
2018-01-01
Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus to the ISE literature. The literature survey follows the evolution of ISE's understanding of how to formalize the classification pattern. The various proposals are assessed using the classical example of classification; the Linnaean taxonomy formalized using powersets as a benchmark for formal expressiveness. The broad conclusion of the survey is that (1) the ISE community is currently in the early stages of the process of understanding how to formalize the classification pattern, particularly in the requirements for expressiveness exemplified by powersets, and (2) that there is an opportunity to intervene and speed up the process of adoption by clarifying this expressiveness. Given the central place that the classification pattern has in domain modeling, this intervention has the potential to lead to significant improvements.
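The powerset benchmark is easy to make concrete. A toy sketch of the Linnaean example (names invented for illustration): species are sets of organisms, so they live in the powerset P(Organisms), and genera are sets of species, elements of P(P(Organisms)):

    # frozenset makes sets hashable, so they can nest inside other sets.
    felis_catus = frozenset({"tom", "felix"})           # a species: a set of organisms
    felis_silvestris = frozenset({"wildcat1"})
    felis = frozenset({felis_catus, felis_silvestris})  # a genus: a set of species

    # Classification queries are membership tests at the appropriate level.
    assert "tom" in felis_catus       # organism belongs to a species
    assert felis_catus in felis       # species belongs to a genus
    assert all(isinstance(s, frozenset) for s in felis)

The expressiveness question the survey tracks is whether a modeling language can state this second-level membership (an instance of a class that is itself an instance of a higher-order class) with the full generality that powersets provide.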
BFV-Complex and Higher Homotopy Structures
NASA Astrophysics Data System (ADS)
Schätz, Florian
2009-03-01
We present a connection between the BFV-complex (abbreviation for Batalin-Fradkin-Vilkovisky complex) and the strong homotopy Lie algebroid associated to a coisotropic submanifold of a Poisson manifold. We prove that the latter structure can be derived from the BFV-complex by means of homotopy transfer along contractions. Consequently, the BFV-complex and the strong homotopy Lie algebroid structure are L∞ quasi-isomorphic and control the same formal deformation problem. However, there is a gap between the non-formal information encoded in the BFV-complex and in the strong homotopy Lie algebroid, respectively. We prove that there is a one-to-one correspondence between coisotropic submanifolds given by graphs of sections and equivalence classes of normalized Maurer-Cartan elements of the BFV-complex. This does not hold if one uses the strong homotopy Lie algebroid instead.
Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining
NASA Astrophysics Data System (ADS)
Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing
2018-04-01
Recently, big data mining methods have been applied to dynamic wind farm equivalent modeling. This paper reviews their recent development in research both domestic and overseas. Firstly, studies of wind speed prediction, equivalence, and wind speed distribution within the wind farm are summarized. Secondly, two typical data mining approaches are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple wind turbine equivalent modeling, three aspects are addressed: aggregation of wind turbines into clusters, parameter identification within each cluster, and equivalence of the collector system. Thirdly, an outlook on the future development of dynamic wind farm equivalent models is given.
Self-organized modularization in evolutionary algorithms.
Dauscher, Peter; Uthmann, Thomas
2005-01-01
The principle of modularization has proven to be extremely successful in the field of technical applications and particularly for Software Engineering purposes. The question to be answered within the present article is whether mechanisms can also be identified within the framework of Evolutionary Computation that cause a modularization of solutions. We will concentrate on processes, where modularization results only from the typical evolutionary operators, i.e. selection and variation by recombination and mutation (and not, e.g., from special modularization operators). This is what we call Self-Organized Modularization. Based on a combination of two formalizations by Radcliffe and Altenberg, some quantitative measures of modularity are introduced. Particularly, we distinguish Built-in Modularity as an inherent property of a genotype and Effective Modularity, which depends on the rest of the population. These measures can easily be applied to a wide range of present Evolutionary Computation models. It will be shown, both theoretically and by simulation, that under certain conditions, Effective Modularity (as defined within this paper) can be a selection factor. This causes Self-Organized Modularization to take place. The experimental observations emphasize the importance of Effective Modularity in comparison with Built-in Modularity. Although the experimental results have been obtained using a minimalist toy model, they can lead to a number of consequences for existing models as well as for future approaches. Furthermore, the results suggest a complex self-amplification of highly modular equivalence classes in the case of respected relations. Since the well-known Holland schemata are just the equivalence classes of respected relations in most Simple Genetic Algorithms, this observation emphasizes the role of schemata as Building Blocks (in comparison with arbitrary subsets of the search space).
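The flavour of such experiments can be conveyed by a toy sketch (ours, not the authors' minimalist model): a fitness function with two separable "modules" is evolved under plain tournament selection, one-point recombination and mutation, and the fraction of intact modules in the population is tracked.

    import random

    L, POP, GENS = 8, 60, 40
    MODULES = [range(0, 4), range(4, 8)]      # two hypothetical building blocks

    def fitness(g):
        # Each fully-set module contributes 1; the modules are separable.
        return sum(all(g[i] for i in m) for m in MODULES)

    def pick(pop):
        # Binary tournament selection.
        a, b = random.sample(pop, 2)
        return max(a, b, key=fitness)

    def offspring(a, b):
        cut = random.randrange(1, L)          # one-point recombination
        child = a[:cut] + b[cut:]
        return [bit ^ (random.random() < 0.01) for bit in child]  # mutation

    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for _ in range(GENS):
        pop = [offspring(pick(pop), pick(pop)) for _ in range(POP)]

    intact = sum(fitness(g) for g in pop) / (2 * POP)
    print(f"fraction of intact modules in final population: {intact:.2f}")

Under recombination, genotypes carrying intact high-fitness modules (schemata) spread without any dedicated modularization operator, which is the qualitative effect the paper quantifies.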
Kwan, Christine ML; Napoles, Anna M; Chou, Jeyling; Seligman, Hilary K
2014-01-01
Objective To develop a conceptually equivalent Chinese-language translation of the eighteen-item US Household Food Security Survey Module. Design In the current qualitative study, we (i) highlight methodological challenges which arise in developing survey instruments that will be used to make comparisons across language groups and (ii) describe the development of a Chinese-language translation of the US Household Food Security Survey Module, called the San Francisco Chinese Food Security Module. Setting Community sites in San Francisco, CA, USA. Subjects We conducted cognitive interviews with twenty-two community members recruited from community sites hosting food pantries and with five professionals recruited from clinical settings. Results Development of conceptually equivalent surveys can be difficult. We highlight challenges related to dialect, education, literacy (e.g. preferences for more or less formal phrasing), English words and phrases for which there is no Chinese language equivalent (e.g. ‘balanced meals’ and ‘eat less than you felt you should’) and response formats. We selected final translations to maximize: (i) consistency of the Chinese translation with the intent of the English version; (ii) clarity; and (iii) similarities in understanding across dialects and literacy levels. Conclusions Survey translation is essential for conducting research in many communities. The challenges encountered illustrate how literal translations can affect the conceptual equivalence of survey items across languages. Cognitive interview methods should be routinely used for survey translation when such non-equivalence is suspected, such as in surveys addressing highly culturally bound behaviours such as diet and eating behaviours. Literally translated surveys lacking conceptual equivalence may magnify or obscure important health inequalities. PMID:24642365
FORMAL MODELING, MONITORING, AND CONTROL OF EMERGENCE IN DISTRIBUTED CYBER PHYSICAL SYSTEMS
2018-02-23
Final technical report, University of Texas at Arlington, February 2018; period of performance April 2015 to April 2017. This project studied emergent behavior in distributed cyber-physical systems (DCPS).
Experiences Using Lightweight Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1997-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Saramma, P P; Raj, L Suja; Dash, P K; Sarma, P S
2016-04-01
Cardiopulmonary resuscitation (CPR) and emergency cardiovascular care guidelines are periodically renewed and published by the American Heart Association, and formal training programs are conducted based on these guidelines. Despite widespread training, CPR is often poorly performed. Hospital educators spend a significant amount of time and money training health professionals and maintaining basic life support (BLS) and advanced cardiac life support (ACLS) skills among them, yet very little data are available in the literature on the long-term impact of this training. The aims were to evaluate the impact of a formal certified CPR training program on the knowledge and skill of CPR among nurses, and to identify self-reported outcomes of attempted CPR and the training needs of nurses. The study used a prospective, repeated-measures design in a tertiary care hospital. A series of certified BLS and ACLS training programs was conducted during 2010 and 2011, with written and practical performance tests. Final testing was undertaken 3-4 years after training. The sample included all available, willing CPR-certified nurses and experience-matched CPR-noncertified nurses. Data were analyzed with SPSS for Windows version 21.0. The majority of the 206 nurses (93 CPR certified and 113 noncertified) were females. There was a statistically significant increase in mean knowledge level and overall performance before and after the formal certified CPR training program (P = 0.000). However, the mean knowledge scores were equivalent among the CPR-certified and noncertified nurses, although the certified nurses scored a higher mean score (P = 0.140). A formal certified CPR training program increases CPR knowledge and skill; however, significant long-term effects could not be found. There is a need for regular and periodic recertification.
Improving Learner Outcomes in Lifelong Education: Formal Pedagogies in Non-Formal Learning Contexts?
ERIC Educational Resources Information Center
Zepke, Nick; Leach, Linda
2006-01-01
This article explores how far research findings about successful pedagogies in formal post-school education might be used in non-formal learning contexts--settings where learning may not lead to formal qualifications. It does this by examining a learner outcomes model adapted from a synthesis of research into retention. The article first…
Equivalent model and power flow model for electric railway traction network
NASA Astrophysics Data System (ADS)
Wang, Feng
2018-05-01
An equivalent model of the Cable Traction Network (CTN), considering the distributed capacitance effect of the cable system, is proposed. The model comes in two forms: a 110 kV-side model and a 27.5 kV-side model. The 110 kV-side equivalent model can be used to calculate the power supply capacity of the CTN, while the 27.5 kV-side equivalent model can be used to solve for the voltage of the catenary. Based on the equivalent simplified model of the CTN, a power flow model of the CTN is derived which accounts for the reactive power compensation coefficient and the interaction of voltage and current.
A data-driven model for constraint of present-day glacial isostatic adjustment in North America
NASA Astrophysics Data System (ADS)
Simon, K. M.; Riva, R. E. M.; Kleinherenbrink, M.; Tangdamrongsub, N.
2017-09-01
Geodetic measurements of vertical land motion and gravity change are incorporated into an a priori model of present-day glacial isostatic adjustment (GIA) in North America via least-squares adjustment. The result is an updated GIA model wherein the final predicted signal is informed by both observational data, and prior knowledge (or intuition) of GIA inferred from models. The data-driven method allows calculation of the uncertainties of predicted GIA fields, and thus offers a significant advantage over predictions from purely forward GIA models. In order to assess the influence each dataset has on the final GIA prediction, the vertical land motion and GRACE-measured gravity data are incorporated into the model first independently (i.e., one dataset only), then simultaneously. The relative weighting of the datasets and the prior input is iteratively determined by variance component estimation in order to achieve the most statistically appropriate fit to the data. The best-fit model is obtained when both datasets are inverted and gives respective RMS misfits to the GPS and GRACE data of 1.3 mm/yr and 0.8 mm/yr equivalent water layer change. Non-GIA signals (e.g., hydrology) are removed from the datasets prior to inversion. The post-fit residuals between the model predictions and the vertical motion and gravity datasets, however, suggest particular regions where significant non-GIA signals may still be present in the data, including unmodeled hydrological changes in the central Prairies west of Lake Winnipeg. Outside of these regions of misfit, the posterior uncertainty of the predicted model provides a measure of the formal uncertainty associated with the GIA process; results indicate that this quantity is sensitive to the uncertainty and spatial distribution of the input data as well as that of the prior model information. In the study area, the predicted uncertainty of the present-day GIA signal ranges from ∼0.2-1.2 mm/yr for rates of vertical land motion, and from ∼3-4 mm/yr of equivalent water layer change for gravity variations.
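The fusion step can be sketched as a precision-weighted least-squares adjustment (a toy with identity design matrices and invented variances, not the paper's GIA functionals):

    import numpy as np

    # Toy version: estimate uplift rates x at n sites from two datasets
    # plus a prior model, weighting each source by its variance.
    rng = np.random.default_rng(0)
    n = 5
    x_true = rng.normal(0.0, 1.0, n)              # "true" rates, mm/yr
    x_prior = x_true + rng.normal(0.0, 0.5, n)    # forward-model prior
    y_gps = x_true + rng.normal(0.0, 0.3, n)      # vertical land motion
    y_grace = x_true + rng.normal(0.0, 0.2, n)    # gravity-derived rates

    def solve(w_prior, w_gps, w_grace):
        # With identity design matrices the least-squares solution is a
        # precision-weighted average of the three sources.
        num = w_prior * x_prior + w_gps * y_gps + w_grace * y_grace
        return num / (w_prior + w_gps + w_grace)

    # Weights are inverse variances; variance component estimation would
    # iterate them until they are consistent with the post-fit residuals.
    x_hat = solve(1 / 0.5**2, 1 / 0.3**2, 1 / 0.2**2)
    print(np.round(x_hat - x_true, 2))            # posterior errors

The posterior variance of the estimate, 1 / (w_prior + w_gps + w_grace) per site in this toy, is the analogue of the predicted GIA uncertainty reported in the paper.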
Reciprocal relations between cognitive neuroscience and formal cognitive models: opposites attract?
Forstmann, Birte U; Wagenmakers, Eric-Jan; Eichele, Tom; Brown, Scott; Serences, John T
2011-06-01
Cognitive neuroscientists study how the brain implements particular cognitive processes such as perception, learning, and decision-making. Traditional approaches in which experiments are designed to target a specific cognitive process have been supplemented by two recent innovations. First, formal cognitive models can decompose observed behavioral data into multiple latent cognitive processes, allowing brain measurements to be associated with a particular cognitive process more precisely and more confidently. Second, cognitive neuroscience can provide additional data to inform the development of formal cognitive models, providing greater constraint than behavioral data alone. We argue that these fields are mutually dependent; not only can models guide neuroscientific endeavors, but understanding neural mechanisms can provide key insights into formal models of cognition. Copyright © 2011 Elsevier Ltd. All rights reserved.
On the formalization and reuse of scientific research.
King, Ross D; Liakata, Maria; Lu, Chuan; Oliver, Stephen G; Soldatova, Larisa N
2011-10-07
The reuse of scientific knowledge obtained from one investigation in another investigation is basic to the advance of science. Scientific investigations should therefore be recorded in ways that promote the reuse of the knowledge they generate. The use of logical formalisms to describe scientific knowledge has potential advantages in facilitating such reuse. Here, we propose a formal framework for using logical formalisms to promote reuse. We demonstrate the utility of this framework by using it in a worked example from biology: demonstrating cycles of investigation formalization [F] and reuse [R] to generate new knowledge. We first used logic to formally describe a Robot scientist investigation into yeast (Saccharomyces cerevisiae) functional genomics [f(1)]. With Robot scientists, unlike human scientists, the production of comprehensive metadata about their investigations is a natural by-product of the way they work. We then demonstrated how this formalism enabled the reuse of the research in investigating yeast phenotypes [r(1) = R(f(1))]. This investigation found that the removal of non-essential enzymes generally resulted in enhanced growth. The phenotype investigation was then formally described using the same logical formalism as the functional genomics investigation [f(2) = F(r(1))]. We then demonstrated how this formalism enabled the reuse of the phenotype investigation to investigate yeast systems-biology modelling [r(2) = R(f(2))]. This investigation found that yeast flux-balance analysis models fail to predict the observed changes in growth. Finally, the systems biology investigation was formalized for reuse in future investigations [f(3) = F(r(2))]. These cycles of reuse are a model for the general reuse of scientific knowledge.
Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?
NASA Astrophysics Data System (ADS)
Troisi, Antonio
2017-03-01
Determining the cosmological field equations is still very much debated, and the problem has led to wide discussion of different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein's theory, such as higher-order gravity theories and higher-dimensional ones. Both of these approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, we discuss the possibility of developing a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of the four-dimensional cosmological matter-energy components. We describe the basic concepts of the model, the complete field equations formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related with higher derivative and higher-dimensional counter-terms. For the gravity model with f(R) = f_0 R^n, the possibility of obtaining five-dimensional power-law solutions is investigated. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined in correspondence to simple cases of such higher-dimensional solutions.
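For orientation, the fourth-order field equations underlying such models take the standard f(R) form (written here for a generic metric; in the paper the indices run over the five-dimensional bulk):

    \[
    f'(R)\,R_{\mu\nu} - \tfrac{1}{2}\,f(R)\,g_{\mu\nu}
    - \left(\nabla_{\mu}\nabla_{\nu} - g_{\mu\nu}\,\Box\right) f'(R)
    = \kappa\, T_{\mu\nu},
    \qquad f(R) = f_0 R^{n},
    \]

where the terms beyond the Einstein tensor are the higher-derivative contributions that, after the 5-D to 4-D reduction, play the role of geometrically induced sources.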
A fish-like robot: Mechanics of swimming due to constraints
NASA Astrophysics Data System (ADS)
Tallapragada, Phanindra; Malla, Rijan
2014-11-01
It is well known that, due to reasons of symmetry, a body with one degree of actuation cannot swim in an ideal fluid. However, certain velocity constraints arising in fluid-body interactions, such as the Kutta condition classically applied at the trailing cusp of a Joukowski hydrofoil, break this symmetry through vortex shedding. Thus Joukowski foils that vary their shape periodically can be shown to be able to swim through vortex shedding. In general it can be shown that vortex shedding due to the Kutta condition is equivalent to nonintegrable constraints arising in the mechanics of finite-dimensional mechanical systems. This equivalence allows hydrodynamic problems involving vortex shedding, especially those pertaining to swimming and related phenomena, to be framed in the context of geometric mechanics on manifolds. This formal equivalence also allows the design of bio-inspired robots that swim not due to shape change but due to internal moving masses and rotors. Such robots, lacking articulated joints, are easy to design, build and control. We present such a fish-like robot that swims due to the rotation of internal rotors.
Logics for Coalgebras of Finitary Set Functors
NASA Astrophysics Data System (ADS)
Sprunger, David
In this thesis, we present a collection of results about coalgebras of finitary Set functors. Our chief contribution is a logic for behavioral equivalence of states in these coalgebras. This proof system is intended to formalize a common pattern of reasoning in the study of coalgebra, known as proof by bisimulation or bisimulation up-to. The approach in this thesis combines these up-to techniques with a concept very close to bisimulation to show the proof system is sound and complete with respect to behavioral equivalence. Our second category of contributions revolves around applications of coalgebra to the study of sequences and power series. The culmination of this work is a new approach to Christol's Theorem, a classic result characterizing the algebraic power series in finite characteristic rings as those whose coefficients can be produced by finite automata.
On the adequacy of current empirical evaluations of formal models of categorization.
Wills, Andy J; Pothos, Emmanuel M
2012-01-01
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes (principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus).
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
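In generic textbook notation (not necessarily the author's), the two model classes being compared are

    \[
    \mathbf{y}_t = \sum_{u=0}^{s} \Lambda_u\, \boldsymbol{\eta}_{t-u} + \boldsymbol{\varepsilon}_t
    \quad \text{(dynamic factor model with lagged loadings)},
    \qquad
    \mathbf{y}_t = \sum_{i=1}^{p} A_i\, \mathbf{y}_{t-i} + \mathbf{e}_t
    \quad \text{(vector autoregression)},
    \]

and the rotation result reflects that replacing the factor series by any nonsingular transform of itself, with the inverse absorbed into the loadings, leaves the observed covariance structure unchanged.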
NASA Astrophysics Data System (ADS)
Farzanehpour, Mehdi; Tokatly, Ilya; Nano-Bio Spectroscopy Group; ETSF Scientific Development Centre Team
2015-03-01
We present a rigorous formulation of the time-dependent density functional theory for interacting lattice electrons strongly coupled to cavity photons. We start with an example of one particle on a Hubbard dimer coupled to a single photonic mode, which is equivalent to the single mode spin-boson model or the quantum Rabi model. For this system we prove that the electron-photon wave function is a unique functional of the electronic density and the expectation value of the photonic coordinate, provided the initial state and the density satisfy a set of well defined conditions. Then we generalize the formalism to many interacting electrons on a lattice coupled to multiple photonic modes and prove the general mapping theorem. We also show that for a system evolving from the ground state of a lattice Hamiltonian any density with a continuous second time derivative is locally v-representable. Spanish Ministry of Economy and Competitiveness (Grant No. FIS2013-46159-C3-1-P), Grupos Consolidados UPV/EHU del Gobierno Vasco (Grant No. IT578-13), COST Actions CM1204 (XLIC) and MP1306 (EUSpec).
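The dimer example corresponds to the quantum Rabi model, which in a standard form (ħ = 1; symbols here are generic, with ω the mode frequency, Δ the level splitting and λ the coupling) reads

    \[
    \hat{H} = \omega\, \hat{a}^{\dagger} \hat{a}
    + \frac{\Delta}{2}\, \hat{\sigma}_z
    + \lambda\, \hat{\sigma}_x \left( \hat{a} + \hat{a}^{\dagger} \right),
    \]

with the pseudospin describing the electron on the two-site dimer and the bosonic mode the cavity photon.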
Discrete differential geometry: The nonplanar quadrilateral mesh
NASA Astrophysics Data System (ADS)
Twining, Carole J.; Marsland, Stephen
2012-06-01
We consider the problem of constructing a discrete differential geometry defined on nonplanar quadrilateral meshes. Physical models on discrete nonflat spaces are of inherent interest, as well as being used in applications such as computation for electromagnetism, fluid mechanics, and image analysis. However, the majority of analysis has focused on triangulated meshes. We consider two approaches: discretizing the tensor calculus, and a discrete mesh version of differential forms. While these two approaches are equivalent in the continuum, we show that this is not true in the discrete case. Nevertheless, we show that it is possible to construct mesh versions of the Levi-Civita connection (and hence the tensorial covariant derivative and the associated covariant exterior derivative), the torsion, and the curvature. We show how discrete analogs of the usual vector integral theorems are constructed in such a way that the appropriate conservation laws hold exactly on the mesh, rather than only as approximations to the continuum limit. We demonstrate the success of our method by constructing a mesh version of classical electromagnetism and discuss how our formalism could be used to deal with other physical models, such as fluids.
Quantum Hamilton equations of motion for bound states of one-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Köppe, J.; Patzold, M.; Grecksch, W.; Paul, W.
2018-06-01
On the basis of Nelson's stochastic mechanics derivation of the Schrödinger equation, a formal mathematical structure of non-relativistic quantum mechanics equivalent to the one in classical analytical mechanics has been established in the literature. We recently were able to augment this structure by deriving quantum Hamilton equations of motion by finding the Nash equilibrium of a stochastic optimal control problem, which is the generalization of Hamilton's principle of classical mechanics to quantum systems. We showed that these equations allow a description and numerical determination of the ground state of quantum problems without using the Schrödinger equation. We extend this approach here to deliver the complete discrete energy spectrum and related eigenfunctions for bound states of one-dimensional stationary quantum systems. We exemplify this analytically for the one-dimensional harmonic oscillator and numerically by analyzing a quartic double-well potential, a model of broad importance in many areas of physics. We furthermore point out a relation between the tunnel splitting of such models and mean first passage time concepts applied to Nelson's diffusion paths in the ground state.
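For the harmonic-oscillator ground state, the Nelson diffusion in question is the familiar Ornstein-Uhlenbeck process (a standard result quoted for orientation, not the paper's derivation):

    \[
    dX_t = -\,\omega X_t\, dt + \sqrt{\hbar/m}\; dW_t,
    \qquad
    \rho_{\mathrm{stat}}(x) \propto e^{-m\omega x^{2}/\hbar} = \lvert \psi_0(x) \rvert^{2},
    \]

whose stationary density reproduces the quantum ground-state probability density without invoking the Schrödinger equation; mean first passage times of such paths between wells underlie the tunnel-splitting relation mentioned above.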
NASA Technical Reports Server (NTRS)
Glytsis, Elias N.; Brundrett, David L.; Gaylord, Thomas K.
1993-01-01
A review of the rigorous coupled-wave analysis as applied to the diffraction of electromagnetic waves by gratings is presented. The analysis is valid for any polarization, angle of incidence, and conical diffraction. Cascaded and/or multiplexed gratings as well as material anisotropy can be incorporated under the same formalism. Small-period rectangular-groove gratings can also be modeled using approximately equivalent uniaxial homogeneous layers (effective media). The ordinary and extraordinary refractive indices of these layers depend on the grating's filling factor, the refractive indices of the substrate and superstrate, and the ratio of the free-space wavelength to the grating period. Comparisons of the homogeneous effective-medium approximations with the rigorous coupled-wave analysis are presented. Antireflection designs (single-layer or multilayer) using the effective-medium models are presented and compared. These ultra-short-period antireflection gratings can also be used to produce soft x-rays. Comparisons of the rigorous coupled-wave analysis with experimental results on soft x-ray generation by gratings are also included.
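For reference, the zeroth-order effective-medium indices of a binary grating with filling factor f and groove/ridge refractive indices n1 and n2 are (in the limit of period much smaller than the wavelength)

    \[
    n_{\mathrm{o}}^{2} = f\, n_{2}^{2} + (1 - f)\, n_{1}^{2},
    \qquad
    \frac{1}{n_{\mathrm{e}}^{2}} = \frac{f}{n_{2}^{2}} + \frac{1 - f}{n_{1}^{2}},
    \]

with the ordinary index applying to the electric field polarized along the grooves and the extraordinary index to the orthogonal polarization.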
Quantum cluster variational method and message passing algorithms revisited
NASA Astrophysics Data System (ADS)
Domínguez, E.; Mulet, Roberto
2018-02-01
We present a general framework to study quantum disordered systems in the context of Kikuchi's cluster variational method (CVM). The method relies on the solution of message-passing-like equations for single instances or on the iterative solution of complex population dynamics algorithms for an average-case scenario. We first show how a standard application of Kikuchi's CVM can be easily translated to message-passing equations for specific instances of the disordered system. We then present an "ad hoc" extension of these equations to a population dynamics algorithm representing an average-case scenario. At the Bethe level, these equations are equivalent to the dynamic population equations that can be derived from a proper cavity ansatz. However, at the plaquette approximation, the interpretation is more subtle and we discuss it taking also into account previous results in classical disordered models. Moreover, we develop a formalism to properly deal with the average-case scenario using a replica-symmetric ansatz within this CVM for quantum disordered systems. Finally, we present and discuss numerical solutions of the different approximations for the quantum transverse Ising model and the quantum random field Ising model in two-dimensional lattices.
Hybrid diffusion-P3 equation in N-layered turbid media: steady-state domain.
Shi, Zhenzhi; Zhao, Huijuan; Xu, Kexin
2011-10-01
This paper discusses light propagation in N-layered turbid media. The hybrid diffusion-P3 equation is solved for an N-layered finite or infinite turbid medium in the steady-state domain for one point source using the extrapolated boundary condition. The Fourier transform formalism is applied to derive the analytical solutions of the fluence rate in Fourier space. Two inverse Fourier transform methods are developed to calculate the fluence rate in real space. In addition, the solutions of the hybrid diffusion-P3 equation are compared to the solutions of the diffusion equation and the Monte Carlo simulation. For the case of small absorption coefficients, the solutions of the N-layered diffusion equation and hybrid diffusion-P3 equation are almost equivalent and are in agreement with the Monte Carlo simulation. For the case of large absorption coefficients, the model of the hybrid diffusion-P3 equation is more precise than that of the diffusion equation. In conclusion, the model of the hybrid diffusion-P3 equation can replace the diffusion equation for modeling light propagation in the N-layered turbid media for a wide range of absorption coefficients.
Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo
2016-10-19
Inspired by significant improvements obtained in the performance of the polymer reference interaction site model (PRISM) theory of the fluid phase when coupled with 'molecular closures' (Schweizer and Yethiraj 1993 J. Chem. Phys. 98 9053), we exploit a matrix generalization of this concept, suitable for the more general RISM framework. We report a preliminary test of the formalism, as applied to prototype square-well homonuclear diatomics. As for the structure, comparison with Monte Carlo shows that molecular closures are slightly more predictive than their 'atomic' counterparts, and thermodynamic properties are equally accurate. We also devise an application of molecular closures to models interacting via continuous, soft-core potentials, by using well-established prescriptions in liquid-state perturbation theories. In the case of Lennard-Jones dimers, our scheme definitely improves over the atomic one, providing semi-quantitative structural results and quite good estimates of internal energy, pressure and phase coexistence. Our findings pave the way to a systematic employment of molecular closures within the RISM framework, to be applied to more complex systems, such as molecules constituted by several non-equivalent interaction sites.
NASA Astrophysics Data System (ADS)
Juraschek, Dominik M.; Fechner, Michael; Balatsky, Alexander V.; Spaldin, Nicola A.
2017-06-01
An appealing mechanism for inducing multiferroicity in materials is the generation of electric polarization by a spatially varying magnetization that is coupled to the lattice through the spin-orbit interaction. Here we describe the reciprocal effect, in which a time-dependent electric polarization induces magnetization even in materials with no existing spin structure. We develop a formalism for this dynamical multiferroic effect in the case for which the polarization derives from optical phonons, and compute the strength of the phonon Zeeman effect, which is the solid-state equivalent of the well-established vibrational Zeeman effect in molecules, using density functional theory. We further show that a recently observed behavior—the resonant excitation of a magnon by optically driven phonons—is described by the formalism. Finally, we discuss examples of scenarios that are not driven by lattice dynamics and interpret the excitation of Dzyaloshinskii-Moriya-type electromagnons and the inverse Faraday effect from the viewpoint of dynamical multiferroicity.
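The central relation of the dynamical multiferroic effect can be written schematically (up to a material-dependent prefactor) as

    \[
    \mathbf{M}(t) \;\propto\; \mathbf{P}(t) \times \frac{\partial \mathbf{P}(t)}{\partial t},
    \]

the time-domain counterpart of the spatial mechanism in which a winding magnetization texture induces a polarization; for optical phonons, P is carried by the ionic displacements of two orthogonal modes driven with a phase lag.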
Cosmological perturbation theory using the FFTLog: formalism and connection to QFT loop integrals
NASA Astrophysics Data System (ADS)
Simonović, Marko; Baldauf, Tobias; Zaldarriaga, Matias; Carrasco, John Joseph; Kollmeier, Juna A.
2018-04-01
We present a new method for calculating loops in cosmological perturbation theory. This method is based on approximating a ΛCDM-like cosmology as a finite sum of complex power-law universes. The decomposition is naturally achieved using an FFTLog algorithm. For power-law cosmologies, all loop integrals are formally equivalent to loop integrals of massless quantum field theory. These integrals have analytic solutions in terms of generalized hypergeometric functions. We provide explicit formulae for the one-loop and the two-loop power spectrum and the one-loop bispectrum. A chief advantage of our approach is that the difficult part of the calculation is cosmology independent, need be done only once, and can be recycled for any relevant predictions. Evaluation of standard loop diagrams then boils down to a simple matrix multiplication. We demonstrate the promise of this method for applications to higher multiplicity/loop correlation functions.
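Schematically, the method rests on decomposing the linear power spectrum into complex power laws with the FFTLog,

    \[
    \bar{P}(k) = \sum_{m} c_m\, k^{\nu + i\eta_m},
    \]

after which every one-loop convolution integral reduces to master integrals of the massless-QFT form

    \[
    \int \frac{d^{3}q}{(2\pi)^{3}}\,
    \frac{1}{q^{2\nu_1}\,\lvert \mathbf{k} - \mathbf{q} \rvert^{2\nu_2}}
    = k^{3 - 2\nu_1 - 2\nu_2}\, \mathrm{I}(\nu_1, \nu_2),
    \]

where I(ν1, ν2) is a closed-form ratio of gamma functions that can be tabulated once and reused (notation ours, following the general structure described in the abstract).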
Equivalence of meson scattering amplitudes in strong coupling lattice and flat space string theory
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin; Vadacchino, Davide
2018-03-01
We consider meson scattering in the framework of the lattice strong coupling expansion. In particular we derive an expression for the 4-point function of meson operators in the planar limit of scalar Chromodynamics. Interestingly, in the naive continuum limit the expression coincides with an independently known result, that of the worldline formalism. Moreover, it was argued by Makeenko and Olesen that (assuming confinement) the resulting scattering amplitude in momentum space is the celebrated expression proposed by Veneziano several decades ago. This motivates us to also use holography in order to argue that the continuum expression for the scattering amplitude is related to the result obtained from flat space string theory. Our results hint that at strong coupling and large-Nc the naive continuum limit of the lattice formalism can be related to a flat space string theory.
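The Veneziano amplitude referred to above is, in standard notation with linear Regge trajectories,

    \[
    A(s, t) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s) - \alpha(t))},
    \qquad
    \alpha(x) = \alpha(0) + \alpha' x,
    \]

which is the four-point expression that the Makeenko-Olesen argument recovers from the lattice result in the continuum limit.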
NASA Technical Reports Server (NTRS)
Stamnes, K.; Lie-Svendsen, O.; Rees, M. H.
1991-01-01
The linear Boltzmann equation can be cast in a form mathematically identical to the radiation-transport equation. A multigroup procedure is used to reduce the energy (or velocity) dependence of the transport equation to a series of one-speed problems. Each of these one-speed problems is equivalent to the monochromatic radiative-transfer problem, and existing software is used to solve this problem in slab geometry. The numerical code conserves particles in elastic collisions. Generic examples are provided to illustrate the applicability of this approach. Although this formalism can, in principle, be applied to a variety of test particle or linearized gas dynamics problems, it is particularly well-suited to study the thermalization of suprathermal particles interacting with a background medium when the thermal motion of the background cannot be ignored. Extensions of the formalism to include external forces and spherical geometry are also feasible.
Providing solid angle formalism for skyshine calculations
Pahikkala, A. Jussi; Rising, Mary B.; McGinley, Patton H.
2010-01-01
We detail, derive and correct the technical use of the solid angle variable identified in formal guidance that relates skyshine calculations to dose-equivalent rate. We further recommend it for use with all National Council on Radiation Protection and Measurements (NCRP), Institute of Physics and Engineering in Medicine (IPEM) and similar reports documented. In general, for beams of identical width but different resulting areas, the analytical pyramidal solution is 1.27 times greater (within ±1.0% maximum deviation) than a misapplied analytical conical solution through all field sizes up to 40×40 cm². Therefore, we recommend determining the exact results with the analytical pyramidal solution for square beams and the analytical conical solution for circular beams. PACS number(s): 87.52.‐g, 87.52.Df, 87.52.Tr, 87.53.‐j, 87.53.Bn, 87.53.Dq, 87.66.‐a, 89., 89.60.+x
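For reference, the two closed forms being contrasted are, from elementary solid-angle geometry (α and β the half-angles of a rectangular field, θ the half-angle of a cone),

    \[
    \Omega_{\text{pyramid}} = 4 \arcsin\left( \sin\alpha\, \sin\beta \right),
    \qquad
    \Omega_{\text{cone}} = 2\pi \left( 1 - \cos\theta \right),
    \]

so the misapplication corrected here consists of inserting the conical form where the field is square and the pyramidal form is exact.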
Entanglement entropy flow and the Ward identity.
Rosenhaus, Vladimir; Smolkin, Michael
2014-12-31
We derive differential equations for the flow of entanglement entropy as a function of the metric and the couplings of the theory. The variation of the universal part of entanglement entropy under a local Weyl transformation is related to the variation under a local change in the couplings. We show that this relation is, in fact, equivalent to the trace Ward identity. As a concrete application of our formalism, we express the entanglement entropy for massive free fields as a two-point function of the energy-momentum tensor.
Reciprocity relations in transmission electron microscopy: A rigorous derivation.
Krause, Florian F; Rosenauer, Andreas
2017-01-01
A concise derivation of the principle of reciprocity applied to realistic transmission electron microscopy setups is presented, making use of the multislice formalism. The equivalence of images acquired in conventional and scanning mode is thereby rigorously shown. The conditions for the applicability of the derived reciprocity relations are discussed. Furthermore, the positions of apertures in relation to the corresponding lenses are considered, a subject which has scarcely been addressed in previous publications. Copyright © 2016 Elsevier Ltd. All rights reserved.
Torus as phase space: Weyl quantization, dequantization, and Wigner formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ligabò, Marilena, E-mail: marilena.ligabo@uniba.it
2016-08-15
The Weyl quantization of classical observables on the torus (as phase space) without regularity assumptions is explicitly computed. The equivalence class of symbols yielding the same Weyl operator is characterized. The Heisenberg equation for the dynamics of general quantum observables is written through the Moyal brackets on the torus and the support of the Wigner transform is characterized. Finally, a dequantization procedure is introduced that applies, for instance, to the Pauli matrices. As a result we obtain the corresponding classical symbols.
Understanding and Leveraging a Supplier’s CMMI Efforts: A Guidebook for Acquirers (Revised for V1.3)
2011-09-01
and SCAMPI (e.g., ISO/IEC 15288, 12207, 15504; ISO 9001, EIA 632, and IEEE 1220), or if there are processes to be implemented that are not captured...process descriptions and tailoring as well as any formal audit results. ISO 9001:2008 is a quality management standard for development created and...maintained by the International Organisation for Standardisation (ISO). The American National Standard equivalent is ANSI/ISO/ASQ Q9001-2008
Understanding and Leveraging a Supplier’s CMMI(Trademark) Efforts: A Guidebook for Acquirers
2007-03-01
other than CMMI and SCAMPI (e.g., ISO/IEC 15288, 12207, 15504; ISO 9001, EIA 632, and IEEE 1220), or if there are processes to be implemented that are...tailoring as well as any formal audit results. ISO 9001:2000 is a quality management standard for development created and...maintained by the International Organisation for Standardisation (ISO). The American National Standard equivalent is ANSI/ISO/ASQ Q9001-2000
Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams
NASA Astrophysics Data System (ADS)
Kürten, K. E.; Clark, J. W.
2003-09-01
The problem of controlling higher-order interactions in neural networks is addressed with techniques commonly applied in the cluster analysis of quantum many-particle systems. For multineuron synaptic weights chosen according to a straightforward extension of the standard Hebbian learning rule, we show that higher-order contributions to the stimulus felt by a given neuron can be readily evaluated via Pólya's combinatoric group-theoretical approach or, equivalently, by exploiting a precise formal analogy with fermion diagrammatics.
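A minimal numerical sketch of one such higher-order Hebbian extension (our construction for illustration; pattern counts, sizes and the normalization are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 20, 5
    xi = rng.choice([-1, 1], size=(P, N))    # P random +/-1 patterns

    # Pairwise Hebb rule and a hypothetical third-order extension:
    # w_ijk averages the triple products of the stored patterns.
    w2 = np.einsum('pi,pj->ij', xi, xi) / N
    w3 = np.einsum('pi,pj,pk->ijk', xi, xi, xi) / N**2

    # Stimulus ("local field") on neuron i in state s now contains a
    # second-order contribution from every pair of other neurons.
    s = xi[0]
    h = w2 @ s + np.einsum('ijk,j,k->i', w3, s, s)
    print((np.sign(h) == s).mean())          # overlap with stored pattern,
                                             # expected to be close to 1

Evaluating and resumming such higher-order terms by hand is what the Pólya and cluster-diagram machinery organizes.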
RETRACTION (G' / G)-expansion method equivalent to the extended tanh-function method
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Abdou, M. A.; El-Shewy, E. K.; Hendi, A.; Abdelwahed, H. G.
2010-10-01
This paper has been formally retracted on ethical grounds due to the similarity in content, presentation and style to another article published by Liu Chun-Ping in the journal Communications in Theoretical Physics (Chun-Ping 2009 Commun. Theor. Phys. 51 985). It is unfortunate that this was not detected before going to press. Our thanks go to the original author for bringing this fact to our attention. Corrections were made to this article on 22 October 2010.
On Structural Equation Model Equivalence.
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
1999-01-01
Presents a necessary and sufficient condition for the equivalence of structural-equation models that is applicable to models with parameter restrictions and models that may or may not fulfill assumptions of the rules. Illustrates the application of the approach for studying model equivalence. (SLD)
The HACMS program: using formal methods to eliminate exploitable bugs
Launchbury, John; Richards, Raymond
2017-01-01
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050
SO(8) fermion dynamical symmetry and strongly correlated quantum Hall states in monolayer graphene
NASA Astrophysics Data System (ADS)
Wu, Lian-Ao; Murphy, Matthew; Guidry, Mike
2017-03-01
A formalism is presented for treating strongly correlated graphene quantum Hall states in terms of an SO(8) fermion dynamical symmetry that includes pairing as well as particle-hole generators. The graphene SO(8) algebra is isomorphic to an SO(8) algebra that has found broad application in nuclear physics, albeit with physically very different generators, and exhibits a strong formal similarity to SU(4) symmetries that have been proposed to describe high-temperature superconductors. The well-known SU(4) symmetry of quantum Hall ferromagnetism for single-layer graphene is recovered as one subgroup of SO(8), but the dynamical symmetry structure associated with the full set of SO(8) subgroup chains extends quantum Hall ferromagnetism and allows analytical many-body solutions for a rich set of collective states exhibiting spontaneously broken symmetry that may be important for the low-energy physics of graphene in strong magnetic fields. The SO(8) symmetry permits a natural definition of generalized coherent states that correspond to symmetry-constrained Hartree-Fock-Bogoliubov solutions, or equivalently a microscopically derived Ginzburg-Landau formalism, exhibiting the interplay between competing spontaneously broken symmetries in determining the ground state.
NASA Astrophysics Data System (ADS)
Adams, John E.; Stratt, Richard M.
1990-08-01
For the instantaneous normal mode analysis method to be generally useful in studying the dynamics of clusters of arbitrary size, it ought to yield values of atomic self-diffusion constants which agree with those derived directly from molecular dynamics calculations. The present study proposes that such agreement indeed can be obtained if a sufficiently sophisticated formalism for computing the diffusion constant is adopted, such as the one suggested by Madan, Keyes, and Seeley [J. Chem. Phys. 92, 7565 (1990)]. In order to implement this particular formalism, however, we have found it necessary to pay particular attention to the removal from the computed spectra of spurious rotational contributions. The utility of the formalism is demonstrated via a study of small argon clusters, for which numerous results generated using other approaches are available. We find the same temperature dependence of the Ar13 self-diffusion constant that Beck and Marchioro [J. Chem. Phys. 93, 1347 (1990)] do from their direct calculation of the velocity autocorrelation function: The diffusion constant rises quickly from zero to a liquid-like value as the cluster goes through (the finite-size equivalent of) the melting transition.
Fitting Higgs data with nonlinear effective theory.
Buchalla, G; Catà, O; Celis, A; Krause, C
2016-01-01
In a recent paper we showed that the electroweak chiral Lagrangian at leading order is equivalent to the conventional κ formalism used by ATLAS and CMS to test Higgs anomalous couplings. Here we apply this fact to fit the latest Higgs data. The new aspect of our analysis is a systematic interpretation of the fit parameters within an EFT. Concentrating on the processes of Higgs production and decay that have been measured so far, six parameters turn out to be relevant: [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. A global Bayesian fit is then performed with the result [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text]. Additionally, we show how this leading-order parametrization can be generalized to next-to-leading order, thus improving the κ formalism systematically. The differences with a linear EFT analysis including operators of dimension six are also discussed. One of the main conclusions of our analysis is that since the conventional κ formalism can be properly justified within a QFT framework, it should continue to play a central role in analyzing and interpreting Higgs data.
The HACMS program: using formal methods to eliminate exploitable bugs.
Fisher, Kathleen; Launchbury, John; Richards, Raymond
2017-10-13
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles.This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
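A schematic of the PLS-R calibration pipeline using scikit-learn, on synthetic stand-in "spectra" (band shapes, noise levels and sample counts are invented; the study used measured chemiluminescence spectra):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(42)
    n_samples, n_wavelengths = 45, 300
    phi = rng.uniform(0.7, 1.5, n_samples)          # equivalence ratios
    bands = rng.normal(size=(3, n_wavelengths))     # pseudo emission bands
    weights = np.column_stack([phi, phi**2, np.ones_like(phi)])
    X = weights @ bands + rng.normal(0.0, 0.05, (n_samples, n_wavelengths))

    pls = PLSRegression(n_components=3)             # latent variables
    pls.fit(X[:30], phi[:30])                       # calibration set
    phi_pred = pls.predict(X[30:]).ravel()          # held-out set
    print(np.abs(phi_pred - phi[30:]).max())        # worst absolute error

Because the regression consumes the full spectrum, no background subtraction of the broadband CO2 emission is needed, which is the practical advantage noted above.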
NASA Astrophysics Data System (ADS)
Vassiliev, Oleg N.; Grosshans, David R.; Mohan, Radhe
2017-10-01
We propose a new formalism for calculating parameters α and β of the linear-quadratic model of cell survival. This formalism, primarily intended for calculating relative biological effectiveness (RBE) for treatment planning in hadron therapy, is based on a recently proposed microdosimetric revision of the single-target multi-hit model. The main advantage of our formalism is that it reliably produces α and β that have correct general properties with respect to their dependence on physical properties of the beam, including the asymptotic behavior for very low and high linear energy transfer (LET) beams. For example, in the case of monoenergetic beams, our formalism predicts that, as a function of LET, (a) α has a maximum and (b) the α/β ratio increases monotonically with increasing LET. No prior models reviewed in this study predict both properties (a) and (b) correctly, and therefore, these prior models are valid only within a limited LET range. We first present our formalism in a general form, for polyenergetic beams. A significant new result in this general case is that parameter β is represented as an average over the joint distribution of energies E1 and E2 of two particles in the beam. This result is consistent with the role of the quadratic term in the linear-quadratic model. It accounts for the two-track mechanism of cell kill, in which two particles, one after another, damage the same site in the cell nucleus. We then present simplified versions of the formalism, and discuss predicted properties of α and β. Finally, to demonstrate consistency of our formalism with experimental data, we apply it to fit two sets of experimental data: (1) α for heavy ions, covering a broad range of LETs, and (2) β for protons. In both cases, good agreement is achieved.
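For context, the linear-quadratic model writes the surviving fraction S after dose D as

    \[
    S(D) = e^{-\alpha D - \beta D^{2}},
    \]

so the α term captures single-track lethal events while the β term captures the two-track mechanism discussed in the abstract; RBE at a given survival level is the ratio of the reference-radiation dose to the hadron dose producing that same survival.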
An equivalent viscoelastic model for rock mass with parallel joints
NASA Astrophysics Data System (ADS)
Li, Jianchun; Ma, Guowei; Zhao, Jian
2010-03-01
An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
Influence of cellular and paracellular conductance patterns on epithelial transport and metabolism.
Essig, A
1982-01-01
Theoretical analysis of transepithelial active Na transport is often based on equivalent electrical circuits comprising discrete parallel active and passive pathways. Recent findings show, however, that Na+ pumps are distributed over the entire basal lateral surface of epithelial cells. This suggests that Na+ that has been actively transported into paracellular channels may to some extent return to the apical (mucosal) bathing solution, depending on the relative conductances of the pathways via the tight junctions and the lateral intercellular spaces. Such circulation, as well as the relative conductance of cellular and paracellular pathways, may have an important influence on the relationships between parameters of transcellular and transepithelial active transport and metabolism. These relationships were examined by equivalent circuit analysis of active Na transport, Na conductance, the electromotive force of Na transport, the "stoichiometry" of transport, and the degree of coupling of transport to metabolism. Although the model is too crude to permit precise quantification, important qualitative differences are predicted between "loose" and "tight" epithelia in the absence and presence of circulation. In contrast, there is no effect on the free energy of metabolic reaction estimated from a linear thermodynamic formalism. Also of interest are implications concerning the experimental evaluation of passive paracellular conductance following abolition of active transport, and the use of the cellular voltage-divider ratio to estimate the relative conductances of apical and basal lateral plasma membranes. PMID:6284264
Analytical and numerical construction of equivalent cables.
Lindsay, K A; Rosenberg, J R; Tucker, G
2003-08-01
The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
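The canonical-form idea can be illustrated with a small compartmental sketch (toy conductances and compartment sizes of our choosing, not the authors' construction): a nonsymmetric tridiagonal cable matrix is symmetrised by a diagonal similarity transform, which preserves its eigenvalues and hence the voltage dynamics.

    import numpy as np

    # Compartmental cable: C_i dV_i/dt = sum over neighbours of
    # g (V_j - V_i). Dividing by unequal capacitances C_i makes the
    # tridiagonal system matrix A nonsymmetric.
    g = np.array([1.0, 1.0, 0.5, 0.5])        # axial conductances
    C = np.array([1.0, 2.0, 2.0, 1.0, 0.5])   # compartment sizes
    n = C.size
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] += g[i] / C[i]
        A[i + 1, i] += g[i] / C[i + 1]
        A[i, i] -= g[i] / C[i]
        A[i + 1, i + 1] -= g[i] / C[i + 1]

    # Symmetrise with a diagonal similarity transform S = D A D^-1, which
    # works whenever opposite off-diagonal entries share the same sign.
    upper = np.array([A[i, i + 1] for i in range(n - 1)])
    lower = np.array([A[i + 1, i] for i in range(n - 1)])
    d = np.cumprod(np.r_[1.0, np.sqrt(upper / lower)])
    S = np.diag(d) @ A @ np.diag(1.0 / d)
    assert np.allclose(S, S.T)
    print(np.round(np.linalg.eigvalsh(S), 3))  # decay rates of eigenmodes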
Pattern formation, logistics, and maximum path probability
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are sufficiently strong interpretations of the second law of thermodynamics to define the approach to and the nature of patterned stable steady states. For many pattern-forming systems these principles define quantifiable stable states as maxima or minima (or both) in the dissipation. An elementary statistical-mechanical proof is offered. To turn the argument full circle, the transformations of the partitions and classes which are predicated upon such minimax entropic paths can through digital modeling be directly identified with the syntactic and inferential elements of deductive logic. It follows therefore that all self-organizing or pattern-forming systems which possess stable steady states approach these states according to the imperatives of formal logic, the optimum pattern with its rich endowment of equivalence relations representing the central theorem of the associated calculus. Logic is thus "the stuff of the universe," and biological evolution with its culmination in the human brain is the most significant example of all the irreversible pattern-forming processes. We thus conclude with a few remarks on the relevance of the contribution to the theory of evolution and to research on artificial intelligence.
NASA Astrophysics Data System (ADS)
Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong
2018-01-01
Discrete models such as the lumped parameter model and the finite element model are widely used in the solution of soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit or simulates frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of layered soil deposits based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system can exactly reproduce the natural frequencies and the hysteretic damping of soil deposits and provide more accurate results with fewer DOFs.
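As a rough illustration of the lumped system described above, the sketch below assembles mass and stiffness matrices for a fixed-base chain of concentrated masses and springs and solves the generalized eigenproblem for the undamped natural frequencies. All numerical values are placeholders, not the paper's mode-matched parameters.

```python
import numpy as np
from scipy.linalg import eigh

n = 10
m = np.full(n, 1.0e5)   # concentrated masses [kg]
k = np.full(n, 2.0e8)   # spring stiffnesses [N/m]; k[i] connects mass i downward

M = np.diag(m)
K = np.zeros((n, n))
for i in range(n):       # classic fixed-base shear-chain stiffness assembly
    K[i, i] += k[i]
    if i + 1 < n:
        K[i, i] += k[i + 1]
        K[i, i + 1] -= k[i + 1]
        K[i + 1, i] -= k[i + 1]

# Undamped natural frequencies from the generalized eigenproblem K v = w^2 M v.
w2, _ = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("first three natural frequencies [Hz]:", freqs_hz[:3])
```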
Landau's statistical mechanics for quasi-particle models
NASA Astrophysics Data System (ADS)
Bannur, Vishnu M.
2014-04-01
Landau's formalism of statistical mechanics [following L. D. Landau and E. M. Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1980)] is applied to the quasi-particle model of quark-gluon plasma. Here, one starts from the expression for pressure and develops all of thermodynamics. It is a general formalism and is consistent with our earlier studies [V. M. Bannur, Phys. Lett. B647, 271 (2007)] based on Pathria's formalism [following R. K. Pathria, Statistical Mechanics (Butterworth-Heinemann, Oxford, 1977)]. In Pathria's formalism, one starts from the expression for energy density and develops thermodynamics. Both formalisms are consistent with thermodynamics and statistical mechanics. Under certain conditions, which are incorrectly called the thermodynamically consistent relation, we recover other formalisms for quasi-particle systems, like that in M. I. Gorenstein and S. N. Yang, Phys. Rev. D52, 5206 (1995), widely studied in quark-gluon plasma.
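The "start from pressure" route is easy to illustrate numerically: given p(T), the entropy density follows as s = dp/dT and the energy density from the Euler relation e = Ts - p (at zero chemical potential). The sketch below uses an ideal-gas-like placeholder for p(T) rather than any quasi-particle expression.

```python
import numpy as np

T = np.linspace(0.15, 0.45, 301)        # temperature grid [GeV]
p = (8 * np.pi**2 / 45.0) * T**4        # placeholder equation of state p(T)

s = np.gradient(p, T)                   # entropy density: s = dp/dT
e = T * s - p                           # energy density: e = Ts - p

# For the conformal placeholder above, the trace anomaly vanishes.
print("(e - 3p)/T^4 at T = 0.3 GeV:",
      np.interp(0.3, T, (e - 3 * p) / T**4))
```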
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that of determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
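For readers unfamiliar with MC/DC, the sketch below illustrates the criterion on a toy decision: for each atomic condition it searches for an "independence pair" of test cases that differ only in that condition and flip the decision outcome. The decision function is invented for illustration and is not one of the paper's requirements.

```python
from itertools import product

def decision(a, b, c):
    # Toy decision with three atomic conditions.
    return a and (b or c)

conditions = ["a", "b", "c"]
tests = list(product([False, True], repeat=3))

for i, name in enumerate(conditions):
    # MC/DC asks for a pair of tests differing only in condition i
    # whose decision outcomes differ.
    pair = next(((t, u) for t in tests for u in tests
                 if t[i] != u[i]
                 and all(t[j] == u[j] for j in range(3) if j != i)
                 and decision(*t) != decision(*u)), None)
    print(name, "independence pair:", pair)
```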
TGF-beta signaling proteins and the Protein Ontology.
Arighi, Cecilia N; Liu, Hongfang; Natale, Darren A; Barker, Winona C; Drabkin, Harold; Blake, Judith A; Smith, Barry; Wu, Cathy H
2009-05-06
The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or post-translational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO is manually curated on the basis of PrePRO, an automatically generated file with content derived from standard protein data sources. Manual curation ensures that the treatment of the protein classes and the internal and external relationships conform to the PRO framework. The current release of PRO is based upon experimental data from mouse and human proteins wherein equivalent protein forms are represented by single terms. In addition to the PRO ontology, the annotation of PRO terms is released as a separate PRO association file, which contains, for each given PRO term, an annotation from the experimentally characterized sub-types as well as the corresponding database identifiers and sequence coordinates. The annotations are added in the form of relationship to other ontologies. Whenever possible, equivalent forms in other species are listed to facilitate cross-species comparison. Splice and allelic variants, gene fusion products and modified protein forms are all represented as entities in the ontology. Therefore, PRO provides for the representation of protein entities and a resource for describing the associated data. This makes PRO useful both for proteomics studies where isoforms and modified forms must be differentiated, and for studies of biological pathways, where representations need to take account of the different ways in which the cascade of events may depend on specific protein modifications. PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.
Formal Methods for Verification and Validation of Partial Specifications: A Case Study
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Callahan, John
1997-01-01
This paper describes our work exploring the suitability of formal specification methods for independent verification and validation (IV&V) of software specifications for large, safety critical systems. An IV&V contractor often has to perform rapid analysis on incomplete specifications, with no control over how those specifications are represented. Lightweight formal methods show significant promise in this context, as they offer a way of uncovering major errors, without the burden of full proofs of correctness. We describe a case study of the use of partial formal models for V&V of the requirements for Fault Detection Isolation and Recovery on the space station. We conclude that the insights gained from formalizing a specification are valuable, and it is the process of formalization, rather than the end product that is important. It was only necessary to build enough of the formal model to test the properties in which we were interested. Maintenance of fidelity between multiple representations of the same requirements (as they evolve) is still a problem, and deserves further study.
Saramma, P. P.; Raj, L. Suja; Dash, P. K.; Sarma, P. S.
2016-01-01
Context: Cardiopulmonary resuscitation (CPR) and emergency cardiovascular care guidelines are periodically renewed and published by the American Heart Association. Formal training programs are conducted based on these guidelines. Despite widespread training, CPR is often poorly performed. Hospital educators spend a significant amount of time and money in training health professionals and maintaining basic life support (BLS) and advanced cardiac life support (ACLS) skills among them. However, very little data are available in the literature highlighting the long-term impact of such training. Aims: To evaluate the impact of a formal certified CPR training program on the knowledge and skill of CPR among nurses, and to identify self-reported outcomes of attempted CPR and the training needs of nurses. Setting and Design: Tertiary care hospital; prospective, repeated-measures design. Subjects and Methods: A series of certified BLS and ACLS training programs were conducted during 2010 and 2011. Written and practical performance tests were done. Final testing was undertaken 3–4 years after training. The sample included all available, willing CPR-certified nurses and experience-matched CPR-noncertified nurses. Statistical Analysis Used: SPSS for Windows version 21.0. Results: The majority of the 206 nurses (93 CPR certified and 113 noncertified) were females. There was a statistically significant increase in mean knowledge level and overall performance before and after the formal certified CPR training program (P = 0.000). However, the mean knowledge scores were equivalent among the CPR-certified and noncertified nurses, although the certified nurses scored a higher mean score (P = 0.140). Conclusions: A formal certified CPR training program increases CPR knowledge and skill. However, significant long-term effects could not be found. There is a need for regular and periodic recertification. PMID:27303137
NASA Technical Reports Server (NTRS)
Kershaw, John
1990-01-01
The VIPER project has so far produced a formal specification of a 32 bit RISC microprocessor, an implementation of that chip in radiation-hard SOS technology, a partial proof of correctness of the implementation which is still being extended, and a large body of supporting software. The time has now come to consider what has been achieved and what directions should be pursued in the future. The most obvious lesson from the VIPER project was the time and effort needed to use formal methods properly. Most of the problems arose in the interfaces between different formalisms, e.g., between the (informal) English description and the HOL spec, between the block-level spec in HOL and the equivalent in ELLA needed by the low-level CAD tools. These interfaces need to be made rigorous or (better) eliminated. VIPER 1A (the latest chip) is designed to operate in pairs, to give protection against breakdowns in service as well as design faults. We have come to regard redundancy and formal design methods as complementary, the one to guard against normal component failures and the other to provide insurance against the risk of the common-cause failures which bedevil reliability predictions. Any future VIPER chips will certainly need improved performance to keep up with increasingly demanding applications. We have a prototype design (not yet specified formally) which includes 32 and 64 bit multiply, instruction pre-fetch, more efficient interface timing, and a new instruction to allow a quick response to peripheral requests. Work is under way to specify this device in MIRANDA, and then to refine the spec into a block-level design by top-down transformations. When the refinement is complete, a relatively simple proof checker should be able to demonstrate its correctness. This paper is presented in viewgraph form.
Interacting hadron resonance gas model in the K-matrix formalism
NASA Astrophysics Data System (ADS)
Dash, Ashutosh; Samanta, Subhasis; Mohanty, Bedangadas
2018-05-01
An extension of the hadron resonance gas (HRG) model is constructed to include interactions using the relativistic virial expansion of the partition function. The noninteracting part of the expansion contains all the stable baryons and mesons, and the interacting part contains all the higher-mass resonances which decay into two stable hadrons. The virial coefficients are related to the phase shifts, which are calculated using the K-matrix formalism in the present work. We have calculated various thermodynamic quantities like pressure, energy density, and entropy density of the system. A comparison of thermodynamic quantities with the noninteracting HRG model, calculated using the same number of hadrons, shows that the results of the above formalism are larger. A good agreement between the equation of state calculated in the K-matrix formalism and lattice QCD simulations is observed. Specifically, the lattice QCD calculated interaction measure is well described in our formalism. We have also calculated second-order fluctuations and correlations of conserved charges in the K-matrix formalism. We observe a good agreement of second-order fluctuations and the baryon-strangeness correlation with lattice data below the crossover temperature.
Formal Specification of Information Systems Requirements.
ERIC Educational Resources Information Center
Kampfner, Roberto R.
1985-01-01
Presents a formal model for specification of logical requirements of computer-based information systems that incorporates structural and dynamic aspects based on two separate models: the Logical Information Processing Structure and the Logical Information Processing Network. The model's role in systems development is discussed. (MBR)
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
All-Optical Stern-Gerlach Effect
NASA Astrophysics Data System (ADS)
Karnieli, Aviv; Arie, Ady
2018-01-01
We introduce a novel formalism in which the paraxial coupled wave equations of the nonlinear optical sum-frequency generation process are shown to be equivalent to the Pauli equation describing the dynamics of a spin-1/2 particle in a spatially varying magnetic field. This interpretation gives rise to a new classical state of paraxial light, described by a mutual beam comprising two frequencies. As a straightforward application, we propose the existence of an all-optical Stern-Gerlach effect, where an idler beam is deflected by a gradient in the nonlinear coupling into two mutual beams of the idler and signal waves (equivalent to oppositely oriented spinors), propagating in two discrete directions. The Stern-Gerlach deflection angle and the intensity pattern in the far field are then obtained analytically, in terms of the parameters of the original optical system, laying the grounds for future experimental realizations.
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity
NASA Technical Reports Server (NTRS)
Fu, L. S.; Mura, T.
1983-01-01
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.
A Study of Chaos in Cellular Automata
NASA Astrophysics Data System (ADS)
Kamilya, Supreeti; Das, Sukanta
This paper presents a study of chaos in one-dimensional cellular automata (CAs). The communication of information from one part of the system to another has been taken into consideration in this study. This communication is formalized as a binary relation over the set of cells. It is shown that this relation is an equivalence relation and that all the cells form a single equivalence class when the cellular automaton (CA) is chaotic. However, the communication between two cells is sometimes blocked in some CAs by a subconfiguration which appears between the cells during evolution. This blocking of communication by a subconfiguration has been analyzed in this paper with the help of the de Bruijn graph. We identify two types of blocking — full and partial. Finally, a parameter has been developed for the CAs. We show that the proposed parameter performs better than the existing parameters.
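A minimal simulation of a one-dimensional binary CA of the kind studied here takes only a few lines; rule 30 is used below as a standard example of chaotic elementary CA behaviour, and the lattice size and seeding are arbitrary choices.

```python
import numpy as np

def step(config, rule=30):
    # Periodic boundary; each cell reads (left, self, right) as a 3-bit index
    # into the rule table, following the usual Wolfram numbering.
    left, right = np.roll(config, 1), np.roll(config, -1)
    neighbourhood = (left << 2) | (config << 1) | right
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[neighbourhood]

config = np.zeros(64, dtype=np.uint8)
config[32] = 1                     # single seed cell
for _ in range(20):
    config = step(config)
print("density after 20 steps:", config.mean())
```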
NASA Astrophysics Data System (ADS)
Chakraborty, Sagnik
2018-03-01
We present a general framework for the information backflow (IB) approach to Markovianity that not only includes a large number, if not all, of the IB prescriptions proposed so far, but is also equivalent to completely positive divisibility for invertible evolutions. Following the common approach of IB, where monotonic decay of some physical property or some information quantifier is seen as the definition of Markovianity, we propose in our framework a general description of what should be called a proper "physicality quantifier" to define Markovianity. We elucidate different properties of our framework and use them to argue that an infinite family of non-Markovianity measures can be constructed, which would capture varied strengths of non-Markovianity in the dynamics. Moreover, we show that the generalized trace-distance measure in two dimensions serves as a sufficient criterion for IB Markovianity for a number of prescriptions suggested earlier in the literature.
Attribution of declining Western U.S. Snowpack to human effects
Pierce, D.W.; Barnett, T.P.; Hidalgo, H.G.; Das, T.; Bonfils, Celine; Santer, B.D.; Bala, G.; Dettinger, M.D.; Cayan, D.R.; Mirin, A.; Wood, A.W.; Nozawa, T.
2008-01-01
Observations show snowpack has declined across much of the western United States over the period 1950-99. This reduction has important social and economic implications, as water retained in the snowpack from winter storms forms an important part of the hydrological cycle and water supply in the region. A formal model-based detection and attribution (D-A) study of these reductions is performed. The detection variable is the ratio of 1 April snow water equivalent (SWE) to water-year-to-date precipitation (P), chosen to reduce the effect of P variability on the results. Estimates of natural internal climate variability are obtained from 1600 years of two control simulations performed with fully coupled ocean-atmosphere climate models. Estimates of the SWE/P response to anthropogenic greenhouse gases, ozone, and some aerosols are taken from multiple-member ensembles of perturbation experiments run with two models. The D-A shows the observations and anthropogenically forced models have greater SWE/P reductions than can be explained by natural internal climate variability alone. Model-estimated effects of changes in solar and volcanic forcing likewise do not explain the SWE/P reductions. The mean model estimate is that about half of the SWE/P reductions observed in the west from 1950 to 1999 are the result of climate changes forced by anthropogenic greenhouse gases, ozone, and aerosols. © 2008 American Meteorological Society.
Can we use the equivalent sphere model to approximate organ doses in space radiation environments?
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
For space radiation protection, one often calculates the dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier studies by examining whether the BFO, eyes or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with water or aluminum shielding from 0 to 20 g/cm2. We then compare these exact doses with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we propose a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for the BFO but is unacceptable for the eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eyes or the skin, but is unacceptable for the dose of the eyes or the skin. The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eyes, and 3.5 to 5.6 cm for the skin, while the radius parameters are between 10 and 13 cm for the BFO dose. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the equivalent sphere model with one radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events. The model works well or marginally well for the BFO but usually does not work for the eyes or the skin. A modified model with two radius parameters works much better in approximating the dose and dose equivalent in the eyes or the skin.
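Reduced to one dimension, the comparison above amounts to this: the exact organ dose is a thickness-distribution-weighted average of a depth-dose curve, while the equivalent sphere model replaces that average with the dose at a single effective depth. The sketch below uses invented stand-ins for both curves (not CAM-model data or transport-code output) purely to show how a radius parameter is extracted.

```python
import numpy as np

depths = np.linspace(0.1, 30.0, 300)       # depth in tissue [g/cm2]
depth_dose = np.exp(-depths / 10.0)        # placeholder SPE-like dose falloff

# Placeholder organ thickness distribution f(t), normalized to unit area.
f = np.exp(-0.5 * ((depths - 9.0) / 3.0) ** 2)
f /= np.trapz(f, depths)

organ_dose = np.trapz(f * depth_dose, depths)   # exact (averaged) organ dose

# Radius parameter: the single depth whose dose reproduces the organ dose.
r_equiv = np.interp(organ_dose, depth_dose[::-1], depths[::-1])
print("radius parameter [g/cm2]:", round(r_equiv, 2))
```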
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omar, A; Marteinsdottir, M; Kadesjo, N
Purpose: To provide a general formalism for determination of occupational eye lens dose based on the response of an active personal dosimeter (APD) worn at chest level above the radiation protection apron. Methods: The formalism consists of three factors: (1) an APD conversion factor converting the reading at chest level (APD_chest) to the corresponding personal dose equivalent at eye level, (2) a dose conversion factor transferring the measured dose quantity, Hp(10), into a dose quantity relevant for the eye lens dose, and (3) a correction factor accounting for differences in exposure of the eye(s) compared to the exposure at chest level (e.g., due to protective lead glasses). The different factors were investigated and evaluated based on phantom and clinical measurements performed in an x-ray angiography suite for interventional cardiology. Results: The eye lens dose can be conservatively estimated by assigning to each factor entering the formalism an appropriate numerical value that in most circumstances overestimates the dose. Doing so, the eye lens dose to the primary operator and assisting staff was estimated in this work as D_eye,primary = 2.0 APD_chest and D_eye,assisting = 1.0 APD_chest, respectively. The annual eye lens dose to three nurses and one cardiologist was estimated to be 2, 2, 2, and 13 mSv (Hp(0.07)), respectively, using a TLD dosimeter worn at eye level. In comparison, using the formalism and APD_chest measurements, the respective doses were 2, 2, 2, and 16 mSv (Hp(3)). Conclusion: The formalism outlined in this work can be used to estimate the occupational eye lens dose from the response of an APD worn on the chest. The formalism is general and could be applied also to other types of dosimeters. However, the numerical values of the different factors may differ from those obtained with the APDs used in this work due to differences in dosimeter properties.
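The three-factor formalism reduces to a one-line calculation. The sketch below encodes it with the conservative chest-to-eye factor of 2.0 quoted for the primary operator; the other two factor values are illustrative placeholders.

```python
def eye_lens_dose(apd_chest_msv,
                  f_level=2.0,      # (1) chest reading -> eye-level dose
                  f_quantity=1.0,   # (2) Hp(10) -> eye-lens dose quantity
                  f_geometry=1.0):  # (3) exposure-difference correction
    # Conservative eye lens dose estimate from the chest-worn APD reading.
    return f_level * f_quantity * f_geometry * apd_chest_msv

# Example: an annual chest-level APD reading of 8 mSv for a primary operator.
print("estimated eye lens dose [mSv]:", eye_lens_dose(8.0))
```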
The financial health of global health programs.
Liaw, Winston; Bazemore, Andrew; Mishori, Ranit; Diller, Philip; Bardella, Inis; Chang, Newton
2014-10-01
No studies have examined how established global health (GH) programs have achieved sustainability. The objective of this study was to describe the financial status of GH programs. In this cross-sectional survey of the Society of Teachers of Family Medicine's Group on Global Health, we assessed each program's affiliation, years of GH activities, whether or not participation was formalized, time spent on GH, funding, and anticipated funding. We received 31 responses (30% response rate); 55% were affiliated with residencies, 29% were affiliated with medical schools, 16% were affiliated with both, and 68% had formalized programs. Respondents spent 19% full-time equivalent (FTE) on GH and used a mean of 3.3 funding sources to support GH. Given a mean budget of $28,756, parent institutions provided 50% while 15% was from personal funds. Twenty-six percent thought their funding would increase in the next 2 years. Compared to residencies, medical school respondents devoted more time (26% FTE versus 13% FTE), used more funding categories (4.7 versus 2.2), and anticipated funding increases (42.8% versus 12.0%). Compared to younger programs (≤ 5 years), respondents from older programs (> 5 years) devoted more time (25% FTE versus 16% FTE) and used more funding categories (3.8 versus 2.9). Compared to those lacking formal programs, respondents from formalized programs were less likely to use personal funds (19% versus 60%). This limited descriptive study offers insight into the financial status of GH programs. Despite institutional support, respondents relied on personal funds and were pessimistic about future funding.
Modeling Cyber Conflicts Using an Extended Petri Net Formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakrzewska, Anita N; Ferragut, Erik M
2011-01-01
When threatened by automated attacks, critical systems that require human-controlled responses have difficulty making optimal responses and adapting protections in real time and may therefore be overwhelmed. Consequently, experts have called for the development of automatic real-time reaction capabilities. However, a technical gap exists in the modeling and analysis of cyber conflicts to automatically understand the repercussions of responses. There is a need for modeling cyber assets that accounts for concurrent behavior, incomplete information, and payoff functions. We address this need by extending the Petri net formalism to allow real-time cyber conflicts to be modeled in a way that is expressive and concise. This formalism includes transitions controlled by players as well as firing rates attached to transitions. This allows us to model both player actions and factors that are beyond the control of players in real time. We show that our formalism is able to represent situational awareness, concurrent actions, incomplete information and objective functions. These factors make it well-suited to modeling cyber conflicts in a way that allows for useful analysis. MITRE has compiled the Common Attack Pattern Enumeration and Classification (CAPEC), an extensive list of cyber attacks at various levels of abstraction. CAPEC includes factors such as attack prerequisites, possible countermeasures, and attack goals. These elements are vital to understanding cyber attacks and to generating the corresponding real-time responses. We demonstrate that the formalism can be used to extract precise models of cyber attacks from CAPEC. Several case studies show that our Petri net formalism is more expressive than other models, such as attack graphs, for modeling cyber conflicts and that it is amenable to exploring cyber strategies.
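A toy version of such an extended Petri net, with one rate-driven transition and one player-controlled transition, might look like the following; the places and transitions are invented for illustration and are not drawn from CAPEC.

```python
import random

places = {"recon_done": 1, "foothold": 0, "contained": 0}
transitions = {
    # Rate-attached transition: fires stochastically, outside player control.
    "exploit": {"pre": ["recon_done"], "post": ["foothold"], "rate": 0.8},
    # Player-controlled transition: fires when the defender chooses.
    "respond": {"pre": ["foothold"], "post": ["contained"], "player": "defender"},
}

def enabled(name):
    return all(places[p] > 0 for p in transitions[name]["pre"])

def fire(name):
    for p in transitions[name]["pre"]:
        places[p] -= 1
    for p in transitions[name]["post"]:
        places[p] += 1

if enabled("exploit") and random.random() < transitions["exploit"]["rate"]:
    fire("exploit")
if enabled("respond"):          # defender's move, taken here unconditionally
    fire("respond")
print(places)
```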
2013-01-01
Background: An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods: Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, the Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven) to formally test the null hypothesis that experience does not impact the hazard of injury. Results: We highlight the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions: Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
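The selected model is easy to state in code: the hazard is one constant inside an initial experience window and another constant afterwards, and the survival function follows by integrating the hazard piecewise. The 30% elevation mirrors the reported finding, while the baseline rate and the one-year cutoff are illustrative assumptions.

```python
import numpy as np

def hazard(t, lam=0.10, elevation=1.30, cutoff=1.0):
    # t: accumulated experience on the current job [years]
    return np.where(t < cutoff, elevation * lam, lam)

def survival(t, lam=0.10, elevation=1.30, cutoff=1.0):
    # S(t) = exp(-cumulative hazard), integrated piecewise at the cutoff.
    cum = np.where(t < cutoff,
                   elevation * lam * t,
                   elevation * lam * cutoff + lam * (t - cutoff))
    return np.exp(-cum)

t = np.array([0.5, 1.0, 2.0, 5.0])
print("S(t):", np.round(survival(t), 3))
```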
Ding, Hansheng; Wang, Changying; Xie, Chunyan; Yang, Yitong; Jin, Chunlin
2017-01-01
The need for formal care among the elderly population has been increasing due to their greater longevity and the evolution of family structure. We examined the determinants of the use and expenses of formal care among in-home elderly adults in Shanghai. A two-part model based on the data from the Shanghai Long-Term Care Needs Assessment Questionnaire was applied. A total of 8428 participants responded in 2014 and 7100 were followed up in 2015. The determinants of the probability of using formal care were analyzed in the first part of the model and the determinants of formal care expenses were analyzed in the second part. Demographic indicators, living arrangements, physical health status, and care type in 2014 were selected as independent variables. We found that individuals of older age; women; those with higher Activities of Daily Living (ADL) scores; those without a spouse; those with higher income; those suffering from stroke, dementia, lower limb fracture, or advanced tumor; and those with previous experience of formal and informal care were more likely to receive formal care in 2015. Furthermore, age, income and the formal care fee in 2014 were significant predictors of formal care expenses in 2015. Taken together, the results showed that formal care provision in Shanghai was not determined by ADL scores, but was instead more related to income. This implied an inappropriate distribution of formal care among the elderly population in Shanghai. Additionally, it appeared difficult for the elderly to quit formal care once they had begun to use it. These results highlighted the importance of assessing the need for formal care, and suggested that the government offer guidance on formal care use for the elderly. PMID:28448628
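A two-part model of this general shape can be sketched as follows: a logistic part for the probability of any formal care use, and a regression part for (log) expenses among users. The coefficients and covariates are invented placeholders, not the study's estimates, and the retransformation (smearing) correction is omitted for brevity.

```python
import numpy as np

def predict_expense(age, adl_score, income, b_use, b_exp):
    x = np.array([1.0, age, adl_score, income])   # intercept + covariates
    p_use = 1.0 / (1.0 + np.exp(-x @ b_use))      # part 1: P(expense > 0)
    log_expense = x @ b_exp                       # part 2: E[log expense | use]
    return p_use * np.exp(log_expense)            # naive unconditional mean

b_use = np.array([-6.0, 0.05, 0.02, 0.4])         # placeholder logit coefficients
b_exp = np.array([4.0, 0.01, 0.01, 0.2])          # placeholder expense coefficients
print("expected annual expense:",
      round(predict_expense(80, 12, 3.0, b_use, b_exp), 1))
```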
Properties of a Formal Method to Model Emergence in Swarm-Based Systems
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Vanderbilt, Amy; Truszkowski, Walt; Rash, James; Hinchey, Mike
2004-01-01
Future space missions will require cooperation between multiple satellites and/or rovers. Developers are proposing intelligent autonomous swarms for these missions, but swarm-based systems are difficult or impossible to test with current techniques. This viewgraph presentation examines the use of formal methods in testing swarm-based systems. The potential usefulness of formal methods in modeling the ANTS asteroid encounter mission is also examined.
An Evaluation of the Preceptor Model versus the Formal Teaching Model.
ERIC Educational Resources Information Center
Shamian, Judith; Lemieux, Suzanne
1984-01-01
This study evaluated the effectiveness of two teaching methods to determine which is more effective in enhancing the knowledge base of participating nurses: the preceptor model embodies decentralized instruction by a member of the nursing staff, and the formal teaching model uses centralized teaching by the inservice education department. (JOW)
Developing Formal Object-oriented Requirements Specifications: A Model, Tool and Technique.
ERIC Educational Resources Information Center
Jackson, Robert B.; And Others
1995-01-01
Presents a formal object-oriented specification model (OSS) for computer software system development that is supported by a tool that automatically generates a prototype from an object-oriented analysis model (OSA) instance, lets the user examine the prototype, and permits the user to refine the OSA model instance to generate a requirements…
Structuring Formal Control Systems Specifications for Reuse: Surviving Hardware Changes
NASA Technical Reports Server (NTRS)
Thompson, Jeffrey M.; Heimdahl, Mats P. E.; Erickson, Debra M.
2000-01-01
Formal capture and analysis of the required behavior of control systems have many advantages. For instance, it encourages rigorous requirements analysis, the required behavior is unambiguously defined, and we can assure that various safety properties are satisfied. Formal modeling is, however, a costly and time consuming process, and if one could reuse the formal models over a family of products, significant cost savings would be realized. In an ongoing project we are investigating how to structure state-based models to achieve a high level of reusability within product families. In this paper we discuss a high-level structure of requirements models that achieves reusability of the desired control behavior across varying hardware platforms in a product family. The structuring approach is demonstrated through a case study in the mobile robotics domain where the desired robot behavior is reused on two diverse platforms: one commercial mobile platform and one built in-house. We use our language RSML-e to capture the control behavior for reuse and our tool NIMBUS to demonstrate how the formal specification can be validated and used as a prototype on the two platforms.
From non-trivial geometries to power spectra and vice versa
NASA Astrophysics Data System (ADS)
Brooker, D. J.; Tsamis, N. C.; Woodard, R. P.
2018-04-01
We review a recent formalism which derives the functional forms of the primordial tensor and scalar power spectra of scalar potential inflationary models. The formalism incorporates the case of geometries with non-constant first slow-roll parameter. Analytic expressions for the power spectra are given that explicitly display the dependence on the geometric properties of the background. Moreover, we present the full algorithm for using our formalism to reconstruct the model from the observed power spectra. Our techniques are applied to models possessing "features" in their potential with excellent agreement.
Defects in crystalline packings of twisted filament bundles. I. Continuum theory of disclinations.
Grason, Gregory M
2012-03-01
We develop the theory of the coupling between in-plane order and out-of-plane geometry in twisted, two-dimensionally ordered filament bundles based on the nonlinear continuum elasticity theory of columnar materials. We show that twisted textures of filament backbones necessarily introduce stresses into the cross-sectional packing of bundles and that these stresses are formally equivalent to the geometrically induced stresses generated in thin elastic sheets that are forced to adopt spherical curvature. As in the case of crystalline order on curved membranes, geometrically induced stresses couple elastically to the presence of topological defects in the in-plane order. We derive the effective theory of multiple disclination defects in the cross section of bundle with a fixed twist and show that above a critical degree of twist, one or more fivefold disclinations is favored in the elastic energy ground state. We study the structure and energetics of multidisclination packings based on models of equilibrium and nonequilibrium cross-sectional order.
Parametric Quantum Search Algorithm as Quantum Walk: A Quantum Simulation
NASA Astrophysics Data System (ADS)
Ellinas, Demosthenes; Konstandakis, Christos
2016-02-01
Parametric quantum search algorithm (PQSA) is a form of quantum search that results from relaxing the unitarity of the original algorithm. PQSA can naturally be cast in the form of a quantum walk, by means of the formalism of oracle algebra. This is due to the fact that the completely positive trace-preserving search map used by PQSA admits a unitarization (unitary dilation) a la quantum walk, at the expense of introducing an auxiliary quantum coin-qubit space. The ensuing QW describes a process of spiral motion, chosen to be driven by two unitary Kraus generators, generating planar rotations of the Bloch vector around an axis. The quadratic acceleration of quantum search translates into an equivalent quadratic saving in the number of coin qubits in the QW analogue. The Hamiltonian operator associated with the QW model is obtained and is shown to represent a multi-particle long-range interacting quantum system that simulates parametric search. Finally, the relation of the PQSA-QW simulator to the QW search algorithm is elucidated.
Beyond statistical inference: A decision theory for science
Killeen, Peter R.
2008-01-01
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351
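A minimal sketch of this index, assuming the commonly used computation of the replication probability as Φ(z/√2) from an observed one-tailed p value; the utility numbers below are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def p_rep(p_obs):
    # Probability of replicating the direction of an effect, from one-tailed p.
    z = NormalDist().inv_cdf(1.0 - p_obs)
    return NormalDist().cdf(z / sqrt(2.0))

def expected_utility(p_obs, utility_true, cost_false):
    pr = p_rep(p_obs)
    return pr * utility_true - (1.0 - pr) * cost_false

# Significance testing as a special case: a false positive costs roughly an
# order of magnitude more than a true positive is worth.
print("p_rep(0.05) =", round(p_rep(0.05), 3))
print("EU =", round(expected_utility(0.05, utility_true=1.0, cost_false=10.0), 3))
```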
Lotka-Volterra representation of general nonlinear systems.
Hernández-Bermejo, B; Fairén, V
1997-02-01
In this article we elaborate on the structure of the generalized Lotka-Volterra (GLV) form for nonlinear differential equations. We discuss here the algebraic properties of the GLV family, such as the invariance under quasimonomial transformations and the underlying structure of classes of equivalence. Each class possesses a unique representative under the classical quadratic Lotka-Volterra form. We show how other standard modeling forms of biological interest, such as S-systems or mass-action systems, are naturally embedded into the GLV form, which thus provides a formal framework for their comparison and for the establishment of transformation rules. We also focus on the issue of recasting of general nonlinear systems into the GLV format. We present a procedure for doing so and point at possible sources of ambiguity that could make the resulting Lotka-Volterra system dependent on the path followed. We then provide some general theorems that define the operational and algorithmic framework in which this is not the case.
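The GLV format is straightforward to simulate. The sketch below integrates dx_i/dt = x_i (lambda_i + sum_j A_ij prod_k x_k^B_jk) for an invented two-variable example, with B set to the identity so that the classical quadratic Lotka-Volterra form is recovered.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1.0, -0.5])
A = np.array([[-0.1, -0.4],
              [ 0.3, -0.1]])
B = np.eye(2)    # B = identity recovers the classical quadratic LV form

def glv_rhs(t, x):
    monomials = np.prod(x[None, :] ** B, axis=1)   # prod_k x_k^{B_jk} per row j
    return x * (lam + A @ monomials)

sol = solve_ivp(glv_rhs, (0.0, 50.0), [1.0, 1.0])
print("state at t = 50:", sol.y[:, -1])
```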
Formal verification of AI software
NASA Technical Reports Server (NTRS)
Rushby, John; Whitehurst, R. Alan
1989-01-01
The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
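The frequency-scaling step can be illustrated in a few lines: the average frequency scale factor is the mean ratio of wing to equivalent-plate modal frequencies, and it is then applied to predict further wing modes from the plate analysis. All frequency values below are invented placeholders.

```python
import numpy as np

f_wing_known = np.array([2.1, 6.8, 9.5])   # known aircraft-wing modes [Hz]
f_plate = np.array([2.0, 6.5, 9.0])        # equivalent-plate modes [Hz]

# Average frequency scale factor: mean ratio of wing to plate frequencies.
s_freq = np.mean(f_wing_known / f_plate)

# Predict a higher wing mode from its equivalent-plate counterpart.
f_plate_mode4 = 14.2
print("predicted wing mode [Hz]:", round(s_freq * f_plate_mode4, 2))
```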
Fourier transformation microwave spectroscopy of the methyl glycolate-H2O complex
NASA Astrophysics Data System (ADS)
Fujitake, Masaharu; Tanaka, Toshihiro; Ohashi, Nobukimi
2018-01-01
The rotational spectrum of one conformer of the methyl glycolate-H2O complex has been measured by means of the pulsed jet Fourier transform microwave spectrometer. The observed a- and b-type transitions exhibit doublet splittings due to the internal rotation of the methyl group. On the other hand, most of the c-type transitions exhibit quartet splittings arising from the methyl internal rotation and the inversion motion between two equivalent conformations. The spectrum was analyzed using parameterized expressions of the Hamiltonian matrix elements derived by applying the tunneling matrix formalism. Based on the results obtained from ab initio calculation, the observed complex of methyl glycolate-H2O was assigned to the most stable conformer of the insertion complex, in which a non-planar seven-membered ring structure is formed by the intermolecular hydrogen bonds between the methyl glycolate and H2O subunits. The inversion motion observed in the c-type transitions is therefore a kind of ring-inversion motion between two equivalent conformations. Conformational flexibility, which corresponds to the ring-inversion between two equivalent conformations and to the isomerization between two possible conformers of the insertion complex, was investigated with the help of the ab initio calculation.
Evaluating a Control System Architecture Based on a Formally Derived AOCS Model
NASA Astrophysics Data System (ADS)
Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas
2010-08-01
Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.
State Event Models for the Formal Analysis of Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles
2014-01-01
The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple, full-control, mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Álvarez, Enrique; González-Martín, Sergio, E-mail: enrique.alvarez@uam.es, E-mail: sergio.gonzalez.martin@csic.es
2017-02-01
The on shell equivalence of first order and second order formalisms for the Einstein-Hilbert action does not hold for those actions quadratic in curvature. It would seem that by considering the connection and the metric as independent dynamical variables, there are no quartic propagators for any dynamical variable. This suggests that it is possible to get both renormalizability and unitarity along these lines. We have studied a particular instance of those theories, namely Weyl gravity. In this first paper we show that it is not possible to implement this program with the Weyl connection alone.
Convergence of quantum electrodynamics in a curved modification of Minkowski space.
Segal, I E; Zhou, Z
1994-01-01
The interaction and total Hamiltonians for quantum electrodynamics, in the interaction representation, are entirely regular self-adjoint operators in Hilbert space, in the universal covering manifold M of the conformal compactification of Minkowski space M0. (M is conformally equivalent to the Einstein universe E, in which M0 may be canonically imbedded.) In a fixed Lorentz frame this may be expressed as convergence in a spherical space with suitable periodic boundary conditions in time. The traditional relativistic theory is the formal limit of the present variant as the space curvature vanishes. PMID:11607455
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C
Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no corrections except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Friedberg, Germany) for providing the PTW microDiamond detector for this research.
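For reference, the Alfonso-style output correction factor is the ratio of the true field output factor to the detector reading ratio between the clinical and machine-specific reference fields. The numbers below are invented, chosen only so that the resulting k ≈ 0.969 echoes the 4 mm result quoted above.

```python
def k_correction(dose_clin, dose_msr, reading_clin, reading_msr):
    output_factor = dose_clin / dose_msr          # true dose ratio
    reading_ratio = reading_clin / reading_msr    # detector response ratio
    return output_factor / reading_ratio

# A detector over-responding by about 3% in the small field gives k < 1.
print("k =", round(k_correction(0.80, 1.00, 0.826, 1.00), 3))
```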
Improved formalism for precision Higgs coupling fits
NASA Astrophysics Data System (ADS)
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; Karl, Robert; List, Jenny; Ogawa, Tomohisa; Peskin, Michael E.; Tian, Junping
2018-03-01
Future e+e- colliders give the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e+e- data, based on the effective field theory description of corrections to the Standard Model. We apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e+e- colliders.
NASA Technical Reports Server (NTRS)
Boulet, C.; Ma, Qiancheng; Tipping, R. H.
2015-01-01
Starting from the refined Robert-Bonamy formalism [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)], we propose here an extension of line mixing studies to infrared absorptions of linear polyatomic molecules having stretching and bending modes. The present formalism does not neglect the internal degrees of freedom of the perturbing molecules, contrary to the energy corrected sudden (ECS) modeling, and enables one to calculate the whole relaxation matrix starting from the potential energy surface. Meanwhile, similar to the ECS modeling, the present formalism properly accounts for roles played by all the internal angular momenta in the coupling process, including the vibrational angular momentum. The formalism has been applied to the important case of CO2 broadened by N2. Applications to two kinds of vibrational bands (Σ → Σ and Σ → Π) have shown that the present results are in good agreement with both experimental data and results derived from the ECS model.
Formal Analysis of BPMN Models Using Event-B
NASA Astrophysics Data System (ADS)
Bryans, Jeremy W.; Wei, Wei
The use of business process models has gone far beyond documentation purposes. In the development of business applications, they can play the role of an artifact on which high-level properties can be verified and design errors can be revealed in an effort to reduce overhead at later software development and diagnosis stages. This paper demonstrates how formal verification may add value to the specification, design and development of business process models in an industrial setting. The analysis of these models is achieved via an algorithmic translation from the de facto standard business process modeling language BPMN to Event-B, a widely used formal language supported by the Rodin platform, which offers a range of simulation and verification technologies.
Directly executable formal models of middleware for MANET and Cloud Networking and Computing
NASA Astrophysics Data System (ADS)
Pashchenko, D. V.; Sadeq Jaafar, Mustafa; Zinkin, S. A.; Trokoz, D. A.; Pashchenko, T. U.; Sinev, M. P.
2016-04-01
The article considers some “directly executable” formal models that are suitable for the specification of computing and networking in the cloud environment and in other networks similar to MANET wireless networks. These models can be easily programmed and implemented on computer networks.
A Model-Driven Approach to Teaching Concurrency
ERIC Educational Resources Information Center
Carro, Manuel; Herranz, Angel; Marino, Julio
2013-01-01
We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference with other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so…
Applying Automated Theorem Proving to Computer Security
2008-03-01
CS96]”. Violations of policy can also be specified in this model. La Padula [Pad90] discusses a domain-independent formal model which implements a...Science Laboratory, SRI International, Menlo Park, CA, September 1999. Pad90. L.J. La Padula. Formal modeling in a generalized framework for access
From Informal Safety-Critical Requirements to Property-Driven Formal Validation
NASA Technical Reports Server (NTRS)
Cimatti, Alessandro; Roveri, Marco; Susi, Angelo; Tonetta, Stefano
2008-01-01
Most of the efforts in formal methods have historically been devoted to comparing a design against a set of requirements. The validation of the requirements themselves, however, has often been disregarded, and it can be considered a largely open problem, which poses several challenges. The first challenge is given by the fact that requirements are often written in natural language, and may thus contain a high degree of ambiguity. Despite the progress in Natural Language Processing techniques, the task of understanding a set of requirements cannot be automated, and must be carried out by domain experts, who are typically not familiar with formal languages. Furthermore, in order to retain a direct connection with the informal requirements, the formalization cannot follow standard model-based approaches. The second challenge lies in the formal validation of requirements. On one hand, it is not even clear what correctness criteria or high-level properties the requirements must fulfill. On the other hand, the expressivity of the language used in the formalization may go beyond the theoretical and/or practical capacity of state-of-the-art formal verification. In order to address these issues, we propose a new methodology that comprises a chain of steps, each supported by a specific tool. The main steps are the following. First, the informal requirements are split into basic fragments, which are classified into categories, and dependency and generalization relationships among them are identified. Second, the fragments are modeled using a visual language such as UML. The UML diagrams are both syntactically restricted (in order to guarantee a formal semantics), and enriched with a highly controlled natural language (to allow for modeling static and temporal constraints). Third, an automatic formal analysis phase iterates over the modeled requirements, by combining several complementary techniques: checking consistency; verifying whether the requirements entail some desirable properties; verifying whether the requirements are consistent with selected scenarios; diagnosing inconsistencies by identifying inconsistent cores; identifying vacuous requirements; constructing multiple explanations by enabling fault-tree analysis related to particular fault models; and verifying whether the specification is realizable.
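The consistency-checking step described above can be illustrated with a toy example: once requirement fragments are formalized, a satisfiability solver either produces a witness scenario or reports inconsistency. The sketch below uses the z3 solver and invented propositional requirements purely for illustration; it is not the authors' tool chain.

# Toy illustration of the "checking consistency" step: formalized
# requirement fragments are encoded as propositional constraints and a
# SAT solver looks for a model. Requirement names and the use of z3
# are illustrative assumptions.
from z3 import Bools, Implies, Not, Solver, sat

door_open, train_moving, brake_applied = Bools(
    "door_open train_moving brake_applied")

reqs = [
    Implies(door_open, Not(train_moving)),      # R1: no motion with open doors
    Implies(brake_applied, Not(train_moving)),  # R2: brakes stop the train
    Implies(door_open, brake_applied),          # R3: open doors engage brakes
]

s = Solver()
s.add(reqs)
if s.check() == sat:
    print("Requirements are consistent; sample scenario:", s.model())
else:
    # An unsat core over tracked assertions would identify the
    # "inconsistent core" mentioned in the abstract.
    print("Requirements are inconsistent")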
Characterization of heat transfer in nutrient materials, part 2
NASA Technical Reports Server (NTRS)
Cox, J. E.; Bannerot, R. B.; Chen, C. K.; Witte, L. C.
1973-01-01
A thermal model is analyzed that takes into account phase changes in the nutrient material. The behavior of fluids in low gravity environments is discussed along with low gravity heat transfer. Thermal contact resistance in the Skylab food heater is analyzed. The original model is modified to include: equivalent conductance due to radiation, radial equivalent conductance, wall equivalent conductance, and equivalent heat capacity. A constant wall-temperature model is presented.
Adolescent thinking à la Piaget: The formal stage.
Dulit, E
1972-12-01
Two of the formal-stage experiments of Piaget and Inhelder, selected largely for their closeness to the concepts defining the stage, were replicated with groups of average and gifted adolescents. This report describes the relevant Piagetian concepts (formal stage, concrete stage) in context, gives the methods and findings of this study, and concludes with a section discussing implications and making some reformulations which generally support but significantly qualify some of the central themes of the Piaget-Inhelder work. Fully developed formal-stage thinking emerges as far from commonplace among normal or average adolescents (by marked contrast with the impression created by the Piaget-Inhelder text, which chooses to report no middle or older adolescents who function at less than fully formal levels). In this respect, the formal stage differs appreciably from the earlier Piagetian stages, and early adolescence emerges as the age for which a "single path" model of cognitive development becomes seriously inadequate and a more complex model becomes essential. Formal-stage thinking seems best conceptualized, like most other aspects of psychological maturity, as a potentiality only partially attained by most and fully attained only by some.
Can the Equivalent Sphere Model Approximate Organ Doses in Space Radiation Environments?
NASA Technical Reports Server (NTRS)
Zi-Wei, Lin
2007-01-01
In space radiation calculations it is often useful to calculate the dose or dose equivalent in blood-forming organs (BFO), the skin, or the eye. It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have shown that a 5 cm sphere gives conservative dose values for BFO. In this study we use a deterministic radiation transport code with the Computerized Anatomical Man model to investigate whether the equivalent sphere model can approximate organ doses in space radiation environments. We find that for galactic cosmic ray environments the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent and marginally well for the BFO dose and the dose equivalent of the eye or the skin. For solar particle events the radius parameters for the organ dose equivalent increase with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. The ranges of the radius parameters are also shown, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases.
Towards Formal Implementation of PUS Standard
NASA Astrophysics Data System (ADS)
Ilić, D.
2009-05-01
As an effort to promote the reuse of on-board and ground systems, ESA developed a standard for packet telemetry and telecommand, PUS. It defines a set of standard service models with the corresponding structures of the associated telemetry and telecommand packets. Various missions can then choose to implement those standard PUS services that best conform to their specific requirements. In this paper we propose a formal development (based on the Event-B method) of reusable service patterns, which can be instantiated for concrete applications. Our formal models allow us to formally express and verify specific service properties, including various telecommand and telemetry packet structure validations.
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2015-04-01
Analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. Statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number ``zero'' belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e. the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist because the foundations of the theory of negative numbers are contrary to the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts ``negative number'' and ``negative sign of number'' represent a formal-logical error; (c) the signs ``plus'' and ``minus'' are only symbols of mathematical operations. The logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.
NASA Astrophysics Data System (ADS)
Kildea, John
This thesis describes a study of shielding design techniques used for radiation therapy facilities that employ megavoltage linear accelerators. Specifically, an evaluation of the shielding design formalism described in NCRP Report 151 was undertaken and a feasibility study for open-door 6 MV radiation therapy treatments in existing 6 MV and 18 MV treatment rooms at the Montreal General Hospital (MGH) was conducted. To evaluate the shielding design formalism of NCRP 151, barrier-attenuated equivalent doses were measured for several of the treatment rooms at the MGH and compared with expectations from NCRP 151 calculations. It was found that, while the insight and recommendations of NCRP 151 are very valuable, its dose predictions are not always correct. As such, the NCRP 151 methodology is best used in conjunction with physical measurements. The feasibility study for 6 MV open-door treatments made use of the NCRP 151 formalism, together with physical measurements for realistic 6 MV workloads. The results suggest that, dosimetrically, 6 MV open-door treatments are feasible. A conservative estimate for the increased dose at the door arising from such treatments is 0.1 mSv, with a 1/8 occupancy factor, as recommended in NCRP 151, included.
Revised Thomas-Fermi approximation for singular potentials
NASA Astrophysics Data System (ADS)
Dufty, James W.; Trickey, S. B.
2016-08-01
Approximations for the many-fermion free-energy density functional that include the Thomas-Fermi (TF) form for the noninteracting part lead to singular densities for singular external potentials (e.g., attractive Coulomb). This limitation of the TF approximation is addressed here by a formal map of the exact Euler equation for the density onto an equivalent TF form characterized by a modified Kohn-Sham potential. It is shown to be a "regularized" version of the Kohn-Sham potential, tempered by convolution with a finite-temperature response function. The resulting density is nonsingular, with the equilibrium properties obtained from the total free-energy functional evaluated at this density. This new representation is formally exact. Approximate expressions for the regularized potential are given to leading order in a nonlocality parameter, and the limiting behavior at high and low temperatures is described. The noninteracting part of the free energy in this approximation is the usual Thomas-Fermi functional. These results generalize and extend to finite temperatures the ground-state regularization by R. G. Parr and S. Ghosh [Proc. Natl. Acad. Sci. U.S.A. 83, 3577 (1986), 10.1073/pnas.83.11.3577] and by L. R. Pratt, G. G. Hoffman, and R. A. Harris [J. Chem. Phys. 88, 1818 (1988), 10.1063/1.454105] and formally systematize the finite-temperature regularization given by the latter authors.
Near Misses in Slot Machine Gambling Developed Through Generalization of Total Wins.
Belisle, Jordan; Dixon, Mark R
2016-06-01
The purpose of the present study was to evaluate the development of the near miss effect in slot machine gambling as a product of stimulus generalization from total wins. The study was conducted across two experiments. Twelve college students participated in the first experiment, which demonstrated that greater post-reinforcement pauses followed losing outcomes that were formally similar to total wins, relative to losing outcomes that were formally dissimilar [F(5, 7) = 5.24, p = .025], along a generalization gradient (R² = .96). Additionally, 11 out of 12 participants showed greater response latencies following near-misses than following total wins. Thirteen college students participated in the second experiment, which demonstrated that symbols that more saliently indicated a loss resulted in lower response latencies than functionally equivalent but visually dissimilar losing symbols [F(3, 10) = 15.50, p = .01]. A generalization gradient was observed across winning symbols (R² = .98), and an inverse of the gradient observed across winning symbols was observed across symbols that were the least formally similar (R² = .69). The present study replicates and extends previous research on near misses in slot machine gambling, and provides discussion around the clinical utility of such findings on the prevention of problem gambling.
Mapping the Drude polarizable force field onto a multipole and induced dipole model
NASA Astrophysics Data System (ADS)
Huang, Jing; Simmonett, Andrew C.; Pickard, Frank C.; MacKerell, Alexander D.; Brooks, Bernard R.
2017-10-01
The induced dipole and the classical Drude oscillator represent two major approaches for the explicit inclusion of electronic polarizability into force field-based molecular modeling and simulations. In this work, we explore the equivalency of these two models by comparing condensed phase properties computed using the Drude force field and a multipole and induced dipole (MPID) model. Presented is an approach to map the electrostatic model optimized in the context of the Drude force field onto the MPID model. Condensed phase simulations on water and 15 small model compounds show that without any reparametrization, the MPID model yields properties similar to the Drude force field, with both models yielding satisfactory reproduction of a range of experimental values and quantum mechanical data. Our results illustrate that the Drude oscillator model and the point induced dipole model are different representations of essentially the same physical model. However, results indicate the presence of small differences between the use of atomic multipoles and off-center charge sites. Additionally, results on the use of dispersion particle mesh Ewald further support its utility for treating long-range Lennard-Jones dispersion contributions in the context of polarizable force fields. The main motivation in demonstrating the transferability of parameters between the Drude and MPID models is that the more than 15 years of development of the Drude polarizable force field can now be used with the MPID formalism without the need for dual-thermostat integrators or self-consistent iterations. This opens up a wide range of new methodological opportunities for polarizable models.
SU-F-T-128: Dose-Volume Constraints for Particle Therapy Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, R; Smith, W; Hendrickson, K
2016-06-15
Purpose: Determine equivalent Organ at Risk (OAR) tolerance dose (TD) constraints for MV x-rays and particle therapy. Methods: Equivalent TD estimates for MV x-rays are determined from an isoeffect, regression-analysis of published and in-house constraints for various fractionation schedules (n fractions). The analysis yields an estimate of (α/β) for an OAR. To determine equivalent particle therapy constraints, the MV x-ray TD(n) values are divided by the RBE for DSB induction (RBE_DSB) or cell survival (RBE_S). Estimates of RBE_DSB are computed using the Monte Carlo Damage Simulation, and estimates of RBE_S are computed using the Repair-Misrepair-Fixation (RMF) model. A research build of the RayStation™ treatment planning system implementing the above model is used to estimate RBE_DSB for OARs of interest in 16 proton therapy patient plans (head and neck, thorax, prostate and brain). Results: The analysis gives an (α/β) estimate of about 20 Gy for the trachea and heart and 2–4 Gy for the esophagus, spine, and brachial plexus. Extrapolation of MV x-ray constraints (n = 1) to fast neutrons using RBE_DSB = 2.7 is in excellent agreement with clinical experience (n = 10 to 20). When conventional (n > 30) x-ray treatments are used as the reference radiation, fast neutron RBE increased to a maximum of 6. For comparison to a constant RBE of 1.1, the RayStation™ analysis gave estimates of proton RBE_DSB from 1.03 to 1.33 for OARs of interest. Conclusion: The presented system of models is a convenient formalism to synthesize from multiple sources of information a set of self-consistent plan constraints for MV x-ray and hadron therapy treatments. Estimates of RBE_DSB from the RayStation™ analysis differ substantially from 1.1 and vary among patients and treatment sites. A treatment planning system that incorporates patient- and anatomy-specific corrections in proton RBE would create opportunities to increase the therapeutic ratio. The research build of the RayStation used in the study was made available to the University of Washington free of charge. RaySearch Laboratories did not provide any monetary support for the reported studies.
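The isoeffect step above rests on the linear-quadratic (LQ) model, in which schedules with equal biologically effective dose (BED) are taken as equally tolerable. The sketch below shows the arithmetic of converting a tolerance dose between fractionation schedules and scaling it by an RBE estimate; the organ parameters and RBE value are illustrative assumptions, not the study's fitted values.

# A minimal sketch (not the authors' implementation) of the isoeffect
# logic: the linear-quadratic (LQ) model converts a tolerance dose
# delivered in n fractions into an equivalent dose for a different
# schedule, and dividing by an RBE estimate (e.g. RBE_DSB) maps an MV
# x-ray constraint onto a particle-therapy constraint. All numerical
# values below are illustrative assumptions.

def bed(total_dose, n_fractions, alpha_beta):
    """Biologically effective dose for n fractions of size d."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent total dose in 2 Gy fractions with the same BED."""
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# Example: a 60 Gy / 30 fx x-ray constraint for an (alpha/beta) = 3 Gy
# organ, mapped to a particle beam with an assumed RBE_DSB of 1.2.
td_xray = eqd2(60.0, 30, alpha_beta=3.0)
td_particle = td_xray / 1.2
print(f"EQD2 constraint: {td_xray:.1f} Gy; particle: {td_particle:.1f} Gy")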
Improved formalism for precision Higgs coupling fits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon
Future e+e- colliders give the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e+e- data, based on the effective field theory description of corrections to the Standard Model. Lastly, we apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e+e- colliders.
δM formalism and anisotropic chaotic inflation power spectrum
NASA Astrophysics Data System (ADS)
Talebian-Ashkezari, A.; Ahmadi, N.
2018-05-01
A new analytical approach to linear perturbations in anisotropic inflation has been introduced in [A. Talebian-Ashkezari, N. Ahmadi and A.A. Abolhasani, JCAP 03 (2018) 001] under the name of δM formalism. In this paper we apply the mentioned approach to a model of anisotropic inflation driven by a scalar field, coupled to the kinetic term of a vector field with a U(1) symmetry. The δM formalism provides an efficient way of computing tensor-tensor, tensor-scalar as well as scalar-scalar 2-point correlations that are needed for the analysis of the observational features of an anisotropic model on the CMB. A comparison between δM results and the tedious calculations using the in-in formalism shows the aptitude of the δM formalism in calculating accurate two-point correlation functions between physical modes of the system.
Learning Goal Orientation, Formal Mentoring, and Leadership Competence in HRD: A Conceptual Model
ERIC Educational Resources Information Center
Kim, Sooyoung
2007-01-01
Purpose: The purpose of this paper is to suggest a conceptual model of formal mentoring as a leadership development initiative including "learning goal orientation", "mentoring functions", and "leadership competencies" as key constructs of the model. Design/methodology/approach: Some empirical studies, though there are not many, will provide…
Cuetos, Alejandro; Patti, Alessandro
2015-08-01
We propose a simple but powerful theoretical framework to quantitatively compare Brownian dynamics (BD) and dynamic Monte Carlo (DMC) simulations of multicomponent colloidal suspensions. By extending our previous study focusing on monodisperse systems of rodlike colloids, here we generalize the formalism described there to multicomponent colloidal mixtures and validate it by investigating the dynamics in isotropic and liquid crystalline phases containing spherical and rodlike particles. In order to investigate the dynamics of multicomponent colloidal systems by DMC simulations, it is key to determine the elementary time step of each species and establish a unique timescale. This is crucial to consistently study the dynamics of colloidal particles with different geometry. By analyzing the mean-square displacement, the orientation autocorrelation functions, and the self part of the van Hove correlation functions, we show that DMC simulation is a very convenient and reliable technique to describe the stochastic dynamics of any multicomponent colloidal system. Our theoretical formalism can be easily extended to any colloidal system containing size and/or shape polydisperse particles.
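As a schematic illustration of the timescale-matching idea described above (and only that; the authors' multicomponent formalism is more refined), the physical duration of one DMC cycle can be fixed by requiring the simulated mean-square displacement to reproduce the Brownian result MSD = 2Dt in one dimension. Parameter values below are arbitrary assumptions.

# Toy sketch of timescale matching between dynamic Monte Carlo and
# Brownian dynamics: calibrate the physical time per MC cycle so that
# the simulated mean-square displacement (MSD) matches MSD = 2*D*t.
# Not the authors' formalism; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 0.05          # assumed diffusion coefficient (um^2/s)
delta = 0.01      # maximum MC displacement per cycle (um)
n_particles, n_cycles = 2000, 500

# Unbiased displacement moves drawn uniformly from [-delta, +delta];
# with no interactions every move is accepted.
moves = rng.uniform(-delta, delta, size=(n_cycles, n_particles))
msd_per_cycle = np.mean(np.cumsum(moves, axis=0)**2, axis=1)[-1] / n_cycles

# Match MSD(one cycle) = 2*D*dt to get the physical time per MC cycle.
dt_cycle = msd_per_cycle / (2.0 * D)
print(f"one DMC cycle ~ {dt_cycle:.2e} s of Brownian dynamics")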
Cohomology and deformation of 𝔞𝔣𝔣(1|1) acting on differential operators
NASA Astrophysics Data System (ADS)
Basdouri, Khaled; Omri, Salem
We consider the 𝔞𝔣𝔣(1|1)-module structure on the spaces of differential operators acting on the spaces of weighted densities. We compute the second differential cohomology of the Lie superalgebra 𝔞𝔣𝔣(1|1) with coefficients in differential operators acting on the spaces of weighted densities. We classify formal deformations of the 𝔞𝔣𝔣(1|1)-module structure on the superspaces of symbols of differential operators. We prove that any formal deformation of a given infinitesimal deformation of this structure is equivalent to its infinitesimal part. This work is the simplest superization of a result by Basdouri [Deformation of 𝔞𝔣𝔣(1)-modules of pseudo-differential operators and symbols, J. Pseudo-differ. Oper. Appl. 7(2) (2016) 157-179] and an application of work by Basdouri et al. [First cohomology of 𝔞𝔣𝔣(1) and 𝔞𝔣𝔣(1|1) acting on linear differential operators, Int. J. Geom. Methods Mod. Phys. 13(1) (2016)].
Classifying quantum entanglement through topological links
NASA Astrophysics Data System (ADS)
Quinta, Gonçalo M.; André, Rui
2018-04-01
We propose an alternative classification scheme for quantum entanglement based on topological links. This is done by identifying a nonrigid ring to a particle, attributing the act of cutting and removing a ring to the operation of tracing out the particle, and associating linked rings to entangled particles. This analogy naturally leads us to a classification of multipartite quantum entanglement based on all possible distinct links for a given number of rings. To determine all different possibilities, we develop a formalism that associates any link to a polynomial, with each polynomial thereby defining a distinct equivalence class. To demonstrate the use of this classification scheme, we choose qubit quantum states as our example of physical system. A possible procedure to obtain qubit states from the polynomials is also introduced, providing an example state for each link class. We apply the formalism for the quantum systems of three and four qubits and demonstrate the potential of these tools in a context of qubit networks.
ERIC Educational Resources Information Center
Kalechofsky, Robert
This research paper proposes several mathematical models which help clarify Piaget's theory of cognition on the concrete and formal operational stages. Some modified lattice models were used for the concrete stage and a combined Boolean Algebra and group theory model was used for the formal stage. The researcher used experiments cited in the…
Huang, Yihua; Huang, Wenjin; Wang, Qinglei; Su, Xujian
2013-07-01
The equivalent circuit model of a piezoelectric transformer is useful in designing and optimizing the related driving circuits. Based on previous work, an equivalent circuit model for a circular flexural-vibration-mode piezoelectric transformer with moderate thickness is proposed and validated by finite element analysis. The input impedance, voltage gain, and efficiency of the transformer are determined through computation. The basic behaviors of the transformer are shown by numerical results.
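A sketch of the kind of lumped equivalent circuit the abstract refers to: an ideal transformer of ratio N coupling a series R-L-C mechanical branch to a capacitively loaded output, from which the voltage gain is computed versus frequency. The component values are illustrative assumptions, not the parameters identified for the transformer in the paper.

# Generic lumped-element piezoelectric-transformer sketch: series R-L-C
# mechanical branch, ideal transformer of ratio N, output capacitance
# C_out in parallel with load R_load. Component values are illustrative.
import numpy as np

R, L, C = 5.0, 60e-3, 0.4e-9           # mechanical branch (ohm, H, F)
N, C_out, R_load = 0.9, 1.2e-9, 100e3  # turns ratio, output cap, load

def voltage_gain(f):
    w = 2 * np.pi * f
    z_mech = R + 1j * w * L + 1.0 / (1j * w * C)
    z_sec = 1.0 / (1j * w * C_out + 1.0 / R_load)  # C_out parallel R_load
    z_ref = z_sec / N**2                           # reflected to primary
    return abs(N * z_ref / (z_mech + z_ref))

# Sweep around the mechanical resonance f0 = 1/(2*pi*sqrt(L*C)) ~ 32.5 kHz.
for f in np.linspace(30e3, 36e3, 7):
    print(f"{f/1e3:6.1f} kHz  gain = {voltage_gain(f):.2f}")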
Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B
NASA Technical Reports Server (NTRS)
Yeganefard, Sanaz; Butler, Michael; Rezazadeh, Abdolbaghi
2010-01-01
Recently a set of guidelines, or cookbook, has been developed for the modelling and refinement of control problems in Event-B. The Event-B formal method is used for system-level modelling by defining states of a system and events which act on these states. It also supports refinement of models. This cookbook is intended to systematize the process of modelling and refining a control problem by distinguishing environment, controller and command phenomena. Our main objective in this paper is to investigate and evaluate the usefulness and effectiveness of this cookbook by following it throughout the formal modelling of a cruise control system found in cars. The outcomes identify the benefits of the cookbook and provide guidance to its future users.
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yavari, M., E-mail: yavari@iaukashan.ac.ir
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods which will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
Research on the equivalence between digital core and rock physics models
NASA Astrophysics Data System (ADS)
Yin, Xingyao; Zheng, Ying; Zong, Zhaoyun
2017-06-01
In this paper, we calculate the elastic moduli of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions of the equivalence relationships. The influences of the pore aspect ratio and consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and the Gassmann model. For pure sandstones, effective medium theory models (SCA and DEM) and the digital core models are equivalent in cases when the pore aspect ratio is within a certain range, and dry frame models (the Nur and Pride models) and the digital core model are equivalent in cases when the consolidation coefficient is a specific value. According to the equivalence relationships, the comparison of the elastic modulus results of the effective medium theory and digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic moduli in shaly sandstones. When structural shale is present in the sandstone, the elastic moduli of the digital cores are in reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot describe the changes in the stiffness of the pore space precisely. Digital rock physics describes rock features such as pore aspect ratio, consolidation coefficient and rock stiffness. Therefore, digital core technology can, to some extent, replace the theoretical rock physics models because the results are more accurate than those of the theoretical models.
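For context, the Gassmann relation mentioned above connects the dry-frame and fluid-saturated bulk moduli; the digital-core finite element results are benchmarked against exactly this kind of closed-form prediction. A minimal sketch with illustrative quartz-sand values (assumptions, not the paper's data):

# Gassmann fluid substitution: the saturated bulk modulus follows from
# the dry-frame modulus, the mineral modulus, the fluid modulus and the
# porosity; the shear modulus is unchanged by the fluid.

def gassmann_k_sat(k_dry, k_mineral, k_fluid, phi):
    """Saturated bulk modulus (same units as inputs, e.g. GPa)."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = (phi / k_fluid
           + (1.0 - phi) / k_mineral
           - k_dry / k_mineral**2)
    return k_dry + num / den

# Quartz-sand example: K_mineral = 37 GPa, dry frame 12 GPa,
# brine K_fluid = 2.25 GPa, porosity 20%.
print(f"K_sat = {gassmann_k_sat(12.0, 37.0, 2.25, 0.20):.2f} GPa")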
Relativistic Transverse Gravitational Redshift
NASA Astrophysics Data System (ADS)
Mayer, A. F.
2012-12-01
The parametrized post-Newtonian (PPN) formalism is a tool for quantitative analysis of the weak gravitational field based on the field equations of general relativity. This formalism and its ten parameters provide the practical theoretical foundation for the evaluation of empirical data produced by space-based missions designed to map and better understand the gravitational field (e.g., GRAIL, GRACE, GOCE). Accordingly, mission data is interpreted in the context of the canonical PPN formalism; unexpected, anomalous data are explained as similarly unexpected but apparently real physical phenomena, which may be characterized as ``gravitational anomalies'', or by various sources contributing to the total error budget. Another possibility, which is typically not considered, is a small modeling error in canonical general relativity. The concept of the idealized point-mass spherical equipotential surface, which originates with Newton's law of gravity, is preserved in Einstein's synthesis of special relativity with accelerated reference frames in the form of the field equations. It was not previously realized that the fundamental principles of relativity invalidate this concept and with it the idea that the gravitational field is conservative (i.e., zero net work is done on any closed path). The ideal radial free fall of a material body from arbitrarily-large range to a point on such an equipotential surface (S) determines a unique escape-velocity vector of magnitude v collinear to the acceleration vector of magnitude g at this point. For two such points on S separated by angle dφ, the Equivalence Principle implies distinct reference frames experiencing inertial acceleration of identical magnitude g in different directions in space. The complete equivalence of these inertially-accelerated frames to their analogous frames at rest on S requires evaluation at instantaneous velocity v relative to a local inertial observer. Because these velocity vectors are not parallel, a symmetric energy potential exists between the frames that is quantified by the instantaneous Δv = v·dφ between them; in order for either frame to become indistinguishable from the other, such that their respective velocity and acceleration vectors are parallel, a change in velocity is required. While the qualitative features of general relativity imply this phenomenon (i.e., a symmetric potential difference between two points on a Newtonian `equipotential surface' that is similar to a friction effect), it is not predicted by the field equations due to a modeling error concerning time. This is an error of omission; time has fundamental geometric properties implied by the principles of relativity that are not reflected in the field equations. Where b is the radius and g is the gravitational acceleration characterizing a spherical geoid S of an ideal point-source gravitational field, an elegant derivation that rests on first principles shows that for two points at rest on S separated by a distance d << b, a symmetric relativistic redshift exists between these points of magnitude z = gd²/(bc²), which over 1 km at Earth sea level yields z ≈ 10⁻¹⁷. It can be tested with a variety of methods, in particular laser interferometry. A more sophisticated derivation yields a considerably more complex predictive formula for any two points in a gravitational field.
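As a quick check of the quoted magnitude, the formula z = gd²/(bc²) can be evaluated directly for two points 1 km apart at Earth sea level:

# Worked arithmetic for the quoted prediction z = g*d**2 / (b*c**2).
g = 9.81          # m/s^2, surface gravitational acceleration
b = 6.371e6       # m, Earth radius
c = 2.998e8       # m/s
d = 1.0e3         # m, separation along the geoid

z = g * d**2 / (b * c**2)
print(f"z = {z:.2e}")   # ~1.7e-17, consistent with z ~ 10^-17 above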
The importance of being equivalent: Newton's two models of one-body motion
NASA Astrophysics Data System (ADS)
Pourciau, Bruce
2004-05-01
As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.
A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.
Grando, M Adela; Glasspool, David; Fox, John
2012-01-01
To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic, but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results to a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.
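To make the notion of a PN-based workflow pattern concrete, the toy token-game interpreter below encodes the "parallel split" pattern as a Petri net and fires enabled transitions; it only illustrates PN semantics and is not part of the PROforma analysis.

# Tiny token-game interpreter for a Petri-net workflow pattern
# ("parallel split"): transition -> (input places, output places).
parallel_split = {
    "split": (["start"], ["branch_a", "branch_b"]),
    "do_a":  (["branch_a"], ["done_a"]),
    "do_b":  (["branch_b"], ["done_b"]),
}

def enabled(marking, net):
    return [t for t, (pre, _) in net.items()
            if all(marking.get(p, 0) > 0 for p in pre)]

def fire(marking, net, t):
    pre, post = net[t]
    m = dict(marking)
    for p in pre:
        m[p] -= 1          # consume input tokens
    for p in post:
        m[p] = m.get(p, 0) + 1  # produce output tokens
    return m

m = {"start": 1}
while enabled(m, parallel_split):
    t = enabled(m, parallel_split)[0]
    m = fire(m, parallel_split, t)
    print(f"fired {t:5s} -> {m}")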
The Archival Photograph and Its Meaning: Formalisms for Modeling Images
ERIC Educational Resources Information Center
Benson, Allen C.
2009-01-01
This article explores ontological principles and their potential applications in the formal description of archival photographs. Current archival descriptive practices are reviewed and the larger question is addressed: do archivists who are engaged in describing photographs need a more formalized system of representation, or do existing encoding…
Thermodynamically Feasible Kinetic Models of Reaction Networks
Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
The dynamics of biological reaction networks are strongly constrained by thermodynamics. A holistic understanding of their behavior and regulation requires mathematical models that observe these constraints. However, kinetic models may easily violate the constraints imposed by the principle of detailed balance, if no special care is taken. Detailed balance demands that in thermodynamic equilibrium all fluxes vanish. We introduce a thermodynamic-kinetic modeling (TKM) formalism that adapts the concepts of potentials and forces from irreversible thermodynamics to kinetic modeling. In the proposed formalism, the thermokinetic potential of a compound is proportional to its concentration. The proportionality factor is a compound-specific parameter called capacity. The thermokinetic force of a reaction is a function of the potentials. Every reaction has a resistance that is the ratio of thermokinetic force and reaction rate. For mass-action type kinetics, the resistances are constant. Since it relies on the thermodynamic concepts of potentials and forces, the TKM formalism structurally observes detailed balance for all values of capacities and resistances. Thus, it provides an easy way to formulate physically feasible, kinetic models of biological reaction networks. The TKM formalism is useful for modeling large biological networks that are subject to many detailed balance relations. PMID:17208985
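A minimal sketch of the TKM quantities as described above: each compound has a thermokinetic potential proportional to its concentration via its capacity, each reaction has a force computed from the potentials and a resistance, and rate = force / resistance. The electrical-analogy reading u = c/C and the product form of the force below are assumptions of this sketch, not quotes of the paper.

# TKM-style rate law sketch: potential u = c / capacity (electrical
# analogy: concentration = capacity * potential), force = product of
# substrate potentials minus product of product potentials, and
# rate = force / resistance, so the rate vanishes at detailed balance.
import math

capacities = {"A": 2.0, "B": 1.0}        # compound-specific parameters
concentrations = {"A": 4.0, "B": 1.0}    # current state (e.g. mM)

def potential(x):
    return concentrations[x] / capacities[x]

def rate(substrates, products, resistance):
    force = (math.prod(potential(s) for s in substrates)
             - math.prod(potential(p) for p in products))
    return force / resistance

# Reaction A <-> B with constant resistance (mass-action kinetics).
v = rate(["A"], ["B"], resistance=0.5)
print(f"net rate = {v:.2f}; vanishes when u_A = u_B (detailed balance)")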
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
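The "equivalent continuous level" targeted by the simulation is the standard Leq aggregation of time-varying noise. A minimal sketch of that calculation, with invented activity levels and durations:

# Equivalent continuous sound level:
# Leq = 10*log10( (1/T) * sum_i t_i * 10^(L_i/10) ).
# The activity levels and durations below are illustrative assumptions.
import math

# (noise level in dB(A), duration in minutes) for simulated activities
segments = [(88, 20), (75, 30), (92, 10)]  # e.g. excavator, idle, breaker

total_t = sum(t for _, t in segments)
leq = 10 * math.log10(
    sum(t * 10 ** (level / 10) for level, t in segments) / total_t)
print(f"Leq over {total_t} min = {leq:.1f} dB(A)")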
On the Adequacy of Current Empirical Evaluations of Formal Models of Categorization
ERIC Educational Resources Information Center
Wills, Andy J.; Pothos, Emmanuel M.
2012-01-01
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts.…
Reasoning with Conditionals: A Test of Formal Models of Four Theories
ERIC Educational Resources Information Center
Oberauer, Klaus
2006-01-01
The four dominant theories of reasoning from conditionals are translated into formal models: The theory of mental models (Johnson-Laird, P. N., & Byrne, R. M. J. (2002). Conditionals: a theory of meaning, pragmatics, and inference. "Psychological Review," 109, 646-678), the suppositional theory (Evans, J. S. B. T., & Over, D. E. (2004). "If."…
ERIC Educational Resources Information Center
Grabinska, Teresa; Zielinska, Dorota
2010-01-01
The authors examine language from the perspective of models of the empirical sciences, a discipline that studies the relationship between reality, models, and formalisms. Such a perspective allows one to notice that linguistics approached within the classical framework shares a number of problems with other experimental sciences studied initially…
Managing Risk in Mobile Applications with Formal Security Policies
2013-04-01
Alternatively, Breaux and Powers (2009) found the Business Process Modeling Notation (BPMN), a declarative language for describing business processes, to be...the Business Process Execution Language (BPEL), preferred as the candidate formal semantics for BPMN, only works for limited classes of BPMN models
A Formal Valuation Framework for Emotions and Their Control.
Huys, Quentin J M; Renz, Daniel
2017-09-15
Computational psychiatry aims to apply mathematical and computational techniques to help improve psychiatric care. To achieve this, the phenomena under scrutiny should be within the scope of formal methods. As emotions play an important role across many psychiatric disorders, such computational methods must encompass emotions. Here, we consider formal valuation accounts of emotions. We focus on the fact that the flexibility of emotional responses and the nature of appraisals suggest the need for a model-based valuation framework for emotions. However, resource limitations make plain model-based valuation impossible and require metareasoning strategies to apportion cognitive resources adaptively. We argue that emotions may implement such metareasoning approximations by restricting the range of behaviors and states considered. We consider the processes that guide the deployment of the approximations, discerning between innate, model-free, heuristic, and model-based controllers. A formal valuation and metareasoning framework may thus provide a principled approach to examining emotions. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Formal Techniques for Synchronized Fault-Tolerant Systems
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Butler, Ricky W.
1992-01-01
We present the formal verification of synchronizing aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization is based on an extended state machine model incorporating snapshots of local processors' clocks.
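The fault-masking idea the RCP relies on can be illustrated in a few lines: each redundant channel proposes a value and the voter takes the majority, purging a transient fault on one channel. This is a schematic sketch, not the verified EHDM model.

# Toy illustration of NMR-style majority voting across redundant channels.
from collections import Counter

def majority_vote(values):
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) / 2 else None  # no majority

# Four redundant channels; one suffers a transient fault.
print(majority_vote([42, 42, 17, 42]))  # -> 42 (fault masked)
print(majority_vote([1, 2, 3, 4]))      # -> None (no majority exists)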
NASA Astrophysics Data System (ADS)
Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong
2017-11-01
Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual land surface model (LSM) would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar, whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with the model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.
Cultural adaptation of the Test of Narrative Language (TNL) into Brazilian Portuguese.
Rossi, Natalia Freitas; Lindau, Tâmara de Andrade; Gillam, Ronald Bradley; Giacheti, Célia Maria
To accomplish the translation and cultural adaptation of the Test of Narrative Language (TNL) into Brazilian Portuguese. The TNL is a formal instrument which assesses narrative comprehension and oral narration of children between the ages of 5-0 and 11-11 (years-months). The TNL translation and adaptation process had the following steps: (1) translation into the target language; (2) summary of the translated versions; (3) back-translation; (4) checking of the conceptual, semantic and cultural equivalence process; and (5) pilot study (56 children within the test age range and from both genders). The adapted version maintained the same structure as the original version: number of tasks (three for comprehension and three for oral narration), narrative formats (no picture, sequenced pictures and single picture) and scoring system. There were no adjustments to the pictures. The "McDonald's Story" was replaced by the "Snack Bar Story" to meet the semantic and experiential equivalence of the target population. The other stories had semantic and grammatical adjustments. A statistically significant difference was found when comparing the raw scores (comprehension, narration and total) of age groups from the adapted version. Adjustments were required to meet the equivalence between the original and the translated versions. The adapted version showed it has the potential to identify differences in the oral narratives of children in the age range provided by the test. Measurement equivalence for validation and test standardization is in progress and will be able to supplement the study outcomes.
Some Implications of a Behavioral Analysis of Verbal Behavior for Logic and Mathematics
2013-01-01
The evident power and utility of the formal models of logic and mathematics pose a puzzle: Although such models are instances of verbal behavior, they are also essentialistic. But behavioral terms, and indeed all products of selection contingencies, are intrinsically variable and in this respect appear to be incommensurate with essentialism. A distinctive feature of verbal contingencies resolves this puzzle: The control of behavior by the nonverbal environment is often mediated by the verbal behavior of others, and behavior under control of verbal stimuli is blind to the intrinsic variability of the stimulating environment. Thus, words and sentences serve as filters of variability and thereby facilitate essentialistic model building and the formal structures of logic, mathematics, and science. Autoclitic frames, verbal chains interrupted by interchangeable variable terms, are ubiquitous in verbal behavior. Variable terms can be substituted in such frames almost without limit, a feature fundamental to formal models. Consequently, our fluency with autoclitic frames fosters generalization to formal models, which in turn permit deduction and other kinds of logical and mathematical inference. PMID:28018038
A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes
NASA Astrophysics Data System (ADS)
Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria
In this paper we discuss the importance of ensuring that business processes are robust and agile at the same time. To this end, we review the way business processes are managed and offer a flexible way to model them, so that changes in regulations are handled through self-healing mechanisms. Such changes may raise exceptions at run-time if not properly reflected in the processes. We therefore propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized as a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Postcondition-Post event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.
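A minimal sketch of what an ECAPE-style rule might look like in code is given below; the rule fields and the tiny firing function are illustrative inventions, not the authors' formalism, which is defined formally rather than in a programming language.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class EcapeRule:
        """One business rule: Event-Condition-Action-Postcondition-Post event."""
        event: str                              # triggering event name
        condition: Callable[[dict], bool]       # guard over the process state
        action: Callable[[dict], None]          # state update
        postcondition: Callable[[dict], bool]   # must hold after the action
        post_event: str                         # event emitted on success

    def fire(rule: EcapeRule, event: str, state: dict) -> Optional[str]:
        """Fire the rule when triggered; a failed postcondition is the hook
        where a self-healing (exception-handling) rule would take over."""
        if event == rule.event and rule.condition(state):
            rule.action(state)
            if not rule.postcondition(state):
                raise RuntimeError("postcondition violated: invoke healing rule")
            return rule.post_event
        return None

    # Usage: approve an order only when it is within the credit limit.
    approve = EcapeRule(
        event="order_received",
        condition=lambda s: s["amount"] <= s["credit_limit"],
        action=lambda s: s.update(status="approved"),
        postcondition=lambda s: s["status"] == "approved",
        post_event="order_approved",
    )
    state = {"amount": 80, "credit_limit": 100, "status": "new"}
    print(fire(approve, "order_received", state))   # -> order_approved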
ADGS-2100 Adaptive Display and Guidance System Window Manager Analysis
NASA Technical Reports Server (NTRS)
Whalen, Mike W.; Innis, John D.; Miller, Steven P.; Wagner, Lucas G.
2006-01-01
Recent advances in modeling languages have made it feasible to formally specify and analyze the behavior of large system components. Synchronous data flow languages, such as Lustre, SCR, and RSML-e are particularly well suited to this task, and commercial versions of these tools such as SCADE and Simulink are growing in popularity among designers of safety critical systems, largely due to their ability to automatically generate code from the models. At the same time, advances in formal analysis tools have made it practical to formally verify important properties of these models to ensure that design defects are identified and corrected early in the lifecycle. This report describes how these tools have been applied to the ADGS-2100 Adaptive Display and Guidance Window Manager being developed by Rockwell Collins Inc. This work demonstrates how formal methods can be easily and cost-efficiently used to remove defects early in the design cycle.
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework for the formal representation of ICD-10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology that uses the formally described ICD-10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD-10. Then we expanded the structured ICD-10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model used for the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD-11 revision.
ERIC Educational Resources Information Center
Magis, David
2015-01-01
The purpose of this note is to study the equivalence of observed and expected (Fisher) information functions with polytomous item response theory (IRT) models. It is established that observed and expected information functions are equivalent for the class of divide-by-total models (including partial credit, generalized partial credit, rating…
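The equivalence is easy to check numerically for a divide-by-total model: writing P(X = k | theta) proportional to exp(k*theta - c_k), the second derivative of the log-likelihood does not depend on the observed category, so observed and expected information coincide. The sketch below uses invented category parameters.

    import numpy as np

    c = np.array([0.0, -0.5, 0.3, 1.1])    # hypothetical step parameters, categories 0..3

    def category_probs(theta):
        z = np.arange(len(c)) * theta - c
        e = np.exp(z - z.max())             # numerically stable softmax
        return e / e.sum()

    def log_lik(theta, k):
        return np.log(category_probs(theta)[k])

    theta, h = 0.7, 1e-4
    p = category_probs(theta)
    ks = np.arange(len(c))
    expected_info = p @ ks**2 - (p @ ks) ** 2           # Var(K) under the model

    for k in ks:                                        # observed info = -l''(theta)
        obs = -(log_lik(theta + h, k) - 2 * log_lik(theta, k)
                + log_lik(theta - h, k)) / h**2
        print(f"k={k}: observed {obs:.5f}  vs expected {expected_info:.5f}")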
The Factor Structure of Concrete and Formal Operations: A Confirmation of Piaget.
ERIC Educational Resources Information Center
Gray, William M.
Piaget has hypothesized that concrete and formal operations can be described by specific logical models. The present study focused on assessing various aspects of four concrete operational groupings and two variations of two formal operational characteristics. Six hundred twenty-two 9-14 year old students participating in the Human Sciences…
Why formal learning theory matters for cognitive science.
Fulop, Sean; Chater, Nick
2013-01-01
This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff's Universal Prior Distribution for Bayesian learning. Gold's model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both computational and representational complexity. The article concludes with a description of how semi-supervised learning can be applied to the study of cognitive learning models. Throughout this overview, the specific points raised by our contributing authors are connected to the models and methods under review. Copyright © 2013 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Divito, Ben L.; Holloway, C. Michael
1994-01-01
In this paper the design and formal verification of the lower levels of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications, are presented. The RCP uses NMR-style redundancy to mask faults and internal majority voting to flush the effects of transient faults. Two new layers of the RCP hierarchy are introduced: the Minimal Voting refinement (DA_minv) of the Distributed Asynchronous (DA) model and the Local Executive (LE) Model. Both the DA_minv model and the LE model are specified formally and have been verified using the Ehdm verification system. All specifications and proofs are available electronically via the Internet using anonymous FTP or World Wide Web (WWW) access.
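A toy version of the internal majority voting can make the fault-masking idea concrete; this bitwise three-way voter is only a sketch, not the RCP design.

    def majority_vote(channels):
        """Bitwise majority of an odd number of redundant integer outputs.
        A single faulty channel is masked, and because each frame is re-voted
        from fresh inputs, the effect of a transient fault is flushed."""
        assert len(channels) % 2 == 1
        width = max(c.bit_length() for c in channels)
        result = 0
        for bit in range(width):
            ones = sum((c >> bit) & 1 for c in channels)
            if ones > len(channels) // 2:
                result |= 1 << bit
        return result

    # One faulty replica (0xFF) is outvoted by the two agreeing ones.
    print(hex(majority_vote([0x2A, 0x2A, 0xFF])))   # -> 0x2a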
Formal Methods of V&V of Partial Specifications: An Experience Report
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Callahan, John
1997-01-01
This paper describes our work exploring the suitability of formal specification methods for independent verification and validation (IV&V) of software specifications for large, safety critical systems. An IV&V contractor often has to perform rapid analysis on incomplete specifications, with no control over how those specifications are represented. Lightweight formal methods show significant promise in this context, as they offer a way of uncovering major errors without the burden of full proofs of correctness. We describe an experiment in applying the SCR method to test consistency properties of a partial model of the requirements for Fault Detection, Isolation and Recovery on the space station. We conclude that the insights gained from formalizing a specification are valuable, and that it is the process of formalization, rather than the end product, that is important. It was only necessary to build enough of the formal model to test the properties in which we were interested. Maintenance of fidelity between multiple representations of the same requirements (as they evolve) is still a problem, and deserves further study.
NASA Astrophysics Data System (ADS)
White, Irene; Lorenzi, Francesca
2016-12-01
Creativity has been emerging as a key concept in educational policies since the mid-1990s, with many Western countries restructuring their education systems to embrace innovative approaches likely to stimulate creative and critical thinking. But despite current intentions of putting more emphasis on creativity in education policies worldwide, there is still a relative dearth of viable models which capture the complexity of creativity and the conditions for its successful infusion into formal school environments. The push for creativity is in direct conflict with the results-driven, competitive, performance-oriented culture which continues to dominate formal education systems. The authors of this article argue that incorporating creativity into mainstream education is a complex task and is best tackled by taking a systematic and multifaceted approach. They present a multidimensional model designed to help educators tackle the challenges of promoting creativity. Their model encompasses three distinct yet interrelated dimensions of a creative space: physical, social-emotional and critical. The authors use the metaphor of space to refer to the interplay of the three identified dimensions. Drawing on confluence approaches to the theorisation of creativity, this paper exemplifies the development of a model against the background of a growing trend towards systems theories. The model aims to help systematise creativity by offering parameters, derived from the evaluation of an example offered by a non-formal educational environment, for the development of creative environments within mainstream secondary schools.
Formal Analysis of Self-Efficacy in Job Interviewee’s Mental State Model
NASA Astrophysics Data System (ADS)
Ajoge, N. S.; Aziz, A. A.; Yusof, S. A. Mohd
2017-08-01
This paper presents a formal analysis approach for a self-efficacy model of an interviewee's mental state during a job interview session. Self-efficacy is a construct that has been hypothesised to combine with motivation and interview anxiety to determine the interviewee's state. The conceptual model was built based on psychological theories and models related to self-efficacy. A number of well-known relations between events and the course of self-efficacy are summarized from the literature, and it is shown that the proposed model exhibits those patterns. In addition, the formal model has been mathematically analysed to find out which stable situations exist. Finally, it is pointed out how this model can be used in a software agent or robot-based platform. Such a platform can provide an interview coaching approach in which support is given to users based on their individual mental state during interview sessions.
A Model for Semantic Equivalence Discovery for Harmonizing Master Data
NASA Astrophysics Data System (ADS)
Piprani, Baba
IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining the equivalency of multiple data instances against a given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application show how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution, including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model and a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.
Formal Methods Tool Qualification
NASA Technical Reports Server (NTRS)
Wagner, Lucas G.; Cofer, Darren; Slind, Konrad; Tinelli, Cesare; Mebsout, Alain
2017-01-01
Formal methods tools have been shown to be effective at finding defects in safety-critical digital systems, including avionics systems. The publication of DO-178C and the accompanying formal methods supplement DO-333 allows applicants to obtain certification credit for the use of formal methods without providing justification for them as an alternative method. This project conducted an extensive study of existing formal methods tools, identifying obstacles to their qualification and proposing mitigations for those obstacles. Further, it interprets the qualification guidance for existing formal methods tools and provides case study examples for open source tools. This project also investigates the feasibility of verifying formal methods tools by generating proof certificates which capture the proof of the formal methods tool's claim and can be checked by an independent proof-certificate checking tool. Finally, the project investigates the feasibility of qualifying this proof certificate checker, in the DO-330 framework, in lieu of qualifying the model checker itself.
Entanglement and nonclassical properties of hypergraph states
NASA Astrophysics Data System (ADS)
Gühne, Otfried; Cuquet, Martí; Steinhoff, Frank E. S.; Moroder, Tobias; Rossi, Matteo; Bruß, Dagmar; Kraus, Barbara; Macchiavello, Chiara
2014-08-01
Hypergraph states are multiqubit states that form a subset of the locally maximally entangleable states and a generalization of the well-established notion of graph states. Mathematically, they can conveniently be described by a hypergraph that indicates a possible generation procedure of these states; alternatively, they can also be phrased in terms of a nonlocal stabilizer formalism. In this paper, we explore the entanglement properties and nonclassical features of hypergraph states. First, we identify the equivalence classes under local unitary transformations for up to four qubits, as well as important classes of five- and six-qubit states, and determine various entanglement properties of these classes. Second, we present general conditions under which the local unitary equivalence of hypergraph states can simply be decided by considering a finite set of transformations with a clear graph-theoretical interpretation. Finally, we consider the question of whether hypergraph states and their correlations can be used to reveal contradictions with classical hidden-variable theories. We demonstrate that various noncontextuality inequalities and Bell inequalities can be derived for hypergraph states.
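The generation procedure indicated by the hypergraph is straightforward to reproduce numerically: prepare |+>^n and, for every hyperedge, flip the sign of each amplitude whose qubits in that edge are all 1 (a multi-controlled-Z gate). The sketch below builds a three-qubit example.

    import numpy as np

    def hypergraph_state(n, hyperedges):
        """n-qubit hypergraph state: |+>^n followed by a multi-controlled-Z
        for every hyperedge (sign flip where all its qubits are 1)."""
        psi = np.ones(2**n) / np.sqrt(2**n)
        for edge in hyperedges:
            for idx in range(2**n):
                if all((idx >> q) & 1 for q in edge):
                    psi[idx] *= -1.0
        return psi

    # Single hyperedge {0,1,2}: a CCZ applied to |+++>.
    psi = hypergraph_state(3, [(0, 1, 2)])
    print(psi)   # all amplitudes +1/sqrt(8) except the |111> component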
Equivalent-Continuum Modeling With Application to Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.
2002-01-01
A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined. This effective thickness has been shown to be significantly larger than the interatomic spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.
On a Modified Form of Navier-Stokes Equations for Three-Dimensional Flows
Venetis, J.
2015-01-01
A rephrased form of the Navier-Stokes equations is derived for incompressible, three-dimensional, unsteady flows according to the Eulerian formalism for the fluid motion. In particular, we propose a geometrical method for the elimination of the nonlinear terms of these fundamental equations, which are expressed in true vector form, and finally arrive at an equivalent system of three semilinear first-order PDEs, which hold for a three-dimensional rectangular Cartesian coordinate system. Next, we present the related variational formulation of these modified equations as well as a general type of weak solutions, which mainly concern Sobolev spaces. PMID:25918743
Alternative self-dual gravity in eight dimensions
NASA Astrophysics Data System (ADS)
Nieto, J. A.
2016-07-01
We develop an alternative Ashtekar formalism in eight dimensions. In fact, using a MacDowell-Mansouri physical framework and a self-dual curvature symmetry, we propose an action in eight dimensions in which the Levi-Civita tensor with eight indices plays a key role. We explicitly show that such an action contains linear, quadratic and cubic terms in the Riemann tensor, Ricci tensor and scalar curvature. In particular, the linear term reduces to the Einstein-Hilbert action with cosmological constant in eight dimensions. We prove that such a reduced action is equivalent to the Lovelock action in eight dimensions.
Fujisaki, Keisuke; Ikeda, Tomoyuki
2013-01-01
To connect different-scale models in the multi-scale problem of microwave use, equivalent material constants were investigated numerically by three-dimensional electromagnetic field analysis, taking into account eddy current and displacement current. A volume-averaged method and a standing wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as the composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods; different electric power is obtained for the two models. The differing electromagnetic phenomena derive from the expression of eddy current. For small electrical conductivity, such as that of water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity, such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume-averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395
NASA Astrophysics Data System (ADS)
Mao, Dan
The conditions in the solar interior are so extreme that it has so far been impossible to match them in a laboratory. However, for nearly 50 years solar oscillations have been precisely observed, and the wealth of their data has enabled us to study the interior of the Sun as if it were a laboratory. Helioseismology is the name of this branch of astrophysics. It allows a high-precision diagnostic of the thermodynamic quantities in the solar interior. High-quality thermodynamic quantities are crucial for successful solar modeling. If good solar models are desired, considerable theoretical effort is required. Good solar models, in turn, are fundamental tools for solar physics. The most prominent example of this link between solar physics and basic physics was the resolution of the solar neutrino problem in 2002. The equation of state is a key material property that describes the relation between pressure, density and temperature. If the equation of state is derived from a thermodynamic potential, it will also determine all associated thermodynamic quantities. A second key material property is the nuclear-energy production rate, which plays a crucial role in the solar core. Both are important physical properties describing the structure of the Sun, and both derive from microphysical models. In the equation-of-state part, we have studied two models of the equation of state (EOS). One is the MHD EOS, which is widely used in solar models. In our research, we have incorporated new terms into the MHD EOS. These terms have been borrowed from the major competing formalism, the OPAL EOS; they were missing in the original MHD EOS. Not only do the upgrades bring MHD closer to the OPAL equation of state, which is well known for its better match with observations; most importantly, they will allow solar modelers to use the OPAL equation of state directly, without recourse to the OPAL tables distributed by the Lawrence Livermore National Laboratory. Since the OPAL code is not publicly available, there is no alternative source. The official OPAL tables, however, have disadvantages. First, they are inflexible regarding the chemical mix, which is set once and for all by the producers of the tables. Our equation of state will allow the user to choose, in principle, an arbitrary mix. Second, the OPAL tables by their very nature are limited by the errors of interpolation within tables. The second equation-of-state model is a density expansion based on the Feynman-Kac path-integral formalism. By making use of the equivalence between the quantum Hamiltonian matrix and the classical action of closed and open filaments (paths), an analytic formalism for the equation of state is obtained. Although the character of a density expansion limits its application, the formalism remains valid in most regions of the Sun. Our work provides the link between the abstract theoretical formalism that was developed in the 1990s and a numerically smooth realization that can be used in solar and stellar models. Since it is so far the most exact and systematic approach to an EOS, it provides another way to study the influence of different very fine physical effects, despite considerable limitations in its domain of applicability. In the nuclear-reaction part of the thesis, we have used a molecular-dynamics method to simulate the motion of protons in a hydrogen plasma (which is a good approximation for this purpose). Quantum tunneling explains why nuclear fusion can occur in the first place, considering the "low" temperature in the solar core.
It is well known that this tunneling is enhanced (which leads to higher nuclear reaction rates) in the presence of Coulomb screening. In the 1950s, Salpeter formulated a theory based on the static screened Coulomb potential, as derived by Debye and Hückel in the 1920s. As expected, Salpeter obtained enhanced reaction rates. From our simulation, however, we confirmed the existence of a dynamic effect that has been the subject of recent controversy. Since the bulk of fusion reactions happens at the high end of the Maxwell distribution, this is a relevant issue. Our work is the first independent confirmation of such a dynamic effect.
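For orientation, the static Debye-Hückel screening length and Salpeter's weak-screening enhancement factor can be evaluated in a few lines; the plasma conditions below are illustrative stand-ins for the solar core, not values from the thesis.

    import numpy as np
    from scipy.constants import e, k, epsilon_0, pi

    T = 1.5e7      # temperature [K] (illustrative)
    n_e = 6e31     # electron number density [m^-3]; ion terms neglected here
    Z1 = Z2 = 1    # proton-proton reaction

    # Debye-Hückel screening length for a single charged species.
    lambda_D = np.sqrt(epsilon_0 * k * T / (n_e * e**2))

    # Salpeter weak-screening enhancement of the reaction rate.
    Lambda = Z1 * Z2 * e**2 / (4 * pi * epsilon_0 * lambda_D * k * T)
    print(f"lambda_D = {lambda_D:.2e} m, enhancement = {np.exp(Lambda):.3f}")

The dynamic effect discussed above is precisely what this static picture misses: molecular-dynamics simulation probes whether the fast protons in the Maxwell tail actually see the full equilibrium screening cloud.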
Improving Project Management Using Formal Models and Architectures
NASA Technical Reports Server (NTRS)
Kahn, Theodore; Sturken, Ian
2011-01-01
This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.
ERIC Educational Resources Information Center
Reushle, Shirley, Ed.; Antonio, Amy, Ed.; Keppell, Mike, Ed.
2016-01-01
The discipline of education is a multi-faceted system that must constantly integrate new strategies and procedures to ensure successful learning experiences. Enhancements in education provide learners with greater opportunities for growth and advancement. "Open Learning and Formal Credentialing in Higher Education: Curriculum Models and…
Leading the Teacher Team--Balancing between Formal and Informal Power in Program Leadership
ERIC Educational Resources Information Center
Högfeldt, Anna-Karin; Malmi, Lauri; Kinnunen, Päivi; Jerbrant, Anna; Strömberg, Emma; Berglund, Anders; Villadsen, Jørgen
2018-01-01
This continuous research within Nordic engineering institutions targets the contexts and possibilities for leadership among engineering education program directors. The IFP-model, developed based on analysis of interviews with program leaders in these institutions, visualizes the program director's informal and formal power. The model is presented…
Swedberg, Lena; Michélsen, Hans; Chiriac, Eva Hammar; Hylander, Ingrid
2015-06-01
To describe and analyse perceived competence and perceived responsibility among healthcare assistants (HC assistants), caring for patients with home mechanical ventilation (HMV) and other advanced caring needs, adjusted for socio-demographic and workplace background factors. A cross-sectional study was conducted including 128 HC assistants employed in Stockholm County, Sweden. The HC assistants responded to a study-specific questionnaire on perceived competence and perceived responsibility, provided socio-demographic and workplace background data, as well as information on the patient characteristics for the understanding of their work situations. Descriptive statistics and logistic regression analyses were performed. Eighty per cent of the HC assistants rated their perceived competence as high, and fifty-nine per cent rated their perceived responsibility as high. Fifty-five per cent lacked formal healthcare training, and only one in five of the HC assistants had formal training equivalent to a licensed practical nurse (LPN) examination. Males lacked formal training to a greater extent than females and rated their competence accordingly. On-the-job training was significantly associated with high ratings on both perceived competence and perceived responsibility, and clinical supervision was associated with high ratings on perceived responsibility. HC assistants with limited formal training self-reported their competence as high, and on-the-job training was found to be important. Clinical supervision was also found to be important for their perception of high responsibility. In Sweden, HC assistants have a 24-hour responsibility for the care and safety of their patients with HMV and other advanced caring needs. The study results point out important issues for further research regarding formal training requirements, as well as the need for standardised workplace training and supervision of HC assistants. The consequences of transfer of responsibility by delegation from healthcare professionals to paraprofessionals within advanced home care also need further study. © 2014 Nordic College of Caring Science.
New equivalent-electrical circuit model and a practical measurement method for human body impedance.
Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako
2015-01-01
Human body impedance analysis is an effective tool to extract electrical information from tissues in the human body. This paper presents a new method of impedance measurement using armpit electrodes and a new equivalent circuit model for the human body. The lowest impedance was measured by using an LCR meter and six electrodes, including the armpit electrodes. The electrical equivalent circuit model for the cell consists of a resistance R and a capacitance C. The R represents the electrical resistance of the liquid inside and outside the cell, and the C represents the high-frequency conductance of the cell membrane. We propose an equivalent circuit model which consists of five parallel high-frequency-passing CR circuits. The proposed equivalent circuit reproduces the alpha dispersion observed in the impedance measured at the lower frequency range, due to ion currents outside the cell, and the beta dispersion at the high frequency range, due to the cell membrane and the liquid inside the cell. The values calculated using the proposed equivalent circuit model were consistent with the measured values of human body impedance.
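A direct way to see the behavior of such a model is to evaluate the total impedance of the five parallel series-RC branches over frequency; the component values below are invented for illustration, not the fitted values of the paper.

    import numpy as np

    R = np.array([120.0, 300.0, 800.0, 2.0e3, 5.0e3])   # ohm (hypothetical)
    C = np.array([1e-6, 1e-7, 1e-8, 1e-9, 1e-10])       # farad (hypothetical)

    def body_impedance(f):
        """Five high-frequency-passing CR branches (R in series with C) in parallel."""
        w = 2 * np.pi * f
        z_branch = R + 1.0 / (1j * w * C)
        return 1.0 / np.sum(1.0 / z_branch)

    for f in (1e2, 1e4, 1e6):   # Hz
        z = body_impedance(f)
        print(f"{f:8.0f} Hz: |Z| = {abs(z):10.1f} ohm, "
              f"phase = {np.angle(z, deg=True):6.1f} deg")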
Can the Equivalent Sphere Model Approximate Organ Doses in Space?
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei
2007-01-01
For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray (GCR) environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For GCR environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin. The ranges of the radius parameters have also been investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin, while the radius parameters are between 10 and 13 cm for the BFO dose.
Technical note: Equivalent genomic models with a residual polygenic effect.
Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R
2016-03-01
Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was also verified for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium between SNP markers and the genes or causal mutations responsible for the genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we first prove that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we prove that the equivalence of these 2 genomic models with a residual polygenic effect also holds for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal predictions for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
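The core GBLUP/SNP BLUP identity (ignoring the residual polygenic term) is short enough to verify numerically: with G = ZZ' and matched variances, the GBLUP breeding values equal Z times the SNP BLUP effects. The simulation below is a generic sketch under those assumptions, not the authors' derivation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 200                                  # animals, SNP markers
    Z = rng.integers(0, 3, size=(n, m)).astype(float)
    Z -= Z.mean(axis=0)                             # centred marker codes
    y = Z @ rng.normal(0, 0.1, m) + rng.normal(0, 1, n)

    va, ve = 0.01, 1.0                              # SNP and residual variances (assumed known)
    V = va * Z @ Z.T + ve * np.eye(n)
    Vinv = np.linalg.inv(V)
    ones = np.ones(n)
    mu = (ones @ Vinv @ y) / (ones @ Vinv @ ones)   # GLS estimate of the mean
    r = y - mu * ones

    a_hat = va * Z.T @ Vinv @ r                     # SNP BLUP marker effects
    g_gblup = va * (Z @ Z.T) @ Vinv @ r             # GBLUP with G = ZZ'
    print(np.allclose(g_gblup, Z @ a_hat))          # True: equivalent predictions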
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
NASA Astrophysics Data System (ADS)
Nan, Miao; Junfeng, Li; Tianshu, Wang
2017-01-01
Subjected to external lateral excitations, large-amplitude sloshing may take place in propellant tanks, especially for spacecraft in low-gravity conditions, such as landers in the process of hover and obstacle avoidance during lunar soft landing. Due to lateral forces of the order of gravity in magnitude, the amplitude of liquid sloshing becomes too large for the traditional equivalent model to remain accurate. Therefore, a new equivalent mechanical model, denominated the "composite model", that can address large-amplitude lateral sloshing in partially filled spherical tanks is established in this paper, with both translational and rotational excitations considered. The hypothesis that the liquid equilibrium position follows the equivalent gravity is first proposed. By decomposing the large-amplitude motion of the liquid into bulk motion following the equivalent gravity and additional small-amplitude sloshing, a better simulation of large-amplitude liquid sloshing is obtained. The effectiveness and accuracy of the model are verified by comparing the slosh forces and moments to the results of the traditional model and CFD software.
Physical Invariants of Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
A program of research is dedicated to development of a mathematical formalism that could provide, among other things, means by which living systems could be distinguished from non-living ones. A major issue that arises in this research is the following question: What invariants of mathematical models of the physics of systems are (1) characteristic of the behaviors of intelligent living systems and (2) do not depend on specific features of material compositions heretofore considered to be characteristic of life? This research at earlier stages has been reported, albeit from different perspectives, in numerous previous NASA Tech Briefs articles. To recapitulate: One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Mathematical models of motor dynamics are used to simulate the observable physical behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. At the time of reporting the information for this article, the focus of this research was upon the following aspects of the formalism: Intelligence is considered to be a means by which a living system preserves itself and improves its ability to survive and is further considered to manifest itself in feedback from the mental dynamics to the motor dynamics. Because of the feedback from the mental dynamics, the motor dynamics attains quantum-like properties: The trajectory of the physical aspect of the system in the space of dynamical variables splits into a family of different trajectories, and each of those trajectories can be chosen with a probability prescribed by the mental dynamics. From a slightly different perspective, the mechanism of decision-making is feedback from the mental dynamics to the motor dynamics, and this mechanism provides a quantum-like collapse of a random motion into an appropriate deterministic state, such that entropy undergoes a pronounced decrease. The existence of this mechanism is considered to be an invariant of intelligent behavior of living systems, regardless of the origins and material compositions of the systems.
An equivalent body surface charge model representing three-dimensional bioelectrical activity
NASA Technical Reports Server (NTRS)
He, B.; Chernyak, Y. B.; Cohen, R. J.
1995-01-01
A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide an additional insight to our understanding of bioelectric phenomena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.
Pre-fabrication behavioural and performance analysis with computer aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass oscillator based 3 Degree of Freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components which are counterparts of their respective mechanical components, used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling and the simulation are presented in this paper. Behaviors of the equivalent lumped models derived for the proposed device design are simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. Drive mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient modeling technique for the analysis of complex MEMS devices like 3-DoF gyroscopes. The technique proves to be an alternative approach to the complex and time-consuming coupled-field Finite Element Analysis (FEA) previously used.
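The lumped electromechanical analogy behind such equivalent circuits maps mass to inductance, damping to resistance and compliance (1/stiffness) to capacitance, so the electrical resonance reproduces the mechanical one. The values below are invented, chosen only so the result lands near the reported 1.59 kHz drive resonance.

    import numpy as np

    m = 2.0e-8    # kg, drive mass (hypothetical)
    b = 1.0e-5    # N s/m, damping (hypothetical)
    k = 2.0       # N/m, spring stiffness (hypothetical)

    L, R, C = m, b, 1.0 / k                        # force-voltage analogy
    f_res = 1.0 / (2 * np.pi * np.sqrt(L * C))     # identical to sqrt(k/m)/(2*pi)
    print(f"resonant frequency = {f_res:.0f} Hz")  # ~1592 Hz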
Formal expressions and corresponding expansions for the exact Kohn-Sham exchange potential
NASA Astrophysics Data System (ADS)
Bulat, Felipe A.; Levy, Mel
2009-11-01
Formal expressions and their corresponding expansions in terms of Kohn-Sham (KS) orbitals are deduced for the exchange potential vx(r) . After an alternative derivation of the basic optimized effective potential integrodifferential equations is given through a Hartree-Fock adiabatic connection perturbation theory, we present an exact infinite expansion for vx(r) that is particularly simple in structure. It contains the very same occupied-virtual quantities that appear in the well-known optimized effective potential integral equation, but in this new expression vx(r) is isolated on one side of the equation. An orbital-energy modified Slater potential is its leading term which gives encouraging numerical results. Along different lines, while the earlier Krieger-Li-Iafrate approximation truncates completely the necessary first-order perturbation orbitals, we observe that the improved localized Hartree-Fock (LHF) potential, or common energy denominator potential (CEDA), or effective local potential (ELP), incorporates the part of each first-order orbital that consists of the occupied KS orbitals. With this in mind, the exact correction to the LHF, CEDA, or ELP potential (they are all equivalent) is deduced and displayed in terms of the virtual portions of the first-order orbitals. We close by observing that the newly derived exact formal expressions and corresponding expansions apply as well for obtaining the correlation potential from an orbital-dependent correlation energy functional.
Ding, Ming; Zhu, Qianlong
2016-01-01
Hardware protection and control action are two kinds of low voltage ride-through technical proposals widely used in permanent magnet synchronous generators (PMSG). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind and electrical distance to a fault point, to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages, without any clustering algorithm. Finally, PMSGs of the same group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A 2-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
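As a flavor of what such candidate circuits look like, the sketch below evaluates a generic Randles-type cell (ohmic resistance in series with a charge-transfer resistance paralleled by a double-layer capacitance); it is not claimed to be one of the 12 circuits of the paper, and the parameter values are illustrative.

    import numpy as np

    R_ohm, R_ct, C_dl = 0.01, 0.05, 0.5   # ohm, ohm, farad (illustrative)

    def z_cell(f):
        w = 2 * np.pi * f
        return R_ohm + R_ct / (1 + 1j * w * R_ct * C_dl)

    for f in np.logspace(-1, 4, 6):        # 0.1 Hz to 10 kHz
        z = z_cell(f)
        print(f"{f:10.1f} Hz: Z' = {z.real*1e3:6.2f} mohm, "
              f"-Z'' = {-z.imag*1e3:6.2f} mohm")

Fitting the real and imaginary parts of such a model to the measured spectrum, and then checking both the fit quality and the physical plausibility of the fitted parameters, mirrors the two-step selection logic described above.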
Formal reasoning about systems biology using theorem proving
Hasan, Osman; Siddique, Umair; Tahar, Sofiène
2017-01-01
Systems biology provides the basis to understand the behavioral properties of complex biological organisms at different levels of abstraction. Traditionally, the analysis of systems biology based models of various diseases has been carried out by paper-and-pencil proofs and simulations. However, these methods cannot provide an accurate analysis, which is a serious drawback for the safety-critical domain of human medicine. In order to overcome these limitations, we propose a framework to formally analyze biological networks and pathways. In particular, we formalize the notion of reaction kinetics in higher-order logic and formally verify some of the commonly used reaction based models of biological networks using the HOL Light theorem prover. Furthermore, we have ported our earlier formalization of Zsyntax, i.e., a deductive language for reasoning about biological networks and pathways, from HOL4 to the HOL Light theorem prover to make it compatible with the above-mentioned formalization of reaction kinetics. To illustrate the usefulness of the proposed framework, we present the formal analysis of three case studies, i.e., the pathway leading to TP53 phosphorylation, the pathway leading to the death of cancer stem cells and the tumor growth based on cancer stem cells, which is used for prognosis and future drug design to treat cancer patients. PMID:28671950
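For readers unfamiliar with reaction kinetics, the law of mass action that the framework formalizes reduces, for a reversible reaction A + B <-> C with rate constants kf and kr, to a small ODE system. The numerical sketch below is only illustrative: the paper's contribution is the HOL Light formalization, not simulation.

    import numpy as np
    from scipy.integrate import solve_ivp

    kf, kr = 1.0, 0.1                        # forward/backward rate constants

    def rhs(t, y):
        a, b, c = y
        v = kf * a * b - kr * c              # net mass-action reaction velocity
        return [-v, -v, v]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5, 0.0])
    print(sol.y[:, -1])                      # concentrations near equilibrium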
Formal analysis of temporal dynamics in anxiety states and traits for virtual patients
NASA Astrophysics Data System (ADS)
Aziz, Azizi Ab; Ahmad, Faudziah; Yusof, Nooraini; Ahmad, Farzana Kabir; Yusof, Shahrul Azmi Mohd
2016-08-01
This paper presents a temporal dynamic model of anxiety states and traits for an individual. Anxiety is a natural part of life, and most of us experience it from time to time. But for some people, anxiety can be extreme. Based on several personal characteristics, traits, and a representation of events (i.e. psychological and physiological stressors), the formal model can represent whether a human who experiences certain scenarios will fall into an anxiety-state condition. A number of well-known relations between events and the course of anxiety are summarized from the literature, and it is shown that the model exhibits those patterns. In addition, the formal model has been mathematically analyzed to find out which stable situations exist. Finally, it is pointed out how this model can be used in therapy, supported by a software agent.
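A minimal numerical sketch of such state-trait dynamics is shown below; the relaxation equation and parameters are invented for illustration and are not the authors' model, but they show how a stable anxiety level emerges for a constant stressor input.

    import numpy as np

    def simulate(trait, stressor, gamma=0.4, steps=100, dt=0.1):
        """Anxiety state A relaxes toward a level set by trait and stressor."""
        A, history = 0.0, []
        for _ in range(steps):
            target = trait * stressor          # equilibrium for constant input
            A += gamma * (target - A) * dt     # forward Euler step
            history.append(A)
        return np.array(history)

    high = simulate(trait=0.9, stressor=0.8)
    low = simulate(trait=0.2, stressor=0.8)
    print(f"stable levels: high trait {high[-1]:.2f}, low trait {low[-1]:.2f}")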
Formal Methods for Automated Diagnosis of Autosub 6000
NASA Technical Reports Server (NTRS)
Ernits, Juhan; Dearden, Richard; Pebody, Miles
2009-01-01
This is a progress report on applying formal methods in the context of building an automated diagnosis and recovery system for Autosub 6000, an Autonomous Underwater Vehicle (AUV). The diagnosis task involves building abstract models of the control system of the AUV. The diagnosis engine is based on Livingstone 2, a model-based diagnoser originally built for aerospace applications. Large parts of the diagnosis model can be built without concrete knowledge about each mission, but actual mission scripts and configuration parameters that carry important information for diagnosis are changed for every mission. Thus we use formal methods for generating the mission control part of the diagnosis model automatically from the mission script and perform a number of invariant checks to validate the configuration. After the diagnosis model is augmented with the generated mission control component model, it needs to be validated using verification techniques.
Montefusco, Alberto; Consonni, Francesco; Beretta, Gian Paolo
2015-04-01
By reformulating the steepest-entropy-ascent (SEA) dynamical model for nonequilibrium thermodynamics in the mathematical language of differential geometry, we compare it with the primitive formulation of the general equation for the nonequilibrium reversible-irreversible coupling (GENERIC) model and discuss the main technical differences of the two approaches. In both dynamical models the description of dissipation is of the "entropy-gradient" type. SEA focuses only on the dissipative, i.e., entropy generating, component of the time evolution, chooses a sub-Riemannian metric tensor as dissipative structure, and uses the local entropy density field as potential. GENERIC emphasizes the coupling between the dissipative and nondissipative components of the time evolution, chooses two compatible degenerate structures (Poisson and degenerate co-Riemannian), and uses the global energy and entropy functionals as potentials. As an illustration, we rewrite the known GENERIC formulation of the Boltzmann equation in terms of the square root of the distribution function adopted by the SEA formulation. We then provide a formal proof that in more general frameworks, whenever all degeneracies in the GENERIC framework are related to conservation laws, the SEA and GENERIC models of the dissipative component of the dynamics are essentially interchangeable, provided of course they assume the same kinematics. As part of the discussion, we note that equipping the dissipative structure of GENERIC with the Leibniz identity makes it automatically SEA on metric leaves.
Going from microscopic to macroscopic on nonuniform growing domains.
Yates, Christian A; Baker, Ruth E; Erban, Radek; Maini, Philip K
2012-08-01
Throughout development, chemical cues are employed to guide the functional specification of underlying tissues while the spatiotemporal distributions of such chemicals can be influenced by the growth of the tissue itself. These chemicals, termed morphogens, are often modeled using partial differential equations (PDEs). The connection between discrete stochastic and deterministic continuum models of particle migration on growing domains was elucidated by Baker, Yates, and Erban [Bull. Math. Biol. 72, 719 (2010)] in which the migration of individual particles was modeled as an on-lattice position-jump process. We build on this work by incorporating a more physically reasonable description of domain growth. Instead of allowing underlying lattice elements to instantaneously double in size and divide, we allow incremental element growth and splitting upon reaching a predefined threshold size. Such a description of domain growth necessitates a nonuniform partition of the domain. We first demonstrate that an individual-based stochastic model for particle diffusion on such a nonuniform domain partition is equivalent to a PDE model of the same phenomenon on a nongrowing domain, providing the transition rates (which we derive) are chosen correctly and we partition the domain in the correct manner. We extend this analysis to the case where the domain is allowed to change in size, altering the transition rates as necessary. Through application of the master equation formalism we derive a PDE for particle density on this growing domain and corroborate our findings with numerical simulations.
ERIC Educational Resources Information Center
Goldratt, Miri; Cohen, Eric H.
2016-01-01
This article explores encounters between formal, informal, and non-formal education and the role of mentor-educators in creating values education in which such encounters take place. Mixed-methods research was conducted in Israeli public schools participating in the Personal Education Model, which combines educational modes. Ethnographic and…
Caricato, Marco
2013-07-28
The calculation of vertical electronic transition energies of molecular systems in solution with accurate quantum mechanical methods requires the use of approximate and yet reliable models to describe the effect of the solvent on the electronic structure of the solute. The polarizable continuum model (PCM) of solvation represents a computationally efficient way to describe this effect, especially when combined with coupled cluster (CC) methods. Two formalisms are available to compute transition energies within the PCM framework: State-Specific (SS) and Linear-Response (LR). The former provides a more complete account of the solute-solvent polarization in the excited states, while the latter is computationally very efficient (i.e., comparable to gas phase) and transition properties are well defined. In this work, I review the theory for the two formalisms within CC theory with a focus on their computational requirements, and present the first implementation of the LR-PCM formalism with the coupled cluster singles and doubles method (CCSD). Transition energies computed with LR- and SS-CCSD-PCM are presented, as well as a comparison between solvation models in the LR approach. The numerical results show that the two formalisms provide different absolute values of transition energy, but similar relative solvatochromic shifts (from nonpolar to polar solvents). The LR formalism may then be used to explore the solvent effect on multiple states and evaluate transition probabilities, while the SS formalism may be used to refine the description of specific states and for the exploration of excited state potential energy surfaces of solvated systems.
NASA Technical Reports Server (NTRS)
Hall, Brendan; Driscoll, Kevin; Schweiker, Kevin; Dutertre, Bruno
2013-01-01
Within distributed fault-tolerant systems the term force-fight is colloquially used to describe the level of command disagreement present at redundant actuation interfaces. This report details an investigation of force-fight using three distributed system case-study architectures. Each case study architecture is abstracted and formally modeled using the Symbolic Analysis Laboratory (SAL) tool chain from the Stanford Research Institute (SRI). We use the formal SAL models to produce k-induction based proofs of a bounded actuation agreement property. We also present a mathematically derived bound of redundant actuation agreement for sine-wave stimulus. The report documents our experiences and lessons learned developing the formal models and the associated proofs.
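The quantity being bounded can be pictured with a toy mid-value-select surface: each redundant lane computes the same command with its own jitter and noise, the actuator takes the median, and force-fight is the residual disagreement. The noise magnitudes below are invented, and this sketch is far simpler than the SAL models of the report.

    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 1.0, 500)
    # Three redundant lanes computing a 2 Hz sine command with individual
    # phase jitter and sensor noise (hypothetical magnitudes).
    lanes = [np.sin(2 * np.pi * 2 * t + rng.normal(0, 0.01))
             + rng.normal(0, 0.005, t.size) for _ in range(3)]

    surface = np.median(np.vstack(lanes), axis=0)   # mid-value select
    force_fight = max(np.max(np.abs(lane - surface)) for lane in lanes)
    print(f"worst-case command disagreement: {force_fight:.4f}")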
Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications.
Zhang, Jie; Kang, Man; Li, Xiaojuan; Liu, Geng-Yang
2017-01-01
Genetic algorithms are widely adopted to solve optimization problems in robotic applications. In such safety-critical systems, it is vitally important to formally prove the correctness of the genetic algorithms being applied. This paper focuses on the formal modeling of crossover operations, which are among the most important operations in genetic algorithms. Specifically, we formalize, for the first time, crossover operations in higher-order logic based on HOL4, which is easy to deploy thanks to its user-friendly programming environment. With correctness-guaranteed formalized crossover operations, we can safely apply them in robotic applications. We implement our technique to solve a path planning problem using a genetic algorithm with our formalized crossover operations, and the results show the effectiveness of our technique.
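For concreteness, a single-point crossover of the kind the paper formalizes can be written in a few lines; the length-preservation assertion below stands in for the correctness properties that would be proved formally in HOL4.

    import random

    def single_point_crossover(p1, p2, rng):
        """Swap the parents' tails after a random cut point, preserving length."""
        assert len(p1) == len(p2) and len(p1) >= 2
        cut = rng.randrange(1, len(p1))        # never cut at the chromosome ends
        c1 = p1[:cut] + p2[cut:]
        c2 = p2[:cut] + p1[cut:]
        assert len(c1) == len(p1) and len(c2) == len(p2)
        return c1, c2

    rng = random.Random(42)
    print(single_point_crossover([0, 1, 2, 3, 4], [5, 6, 7, 8, 9], rng))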
Modelling Ni-mH battery using Cauer and Foster structures
NASA Astrophysics Data System (ADS)
Kuhn, E.; Forgez, C.; Lagonotte, P.; Friedrich, G.
This paper deals with dynamic models of Ni-MH batteries and focuses on the development of equivalent electric models. We propose two equivalent electric models, using Cauer and Foster structures, able to capture both the dynamic and energetic behavior of the battery. These structures are well adapted to real-time applications (e.g., battery management systems) or system simulations. Special attention is paid to the influence of the complexity of the equivalent electric scheme on the precision of the model. Experimental validations allow us to discuss the performance of the proposed models.
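The two structures are easy to compare numerically. Below is a small sketch, not the paper's identified parameters (which the abstract does not give), that evaluates the terminal impedance of a Foster chain of parallel R-C cells and of a Cauer R-C ladder; all component values are placeholders.

```python
import numpy as np

def z_foster(w, r0, cells):
    """Impedance of a Foster structure: a series resistance r0 plus a
    chain of parallel R-C cells (one cell per diffusion time constant)."""
    z = np.full_like(w, r0, dtype=complex)
    for r, c in cells:
        z += r / (1 + 1j * w * r * c)
    return z

def z_cauer(w, r0, cells):
    """Impedance of a Cauer (ladder) structure, evaluated as a continued
    fraction from the innermost cell outward; each cell is a series R
    followed by a shunt C."""
    z = None
    for r, c in reversed(cells):
        shunt = 1j * w * c if z is None else 1j * w * c + 1.0 / z
        z = r + 1.0 / shunt
    return r0 + z

w = 2 * np.pi * np.logspace(-4, 1, 200)      # 0.1 mHz .. 10 Hz
cells = [(2e-3, 500.0), (1e-3, 5000.0)]      # placeholder R [ohm], C [F]
print(abs(z_foster(w, 1e-3, cells)[0]), abs(z_cauer(w, 1e-3, cells)[0]))
```

Adding cells refines the low-frequency diffusion behavior at the cost of more parameters to identify, which is exactly the complexity/precision trade-off the paper studies.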
Low Earth orbit assessment of proton anisotropy using AP8 and AP9 trapped proton models
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Walker, Steven A.; Santos Koos, Lindsey M.
2015-04-01
The completion of the International Space Station (ISS) in 2011 has provided the space research community with an ideal evaluation and testing facility for future long-duration human activities in space. Measurements of ionized and secondary neutral particle radiation inside ISS form an ideal tool for validation of radiation environment models, nuclear reaction cross sections and transport codes. Studies using thermo-luminescent detectors (TLD), tissue-equivalent proportional counters (TEPC), and computer-aided design (CAD) models of early ISS configurations confirmed that, as input, computational dosimetry at low Earth orbit (LEO) requires an environmental model with directional (anisotropic) capability to properly describe the exposure to trapped protons within ISS. At LEO, ISS encounters exposure from trapped electrons, protons and geomagnetically attenuated galactic cosmic rays (GCR). For short-duration studies at LEO, one can ignore trapped electrons and the ever-present GCR exposure contributions during quiet times. Within the trapped proton field, however, a challenge arises in properly estimating the amount of proton exposure acquired. A number of models exist to define the intensity of trapped particles. Among the established trapped models are the historic AE8/AP8, dating back to the 1980s, and the recently released AE9/AP9/SPM. Since at LEO electrons contribute minimally to ISS exposure, this work ignores the AE8 and AE9 components of the models and couples a measurement-derived anisotropic trapped proton formalism to omnidirectional output from the AP8 and AP9 models, allowing assessment of the differences between the two proton models. The assessment is done at a target point within the crew quarters (CQ) of the Russian Zvezda service module (SM) in the ISS-11A configuration (circa 2003), during its ascending and descending node passes through the South Atlantic Anomaly (SAA). The anisotropic formalism incorporates the contributions of the proton narrow pitch angle (PA) and east-west (EW) effects. Within the SAA, the EW anisotropy results in different levels of exposure on each side of the ISS Zvezda SM, allowing angular evaluation of the anisotropic proton spectrum. While the combined magnitude of the PA and EW effects at LEO depends on a multitude of factors, such as trapped proton energy and the orientation and altitude of the spacecraft along the velocity vector, this paper draws quantitative conclusions on the combined anisotropic magnitude differences between the AP8 and AP9 models at the ISS SM target point.
Cardiovascular Pharmacogenomics and Cognitive Function in Patients with Schizophrenia.
Ward, Kristen M; Kraal, A Zarina; Flowers, Stephanie A; Ellingrod, Vicki L
2017-09-01
The authors sought to examine the impact of multiple risk alleles for cognitive dysfunction and cardiovascular disease risk on cognitive function and to determine if these relationships varied by cognitive reserve (CR) or concomitant medication use in patients with schizophrenia. They conducted a cross-sectional study in ambulatory mental health centers. A total of 122 adults with a schizophrenia spectrum diagnosis who were maintained on a stable antipsychotic regimen for at least 6 months before study enrollment were included. Patients were divided into three CR groups based on years of formal education: no high school completion or equivalent (low-education group [18 patients]), completion of high school or equivalent (moderate-education group [36 patients]), or any degree of post-high school education (high-education group [68 patients]). The following pharmacogenomic variants were genotyped for each patient: AGT M268T (rs699), ACE insertion/deletion (or ACE I/D, rs1799752), and APOE ε2, ε3, and ε4 (rs429358 and rs7412). Risk allele carrier status (identified per gene as AGT M268T carriers, ACE D carriers, and APOE ε4 carriers) was not significantly different among CR groups. The Brief Assessment of Cognition in Schizophrenia (BACS) scale was used to assess cognitive function. The mean ± SD patient age was 43.9 ± 11.6 years. Cardiovascular risk factors such as hypertension and hyperlipidemia diagnoses, and use of antihypertensive and lipid-lowering agents, did not significantly differ among CR groups. Mixed modeling revealed that risk allele carrier status was significantly associated with lower verbal memory scores for ACE D and APOE ε4 carriers, but AGT T carrier status was significantly associated with higher verbal memory scores (p=0.0188, p=0.0055, and p=0.0058, respectively). These results were only significant in the low-education group. In addition, medication-gene interactions were not significant predictors of BACS scores. ACE D and APOE ε4 carrier status, independent of medication use, was associated with lower verbal memory scores in patients with schizophrenia who had relatively lower CR, as identified by formal education. These results suggest that increasing CR may be protective against cognitive impairment that may be worsened by select cardiovascular risk alleles in patients with schizophrenia. © 2017 Pharmacotherapy Publications, Inc.
Equivalent circuit model of Ge/Si separate absorption charge multiplication avalanche photodiode
NASA Astrophysics Data System (ADS)
Wang, Wei; Chen, Ting; Yan, Linshu; Bao, Xiaoyuan; Xu, Yuanyuan; Wang, Guang; Wang, Guanyu; Yuan, Jun; Li, Junfeng
2018-03-01
An equivalent circuit model of the Ge/Si separate absorption charge multiplication avalanche photodiode (SACM-APD) is proposed. Starting from the carrier rate equations in the different regions of the device, and considering the influences of the non-uniform electric field, noise, parasitic effects and other factors, an equivalent circuit model of the SACM-APD device is established in which the steady-state and transient current-voltage characteristics can be described exactly. In addition, the proposed Ge/Si SACM-APD equivalent circuit model is embedded in the PSpice simulator. Important characteristics of the Ge/Si SACM-APD such as dark current, frequency response and shot noise are simulated, and the simulation results obtained with the proposed model are in good agreement with the experimental results.
An object-oriented description method of EPMM process
NASA Astrophysics Data System (ADS)
Jiang, Zuo; Yang, Fan
2017-06-01
In order to use mature object-oriented tools and languages for software process models, and to make software process models conform better to industrial standards, it is necessary to study object-oriented modeling of software processes. Based on the formal process definition in EPMM, and considering that Petri nets are primarily a formal modeling tool, this paper combines Petri net modeling with object-oriented modeling ideas and provides an implementation method for converting Petri-net-based EPMM models into object-oriented object models.
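As background for the conversion idea: a place/transition Petri net maps naturally onto classes, with places and transitions as objects and the firing rule as a method. The sketch below is a generic object-oriented rendering of that firing rule, not the EPMM mapping itself; the process step names are illustrative.

```python
class Place:
    """A Petri net place holding a token count."""
    def __init__(self, name, tokens=0):
        self.name, self.tokens = name, tokens

class Transition:
    """A Petri net transition; firing consumes one token from every input
    place and produces one token in every output place."""
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, inputs, outputs

    def enabled(self):
        return all(p.tokens > 0 for p in self.inputs)

    def fire(self):
        if not self.enabled():
            raise RuntimeError(f"transition {self.name} is not enabled")
        for p in self.inputs:
            p.tokens -= 1
        for p in self.outputs:
            p.tokens += 1

# Illustrative process fragment: a design activity consumes a ready
# requirements artifact and produces a design artifact.
req, design = Place('requirements', tokens=1), Place('design')
do_design = Transition('do_design', inputs=[req], outputs=[design])
do_design.fire()
print(req.tokens, design.tokens)   # 0 1
```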
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
NASA Astrophysics Data System (ADS)
Fitzpatrick, Matthew R. C.; Kennett, Malcolm P.
2018-05-01
We develop a formalism that allows the study of correlations in space and time in both the superfluid and Mott insulating phases of the Bose-Hubbard model. Specifically, we obtain a two-particle irreducible effective action within the contour-time formalism that allows for both equilibrium and out-of-equilibrium phenomena. We derive equations of motion for both the superfluid order parameter and two-point correlation functions. To assess the accuracy of this formalism, we study the equilibrium solution of the equations of motion and compare our results to existing strong-coupling methods as well as exact methods where possible. We discuss applications of this formalism to out-of-equilibrium situations.
Defining role models for staff orientation.
Kinley, H
This article examines the need for a formal role model to help integrate new staff within a unit. While acknowledging the range of titles and functions ascribed to such a role in the literature, the author suggests that the essence of the role and its formal recognition have benefits for experienced staff and orientees alike.
Formal and On-the-Job Training in Military Occupations.
ERIC Educational Resources Information Center
Bateman, C. W.
A model is constructed to determine the best proportion of formal and on-the-job training in military occupations. Special consideration is given to the unique situation of enlisted personnel's fixed length of service, the small percentage of re-enlistment, and the necessity of training all enlistees for assigned occupations. The model formula…
BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism
NASA Astrophysics Data System (ADS)
Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin (BFT) and the Batalin-Fradkin-Vilkovisky (BFV) formalisms. First, the BFT Hamiltonian method is applied in order to systematically convert the second-class constraint system of the model into an effectively first-class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach which is simpler than the usual one. We also show that in our model the Dirac brackets of the phase-space variables in the original second-class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space, due to the linear character of the constraints, in contrast to the Dirac or Faddeev-Jackiw formalisms. Then, following the BFV formalism, we show that the resulting Lagrangian, which preserves BRST symmetry under the standard local gauge-fixing procedure, naturally includes the Stückelberg scalar related to the explicit breaking of gauge symmetry by the mass term. We also analyze the nonstandard nonlocal gauge-fixing procedure.
Consumer experience of formal crisis-response services and preferred methods of crisis intervention.
Boscarato, Kara; Lee, Stuart; Kroschel, Jon; Hollander, Yitzchak; Brennan, Alice; Warren, Narelle
2014-08-01
The manner in which people with mental illness are supported in a crisis is crucial to their recovery. The current study explored mental health consumers' experiences with formal crisis services (i.e. police and crisis assessment and treatment (CAT) teams), preferred crisis supports, and opinions of four collaborative interagency response models. Eleven consumers completed one-on-one, semistructured interviews. The results revealed that the perceived quality of previous formal crisis interventions varied greatly. Most participants preferred family members or friends to intervene. However, where a formal response was required, general practitioners and mental health case managers were preferred; no participant wanted a police response, and only one indicated a preference for CAT team assistance. Most participants welcomed collaborative crisis interventions. Of four collaborative interagency response models currently being trialled internationally, participants most strongly supported the Ride-Along Model, which enables a police officer and a mental health clinician to jointly respond to distressed consumers in the community. The findings highlight the potential for an interagency response model to deliver a crisis response aligned with consumers' preferences. © 2014 Australian College of Mental Health Nurses Inc.
Assessing Knowledge of Mathematical Equivalence: A Construct-Modeling Approach
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Matthews, Percival G.; Taylor, Roger S.; McEldoon, Katherine L.
2011-01-01
Knowledge of mathematical equivalence, the principle that 2 sides of an equation represent the same value, is a foundational concept in algebra, and this knowledge develops throughout elementary and middle school. Using a construct-modeling approach, we developed an assessment of equivalence knowledge. Second through sixth graders (N = 175)…
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
Bass, Ellen J.
2011-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient-controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
ADM Analysis of gravity models within the framework of bimetric variational formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovnev, Alexey; Karčiauskas, Mindaugas; Nyrhinen, Hannu J., E-mail: agolovnev@yandex.ru, E-mail: mindaugas.karciauskas@helsinki.fi, E-mail: hannu.nyrhinen@helsinki.fi
2015-05-01
Bimetric variational formalism was recently employed to construct novel bimetric gravity models. In these models an affine connection is generated by an additional tensor field which is independent of the physical metric. In this work we demonstrate how the ADM decomposition can be applied to study such models and provide some technical intermediate details. Using ADM decomposition we are able to prove that a linear model is unstable, as has previously been indicated by perturbative analysis. Moreover, we show that it is also very difficult if not impossible to construct a non-linear model which is ghost-free within the framework of bimetric variational formalism. However, we demonstrate that viable models are possible along similar lines of thought. To this end, we consider a set up in which the affine connection is a variation of the Levi-Civita one. As a proof of principle we construct a gravity model with a massless scalar field obtained this way.
Dependability modeling and assessment in UML-based software development.
Bernardi, Simona; Merseguer, José; Petriu, Dorina C
2012-01-01
Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imbert, Bruno; Lafosse, Fabien; Catoire, Laurent
2008-11-15
This article is part of a project to model the kinetics of high-temperature combustion occurring behind shock waves and in detonation waves. The "conventional" semi-empirical correlations of ignition delays have been reformulated, keeping the Arrhenius equation form. It is shown how a polynomial with 3^N coefficients (where N ∈ ℕ is the number of adjustable kinetic parameters, likely to be chosen simultaneously among the temperature T, the pressure P, the inert fraction X_Ar, and the equivalence ratio φ) can reproduce the delays predicted by the Curran et al. [H.J. Curran, P. Gaffuri, W.J. Pitz, C.K. Westbrook, Combust. Flame 129 (2002) 253-280] detailed mechanism (565 species and 2538 reactions) over a wide range of conditions (comparable with the validity domain). The deviations between the simulated times and their fits (typically 1%) are definitely lower than the uncertainties related to the mechanism (at least 25%). In addition, using this new formalism to evaluate these durations is about 10^6 times faster than simulating them with SENKIN (CHEMKIN III package) and only 10 times slower than using the classical correlations. The adaptation of the traditional method for predicting delays is interesting for modeling, because those performances are difficult to obtain simultaneously with other reduction methods (either purely mathematical, chemical, or even mixed). After a physical and mathematical justification of the proposed formalism, some of its potentialities for n-heptane combustion are presented. In particular, the trends of simulated delays and activation energies are shown for T ∈ [1500 K, 1900 K], P ∈ [10 kPa, 1 MPa], X_Ar ∈ [0, 0.7], and φ ∈ [0.25, 4.0].
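The structure of such a correlation is simple to state in code. The sketch below evaluates ln(τ) as a polynomial with 3^N coefficients, one basis power in {0, 1, 2} per variable, for an assumed two-variable case in (1000/T, ln φ); all coefficient values are placeholders, not the paper's fit.

```python
import itertools
import numpy as np

def ln_tau(coeffs, variables):
    """ln(tau) as a polynomial with 3**N coefficients: each of the N
    adjustable variables enters with basis powers {0, 1, 2}. `coeffs`
    maps exponent tuples to fitted coefficients (placeholders here)."""
    total = 0.0
    for exps in itertools.product(range(3), repeat=len(variables)):
        term = coeffs.get(exps, 0.0)
        for v, e in zip(variables, exps):
            term *= v ** e
        total += term
    return total

# Illustrative 2-variable correlation in (1000/T, ln phi). A positive
# coefficient on the (1, 0) term preserves the Arrhenius form
# ln(tau) ~ const + E/(R T).
coeffs = {(0, 0): -16.0, (1, 0): 15.0, (0, 1): -0.8, (1, 1): 0.2}
T, phi = 1700.0, 1.0
print("tau [s] ~", np.exp(ln_tau(coeffs, (1000.0 / T, np.log(phi)))))
```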
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
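The abstract's averaging step is easy to reproduce. The sketch below assumes one concrete power-law profile consistent with the description, with center and edge indices fixed at 1.415 and 1.37 as stated; the exponent p is an illustrative free parameter, with larger p mimicking the flatter central index plateau of older lenses.

```python
import numpy as np
from scipy.integrate import quad

n_center, n_edge = 1.415, 1.37   # fixed center/edge indices (per abstract)
p = 5.4                          # power exponent: an illustrative assumption

def n(rho):
    """Refractive index as a power function of normalized distance rho
    from the lens center (rho = 0) to the lens edge (rho = 1)."""
    return n_edge + (n_center - n_edge) * (1.0 - rho**p)

avg, _ = quad(n, 0.0, 1.0)       # average index along the axis
print(f"average axial index = {avg:.4f}")   # ~1.408 for p ~ 5.4
```

The equivalent index cannot be obtained by this integral; as the abstract notes, it requires raytracing a model eye to find the constant index that reproduces the refraction of the gradient.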
ERIC Educational Resources Information Center
Ward, Ted W.; Herzog, William A., Jr.
This document is part of a series dealing with nonformal education. Introductory information is included in document SO 008 058. The focus of this report is on the learning effectiveness of nonformal education. Chapter 1 compares effective learning in a formal and nonformal environment. Chapter 2 develops a systems model for designers of learning…
ERIC Educational Resources Information Center
Mars, Matthew M.; Ball, Anna L.
2016-01-01
The mainstream agricultural literacy movement has been mostly focused on school-based learning through formal curricula and standardized non-formal models (e.g., FFA, 4-H). The purpose of the current study is to qualitatively explore through a grounded theory approach, the development, sharing, and translation of diverse forms of agricultural…
McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David
2018-07-01
A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper focuses on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may also have utility for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. We also argue that such algorithm development may encourage parametric research, help integrate new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model the control of basic behavioral processes, which is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.
A Framework for Modeling Workflow Execution by an Interdisciplinary Healthcare Team.
Kezadri-Hamiaz, Mounira; Rosu, Daniela; Wilk, Szymon; Kuziemsky, Craig; Michalowski, Wojtek; Carrier, Marc
2015-01-01
The use of business workflow models in healthcare is limited because of insufficient capture of complexities associated with behavior of interdisciplinary healthcare teams that execute healthcare workflows. In this paper we present a novel framework that builds on the well-founded business workflow model formalism and related infrastructures and introduces a formal semantic layer that describes selected aspects of team dynamics and supports their real-time operationalization.
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of an automated teller machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with the specifications given as LTL formulas.
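For safety properties, SPIN's core loop is an exhaustive walk of the reachable state space. The sketch below replays that idea in Python on a toy ATM transition system; the states and the invariant ("never dispense cash without a verified PIN") are invented for illustration and are not the paper's PROMELA model. Checking full LTL additionally requires Büchi automata, which SPIN constructs automatically.

```python
from collections import deque

# Hedged sketch: the ATM states and the invariant ("never dispense cash
# without a verified PIN") are invented for illustration; the paper's
# actual model is written in PROMELA and checked with SPIN.
INIT = ('idle', False)                     # (screen, pin_verified)

def successors(state):
    screen, ok = state
    if screen == 'idle':
        yield ('pin_entry', False)
    elif screen == 'pin_entry':
        yield ('menu', True)               # correct PIN entered
        yield ('idle', False)              # wrong PIN or abort
    elif screen == 'menu':
        yield ('dispense', ok)
        yield ('idle', False)
    elif screen == 'dispense':
        yield ('idle', False)

def safe(state):
    screen, ok = state
    return not (screen == 'dispense' and not ok)

# Breadth-first exploration of the reachable state space: the core of
# explicit-state model checking for safety (invariant) properties.
seen, frontier = {INIT}, deque([INIT])
while frontier:
    s = frontier.popleft()
    assert safe(s), f"counterexample state: {s}"
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
print(f"invariant holds on all {len(seen)} reachable states")
```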
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vollick, Dan N.
In recent papers [D. N. Vollick, Phys. Rev. D 68, 063510 (2003); D. N. Vollick, Classical Quantum Gravity 21, 3813 (2004)] I have argued that the observed cosmological acceleration can be accounted for by the inclusion of a 1/R term in the gravitational action in the Palatini formalism. Subsequently, Flanagan [Phys. Rev. Lett. 92, 071101 (2004); Class. Quant. Grav. 21, 3817 (2004)] argued that this theory is equivalent to a scalar-tensor theory which produces corrections to the standard model that are ruled out experimentally. In this article I examine the Dirac field coupled to 1/R gravity. The Dirac action contains the connection, which was taken to be the Christoffel symbol, not an independent quantity, in the papers by Flanagan. Since the metric and connection are taken to be independent in the Palatini approach, it is natural to allow the connection that appears in the Dirac action to be an independent quantity. This is the approach taken in this paper. The resulting theory is very different from, and much more complicated than, the one discussed in Flanagan's papers.
A workload model and measures for computer performance evaluation
NASA Technical Reports Server (NTRS)
Kerner, H.; Kuemmerle, K.
1972-01-01
A generalized workload definition is presented which constructs measurable workloads of unit size from workload elements, called elementary processes. An elementary process makes almost exclusive use of one of the processors (CPU, I/O processor, etc.) and is measured by the cost of its execution. Various kinds of user programs can be simulated by quantitative composition of elementary processes into a type. The character of the type is defined by the weights of its elementary processes and its structure by the amount and sequence of transitions between its elementary processes. A set of types is batched into a mix. Mixes of identical cost are considered as equivalent amounts of workload. These formalized descriptions of workloads allow investigators to compare the results of different studies quantitatively. Since workloads of different composition are assigned a unit of cost, these descriptions enable determination of the cost effectiveness of different workloads on a machine. Subsequently, performance parameters such as throughput rate, gain factor, and internal and external delay factors are defined and used to demonstrate the effects of various workload attributes on the performance of a selected large-scale computer system.
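The cost algebra in this abstract is compact enough to state in a few lines of code. The sketch below uses illustrative names and cost figures, not the paper's measurements: elementary processes are priced by execution cost, a "type" is a weighted composition, and a "mix" is normalized to unit cost so that different mixes become comparable workloads.

```python
# Hedged sketch of the workload algebra described above; the elementary
# processes, their costs, and the type weights are illustrative numbers,
# not the paper's measurements.
ELEMENTARY_COST = {'cpu': 1.0, 'io': 2.5}          # cost per invocation

def type_cost(weights):
    """A 'type' is characterized by the weights of its elementary
    processes; its cost is the weighted sum of elementary costs."""
    return sum(ELEMENTARY_COST[ep] * w for ep, w in weights.items())

compute_bound = {'cpu': 8, 'io': 1}                # CPU-heavy user program
io_bound      = {'cpu': 2, 'io': 4}                # I/O-heavy user program

# A 'mix' batches types; mixes of identical cost count as equivalent
# workloads, so the batch is normalized to unit cost for comparison.
mix = [(compute_bound, 2), (io_bound, 1)]
raw_cost = sum(type_cost(t) * n for t, n in mix)
unit_mix = [(t, n / raw_cost) for t, n in mix]
print(f"raw mix cost = {raw_cost}; normalized batch sizes =",
      [round(n, 4) for _, n in unit_mix])
```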
Theory of electron-impact ionization of atoms
NASA Astrophysics Data System (ADS)
Kadyrov, A. S.; Mukhamedzhanov, A. M.; Stelbovics, A. T.; Bray, I.
2004-12-01
The existing formulations of electron-impact ionization of a hydrogenic target suffer from a number of formal problems including an ambiguous and phase-divergent definition of the ionization amplitude. An alternative formulation of the theory is given. An integral representation for the ionization amplitude which is free of ambiguity and divergence problems is derived and is shown to have four alternative, but equivalent, forms well suited for practical calculations. The extension to amplitudes of all possible scattering processes taking place in an arbitrary three-body system follows. A well-defined conventional post form of the breakup amplitude valid for arbitrary potentials including the long-range Coulomb interaction is given. Practical approaches are based on partial-wave expansions, so the formulation is also recast in terms of partial waves and partial-wave expansions of the asymptotic wave functions are presented. In particular, expansions of the asymptotic forms of the total scattering wave function, developed from both the initial and the final state, for electron-impact ionization of hydrogen are given. Finally, the utility of the present formulation is demonstrated on some well-known model problems.
A harmonic analysis of lunar gravity
NASA Technical Reports Server (NTRS)
Bills, B. G.; Ferrari, A. J.
1980-01-01
An improved model of lunar global gravity has been obtained by fitting a sixteenth-degree harmonic series to a combination of Doppler tracking data from Apollo missions 8, 12, 15, and 16 and Lunar Orbiters 1, 2, 3, 4, and 5, and laser ranging data to the lunar surface. To compensate for the irregular selenographic distribution of these data, the solution algorithm has also incorporated a semi-empirical a priori covariance function. Maps of the free-air gravity disturbance and its formal error are presented, as are free-air anomaly and Bouguer anomaly maps. The lunar gravitational variance spectrum has the form V(g; n) = O(n^-4), as do the corresponding terrestrial and martian spectra. The variance spectra of the Bouguer corrections (topography converted to equivalent gravity) for these bodies have the same basic form as the observed gravity; and, in fact, the spectral ratios are nearly constant throughout the observed spectral range for each body. Despite this spectral compatibility, the correlation between gravity and topography is generally quite poor on a global scale.
Boundaries, mirror symmetry, and symplectic duality in 3d N = 4 gauge theory
Bullimore, Mathew; Dimofte, Tudor; Gaiotto, Davide; ...
2016-10-20
We introduce several families of N = (2, 2) UV boundary conditions in 3d N = 4 gauge theories and study their IR images in sigma-models to the Higgs and Coulomb branches. In the presence of Omega deformations, a UV boundary condition defines a pair of modules for quantized algebras of chiral Higgs- and Coulomb-branch operators, respectively, whose structure we derive. In the case of abelian theories, we use the formalism of hyperplane arrangements to make our constructions very explicit, and construct a half-BPS interface that implements the action of 3d mirror symmetry on gauge theories and boundary conditions. Finally, by studying two-dimensional compactifications of 3d N = 4 gauge theories and their boundary conditions, we propose a physical origin for symplectic duality: an equivalence of categories of modules associated to families of Higgs and Coulomb branches that has recently appeared in the mathematics literature and generalizes classic results on Koszul duality in geometric representation theory. We make several predictions about the structure of symplectic duality, and identify Koszul duality as a special case of wall crossing.
Biotic games and cloud experimentation as novel media for biophysics education
NASA Astrophysics Data System (ADS)
Riedel-Kruse, Ingmar; Blikstein, Paulo
2014-03-01
First-hand, open-ended experimentation is key for effective formal and informal biophysics education. We developed, tested and assessed multiple new platforms that enable students and children to directly interact with and learn about microscopic biophysical processes: (1) biotic games that enable local and online play using galvano- and photo-tactic stimulation of micro-swimmers, illustrating concepts such as biased random walks, low-Reynolds-number hydrodynamics, and Brownian motion; (2) an undergraduate course where students learn optics, electronics, micro-fluidics, real-time image analysis, and instrument control by building biotic games; and (3) a graduate class on the biophysics of multi-cellular systems that contains a cloud experimentation lab enabling students to execute open-ended chemotaxis experiments on slime molds online, analyze their data, and build biophysical models. Our work aims to generate the equivalent excitement and educational impact for biophysics that robotics and video games have had for mechatronics and computer science, respectively. We also discuss how scaled-up cloud experimentation systems can support MOOCs with true lab components and life-science research in general.
Software Formal Inspections Guidebook
NASA Technical Reports Server (NTRS)
1993-01-01
The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.
Development of a traffic noise prediction model for an urban environment.
Sharma, Asheesh; Bodhe, G L; Schimak, G
2014-01-01
The objective of this study is to develop a traffic noise model under diverse traffic conditions in metropolitan cities. The model calculates equivalent traffic noise from four input variables: equivalent traffic flow (Q_e), equivalent vehicle speed (S_e), distance (d), and honking (h). The traffic data are collected and statistically analyzed in three different cases over 15-min intervals during morning and evening rush hours. Case I represents congested traffic, where equivalent vehicle speed is <30 km/h; case II represents free-flowing traffic, where equivalent vehicle speed is >30 km/h; and case III represents calm traffic, where no honking is recorded. The model showed better results than noise models previously developed for Indian traffic conditions, and a comparative assessment between the present and earlier models is also presented in the study. The model is validated against measured noise levels, and the correlation coefficients between measured and predicted noise levels were found to be 0.75, 0.83 and 0.86 for cases I, II and III, respectively. The model performs reasonably well under different traffic conditions and could be applied to traffic noise prediction in other regions as well.
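The abstract names the four inputs but not the functional form, so the sketch below assumes a generic log-linear form, L_eq = a·log10(Q_e) + b·S_e + c·log10(d) + e·h + K, and fits it to synthetic 15-min observations by least squares; all coefficients and data are invented for illustration.

```python
import numpy as np

def design_matrix(Qe, Se, d, h):
    # assumed log-linear form: a*log10(Qe) + b*Se + c*log10(d) + e*h + K
    return np.column_stack([np.log10(Qe), Se, np.log10(d), h,
                            np.ones_like(Qe)])

rng = np.random.default_rng(1)                 # synthetic 15-min samples
Qe = rng.uniform(200, 2000, 40)                # equivalent flow [veh/h]
Se = rng.uniform(15, 60, 40)                   # equivalent speed [km/h]
d  = rng.uniform(5, 30, 40)                    # receiver distance [m]
h  = rng.integers(0, 20, 40)                   # honks per interval
Leq = (55 + 10 * np.log10(Qe) + 0.1 * Se - 8 * np.log10(d) + 0.3 * h
       + rng.normal(0, 1, 40))                 # fabricated "measurements"

coef, *_ = np.linalg.lstsq(design_matrix(Qe, Se, d, h), Leq, rcond=None)
print("fitted [a, b, c, e, K]:", np.round(coef, 2))
```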
LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.
Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat
2009-08-01
To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is the clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes and conformance of data instances to archetypes. LinkEHR-Ed, a visual archetype editor based on the former formalization with advanced processing capabilities that supports multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts, is developed. LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model.
Cosmological implications of quantum corrections and higher-derivative extension
NASA Astrophysics Data System (ADS)
Chialva, Diego; Mazumdar, Anupam
2015-02-01
We discuss the challenges for the early universe cosmology from quantum corrections, and in particular higher-derivative terms, in the gravitational and inflaton sectors of the models. The work is divided in two parts. In the first one we review the already well-known issues due to quantum corrections to the inflaton potential, in particular focusing on chaotic/slow-roll single-field models. We will point out some issues concerning the proposed mechanisms to cope with the corrections, and also argue how the presence of higher-derivative corrections could be problematic for those mechanisms. In the second part we will more directly focus on higher-derivative corrections. We will show how, in order to discuss a number of high-energy phenomena relevant to inflation (such as its actual onset) one has to deal with energy scales where the derivative expansion breaks down, presenting problems such as quantum vacuum instability and ghosts. To discuss such phenomena in the convenient framework of the effective theory, one must then abandon the derivative expansion and resort to the full nonlocal formulation of the theory, which is in fact equivalent to re-integrating back the relevant physics, but with the benefit of using a more compact single-field formalism. Finally, we will briefly discuss possible advantages offered by the presence of higher derivatives and a nonlocal theory to build better controlled UV models of inflation.
Allostatic Self-efficacy: A Metacognitive Theory of Dyshomeostasis-Induced Fatigue and Depression.
Stephan, Klaas E; Manjaly, Zina M; Mathys, Christoph D; Weber, Lilian A E; Paliwal, Saee; Gard, Tim; Tittgemeyer, Marc; Fleming, Stephen M; Haker, Helene; Seth, Anil K; Petzschner, Frederike H
2016-01-01
This paper outlines a hierarchical Bayesian framework for interoception, homeostatic/allostatic control, and meta-cognition that connects fatigue and depression to the experience of chronic dyshomeostasis. Specifically, viewing interoception as the inversion of a generative model of viscerosensory inputs allows for a formal definition of dyshomeostasis (as chronically enhanced surprise about bodily signals, or, equivalently, low evidence for the brain's model of bodily states) and allostasis (as a change in prior beliefs or predictions which define setpoints for homeostatic reflex arcs). Critically, we propose that the performance of interoceptive-allostatic circuitry is monitored by a metacognitive layer that updates beliefs about the brain's capacity to successfully regulate bodily states (allostatic self-efficacy). In this framework, fatigue and depression can be understood as sequential responses to the interoceptive experience of dyshomeostasis and the ensuing metacognitive diagnosis of low allostatic self-efficacy. While fatigue might represent an early response with adaptive value (cf. sickness behavior), the experience of chronic dyshomeostasis may trigger a generalized belief of low self-efficacy and lack of control (cf. learned helplessness), resulting in depression. This perspective implies alternative pathophysiological mechanisms that are reflected by differential abnormalities in the effective connectivity of circuits for interoception and allostasis. We discuss suitably extended models of effective connectivity that could distinguish these connectivity patterns in individual patients and may help inform differential diagnosis of fatigue and depression in the future.
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as its description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation, this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Four-body extension of the continuum-discretized coupled-channels method
NASA Astrophysics Data System (ADS)
Descouvemont, P.
2018-06-01
I develop an extension of the continuum-discretized coupled-channels (CDCC) method to reactions where both nuclei present a low breakup threshold. This leads to a four-body model, where the only inputs are the interactions describing the colliding nuclei, and the four optical potentials between the fragments. Once these potentials are chosen, the model does not contain any additional parameter. First I briefly discuss the general formalism, and emphasize the need for dealing with large coupled-channel systems. The method is tested with existing benchmarks on 4 α bound states with the Ali-Bodmer potential. Then I apply the four-body CDCC to the 11Be+d system, where I consider the 10Be(0+,2+)+n configuration for 11Be. I show that breakup channels are crucial to reproduce the elastic cross section, but that core excitation plays a weak role. The 7Li+d system is investigated with an α +t cluster model for 7Li. I show that breakup channels significantly improve the agreement with the experimental cross section, but an additional imaginary term, simulating missing transfer channels, is necessary. The full CDCC results can be interpreted by equivalent potentials. For both systems, the real part is weakly affected by breakup channels, but the imaginary part is strongly modified. I suggest that the present wave functions could be used in future DWBA calculations.
Assessing Measurement Equivalence in Ordered-Categorical Data
ERIC Educational Resources Information Center
Elosua, Paula
2011-01-01
Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…
Lumped-parameters equivalent circuit for condenser microphones modeling.
Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar
2017-10-01
This work presents a lumped parameters equivalent model of condenser microphone based on analogies between acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air gap modeling and to the use of proper boundary condition. Corresponding lumped-parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows a fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.
Managing Analysis Models in the Design Process
NASA Technical Reports Server (NTRS)
Briggs, Clark
2006-01-01
Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacGregor, B.R.; McCoy, A.E.; Wickramasekara, S., E-mail: wickrama@grinnell.edu
2012-09-15
We present a formalism of Galilean quantum mechanics in non-inertial reference frames and discuss its implications for the equivalence principle. This extension of quantum mechanics rests on the Galilean line group, the semidirect product of the real line and the group of analytic functions from the real line to the Euclidean group in three dimensions. This group provides transformations between all inertial and non-inertial reference frames and contains the Galilei group as a subgroup. We construct a certain class of unitary representations of the Galilean line group and show that these representations determine the structure of quantum mechanics in non-inertial reference frames. Our representations of the Galilean line group contain the usual unitary projective representations of the Galilei group, but have a more intricate cocycle structure. The transformation formula for the Hamiltonian under the Galilean line group shows that in a non-inertial reference frame it acquires a fictitious potential energy term that is proportional to the inertial mass, suggesting the equivalence of inertial mass and gravitational mass in quantum mechanics. Highlights: • A formulation of Galilean quantum mechanics in non-inertial reference frames is given. • The key concept is the Galilean line group, an infinite-dimensional group. • Unitary, cocycle representations of the Galilean line group are constructed. • A non-central extension of the group underlies these representations. • The quantum equivalence principle and gravity emerge from these representations.
On the subsystem formulation of linear-response time-dependent DFT.
Pavanello, Michele
2013-05-28
A new and thorough derivation of linear-response subsystem time-dependent density functional theory (TD-DFT) is presented and analyzed in detail. Two equivalent derivations are presented and naturally yield self-consistent subsystem TD-DFT equations. One reduces to the subsystem TD-DFT formalism of Neugebauer [J. Chem. Phys. 126, 134116 (2007)]. The other yields Dyson type equations involving three types of subsystem response functions: coupled, uncoupled, and Kohn-Sham. The Dyson type equations for subsystem TD-DFT are derived here for the first time. The response function formalism reveals previously hidden qualities and complications of subsystem TD-DFT compared with the regular TD-DFT of the supersystem. For example, analysis of the pole structure of the subsystem response functions shows that each function contains information about the electronic spectrum of the entire supersystem. In addition, comparison of the subsystem and supersystem response functions shows that, while the correlated response is subsystem additive, the Kohn-Sham response is not. Comparison with the non-subjective partition DFT theory shows that this non-additivity is largely an artifact introduced by the subjective nature of the density partitioning in subsystem DFT.
Physically motivated correlation formalism in hyperspectral imaging
NASA Astrophysics Data System (ADS)
Roy, Ankita; Rafert, J. Bruce
2004-05-01
Most remote sensing data sets contain a limited number of independent spatial and spectral measurements, beyond which no effective increase in information is achieved. This paper presents a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel, such that optimal combinations of independent data can be selected from the entire hypercube via the method of "Correlation Moments". We present an experimental and computational analysis of hyperspectral data sets using the Michigan Tech VFTHSI [Visible Fourier Transform Hyperspectral Imager], based on a Sagnac interferometer and adjusted to obtain high SNR levels. The captured signal interferograms of different targets - aerial snaps of Houghton and lab-based data (white light, He-Ne laser, discharge tube sources), with the provision of customized scans of targets at the same exposures - are processed using inverse imaging transformations and filtering techniques to obtain the spectral profiles and generate hypercubes from which spectral, spatial, and cross moments are computed. PMCF answers the question of how optimally the entire hypercube should be sampled, and finds how many spatial-spectral pixels are required for a particular target-recognition task.
Some novel features in 2D non-Abelian theory: BRST approach
NASA Astrophysics Data System (ADS)
Srinivas, N.; Kumar, S.; Kureel, B. K.; Malik, R. P.
2017-08-01
Within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism, we discuss some novel features of a two (1+1)-dimensional (2D) non-Abelian 1-form gauge theory (without any interaction with matter fields). Besides the usual off-shell nilpotent and absolutely anticommuting (anti-)BRST symmetry transformations, we discuss the off-shell nilpotent and absolutely anticommuting (anti-)co-BRST symmetry transformations. In particular, we lay emphasis on the existence of the coupled (but equivalent) Lagrangian densities of the 2D non-Abelian theory in view of the presence of (anti-)co-BRST symmetry transformations, where we pinpoint some novel features associated with the Curci-Ferrari (CF)-type restrictions. We demonstrate that these CF-type restrictions can be incorporated into the (anti-)co-BRST invariant Lagrangian densities through the fermionic Lagrange multipliers, which carry specific ghost numbers. The modified versions of the Lagrangian densities (where we get rid of the new CF-type restrictions) respect some precise symmetries as well as a couple of symmetries with CF-type constraints. These observations are completely novel as far as the BRST formalism, with proper (anti-)co-BRST symmetries, is concerned.
Fractal: An Educational Model for the Convergence of Formal and Non-Formal Education
ERIC Educational Resources Information Center
Enríquez, Larisa
2017-01-01
For the last two decades, different authors have mentioned the need to have new pedagogies that respond better to current times, which are surrounded by a complex set of issues such as mobility, interculturality, curricular flexibility, accreditation and academic coverage. Fractal is an educational model proposal for online learning that is formed…
Teachers Learning to Use the iPad in Scotland and Wales: A New Model of Professional Development
ERIC Educational Resources Information Center
Beauchamp, Gary; Burden, Kevin; Abbinett, Emily
2015-01-01
In learning to use a new technology like the iPad, primary teachers adopt a diverse range of experiential, informal and playful strategies contrasting sharply with traditional models underpinning professional development which emphasise formal courses and events led by "experts" conducted in formal settings such as the school. Since…
Symbolic discrete event system specification
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Chi, Sungdo
1992-01-01
Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
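To make the symbolic-time idea concrete, the following is a minimal sketch (our own illustration, not the authors' implementation) of the consistency check the abstract describes: event times are linear polynomials in symbolic parameters, each branch of the trajectory tree accumulates ordering constraints, and joint feasibility is decided with a linear-programming solver. The helper names and the toy example are invented.

```python
import numpy as np
from scipy.optimize import linprog

def feasible(constraints, n_vars, eps=1e-9):
    """Each constraint is (coeffs, const), encoding coeffs . x + const <= 0.
    Returns True if some assignment of the symbolic parameters satisfies all."""
    A = np.array([cf for cf, _ in constraints], dtype=float)
    b = np.array([-k - eps for _, k in constraints], dtype=float)  # eps approximates strictness
    # Any feasible point will do, so minimise the zero objective.
    res = linprog(c=np.zeros(n_vars), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n_vars, method="highs")
    return res.status == 0

# Toy example: events at symbolic times t1 = 2 + x and t2 = 5 - x.
# The branch "t1 fires first" adds (2 + x) - (5 - x) <= 0, i.e. 2x - 3 <= 0.
branch_t1_first = [(np.array([2.0]), -3.0)]
print(feasible(branch_t1_first, n_vars=1))                              # True (any x <= 1.5)
# Adding the constraint x >= 2 (encoded as -x + 2 <= 0) kills the branch.
print(feasible(branch_t1_first + [(np.array([-1.0]), 2.0)], n_vars=1))  # False
```

Each infeasible branch can be pruned from the simulation's branching trajectory tree, which is the role the abstract assigns to the LP-based consistency checker.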
Approaches to education provision for mobile pastoralists.
Dyer, C
2016-11-01
Experiences of mobile pastoralists often attest to a wide range of contradictions about the presumed advantages of formal education. While effort to 'reach' pastoralists has intensified under the global Education for All movement, there remain considerable difficulties in finding ways to make formal education relate to pastoralist livelihoods and complement endogenous knowledge. This paper examines how these dynamics play out across models of formal and non-formal education service provision, and identifies innovations that offer promising ways forward: Alternative Basic Education, Open and Distance Learning, and Pastoralist Field Schools.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Applications of a formal approach to decipher discrete genetic networks.
Corblin, Fabien; Fanchon, Eric; Trilling, Laurent
2010-07-20
A growing demand exists in systems biology for tools to assist the building and analysis of biological networks. We argue that the use of a formal approach is relevant and applicable to address questions raised by biologists about such networks. Since the behaviour of these systems is complex, it is essential to exploit efficiently every bit of experimental information. In our approach, both the evolution rules and the partial knowledge about the structure and the behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed allows us to address each application specifically through a new class of biologically relevant queries. We show that we can describe easily, and in a formal manner, the partial knowledge about a genetic network. Moreover, we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities going beyond simple simulations, such as proof of consistency, model revision, prediction of properties, and search for minimal models relative to specified criteria. The formal approach proposed here deeply changes the way to proceed in the exploration of genetic and biochemical networks, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions rather than on a single solution arbitrarily chosen among many others. Last, the constraint approach promotes an integration of model and experimental data in a single framework.
Duality constructions from quantum state manifolds
NASA Astrophysics Data System (ADS)
Kriel, J. N.; van Zyl, H. J. R.; Scholtz, F. G.
2015-11-01
The formalism of quantum state space geometry on manifolds of generalised coherent states is proposed as a natural setting for the construction of geometric dual descriptions of non-relativistic quantum systems. These state manifolds are equipped with natural Riemannian and symplectic structures derived from the Hilbert space inner product. This approach allows for the systematic construction of geometries which reflect the dynamical symmetries of the quantum system under consideration. We analyse here in detail the two-dimensional case and demonstrate how existing results in the AdS2/CFT1 context can be understood within this framework. We show how the radial/bulk coordinate emerges as an energy scale associated with a regularisation procedure and find that, under quite general conditions, these state manifolds are asymptotically anti-de Sitter solutions of a class of classical dilaton gravity models. For the model of conformal quantum mechanics proposed by de Alfaro et al. [1] the corresponding state manifold is seen to be exactly AdS2 with a scalar curvature determined by the representation of the symmetry algebra. It is also shown that the dilaton field itself is given by the quantum mechanical expectation values of the dynamical symmetry generators and as a result exhibits dynamics equivalent to that of a conformal mechanical system.
NASA Astrophysics Data System (ADS)
Mao, Xiaochen; McKinnon, William B.
2018-01-01
We show that Ceres' measured degree-2 zonal gravity, J2, is smaller by about 10% than that derived by assuming that Ceres' rotational flattening, as measured by Dawn, is hydrostatic. Irrespective of Ceres' radial density variation, as long as its internal structure is hydrostatic, the J2 predicted from the shape model is consistently larger than measured. As an explanation, we suggest that Ceres' current shape may be a fossil remnant of faster rotation in the geologic past. We propose that up to ∼7% of Ceres' previous spin angular momentum has been removed by dynamic perturbations, such as a random walk due to impacts, or by the loss of a satellite that slowed Ceres' spin as it tidally evolved outward. As an alternative, we also consider a formal degree-2 admittance solution, from which we infer a range of possible non-hydrostatic contributions to J2 from uncompensated, deep-seated density anomalies. We show that such density anomalies could be due to low-order convection or upwelling. The normalized moments of inertia derived for the two explanations - faster paleospin and deep-seated density anomalies - range between 0.353 ± 0.009 and 0.375 ± 0.001 for a spherically equivalent Ceres, and can be used as constraints on more complex Ceres interior models.
Coherent states field theory in supramolecular polymer physics
NASA Astrophysics Data System (ADS)
Fredrickson, Glenn H.; Delaney, Kris T.
2018-05-01
In 1970, Edwards and Freed presented an elegant representation of interacting branched polymers that resembles the coherent states (CS) formulation of second-quantized field theory. This CS polymer field theory has been largely overlooked during the intervening period in favor of more conventional "auxiliary field" (AF) interacting polymer representations that form the basis of modern self-consistent field theory (SCFT) and field-theoretic simulation approaches. Here we argue that the CS representation provides a simpler and computationally more efficient framework than the AF approach for broad classes of reversibly bonding polymers encountered in supramolecular polymer science. The CS formalism is reviewed, initially for a simple homopolymer solution, and then extended to supramolecular polymers capable of forming reversible linkages and networks. In the context of the Edwards model of a non-reacting homopolymer solution and one and two-component models of telechelic reacting polymers, we discuss the structure of CS mean-field theory, including the equivalence to SCFT, and show how weak-amplitude expansions (random phase approximations) can be readily developed without explicit enumeration of all reaction products in a mixture. We further illustrate how to analyze CS field theories beyond SCFT at the level of Gaussian field fluctuations and provide a perspective on direct numerical simulations using a recently developed complex Langevin technique.
Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism
NASA Astrophysics Data System (ADS)
Parish, Eric; Duraisamy, Karthik
2017-11-01
The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project "LES Modeling of Non-local effects using Statistical Coarse-graining" with Dr. Jean-Luc Cambier as the technical monitor.
Formalization of an environmental model using formal concept analysis - FCA
NASA Astrophysics Data System (ADS)
Bourdon-García, Rubén D.; Burgos-Salcedo, Javier D.
2016-08-01
Nowadays, there is a huge necessity to generate novel strategies for analysing social-ecological systems in order to resolve global sustainability problems. The main purpose of this paper is to apply formal concept analysis to formalize the theory of Augusto Ángel Maya, who was, without a doubt, one of the most important environmental philosophers in South America; Ángel Maya proposed and established that Ecosystem-Culture relations, instead of Human-Nature ones, are determinant in our understanding and management of natural resources. On this basis, a concept lattice, formal concepts, subconcept-superconcept relations, partially ordered sets, the supremum and infimum of the lattice, and implications between attributes (the Duquenne-Guigues base) were determined for the ecosystem-culture relations.
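As an illustration of the machinery involved, the following sketch derives all formal concepts of a toy context using the standard extent-intent Galois connection. The objects, attributes, and incidence relation are invented for illustration and are not taken from Ángel Maya's theory.

```python
# A formal concept is a pair (extent, intent) closed under the derivation
# operators: the intent is every attribute shared by the extent's objects,
# and the extent is every object having all of the intent's attributes.
from itertools import chain, combinations

objects = ["wetland", "farmland", "city"]
attributes = ["ecosystem", "transformed", "cultural"]
incidence = {
    ("wetland", "ecosystem"), ("farmland", "ecosystem"),
    ("farmland", "transformed"), ("farmland", "cultural"),
    ("city", "transformed"), ("city", "cultural"),
}

def common_attrs(objs):
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objs(attrs):
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# Enumerate closures of every object subset; duplicates collapse in the set.
concepts = set()
for objs in chain.from_iterable(combinations(objects, r)
                                for r in range(len(objects) + 1)):
    intent = common_attrs(set(objs))
    extent = common_objs(intent)
    concepts.add((frozenset(extent), frozenset(intent)))

# The subconcept-superconcept order is extent inclusion; print bottom-up.
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```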
DOE Office of Scientific and Technical Information (OSTI.GOV)
A, Popescu I; Lobo, J; Sawkey, D
2014-06-15
Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; improve agreement between measured and simulated relative output factors. Methods: The 'monitor backscatter factor' is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available, manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, the number of electrons incident on the target for one monitor unit, and MC absolute dose was analyzed for open fields, as well as for a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated in the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.
Lidar cross-sections of soot fractal aggregates: Assessment of equivalent-sphere models
NASA Astrophysics Data System (ADS)
Ceolato, Romain; Gaudfrin, Florian; Pujol, Olivier; Riviere, Nicolas; Berg, Matthew J.; Sorensen, Christopher M.
2018-06-01
This work assesses the ability of equivalent-sphere models to reproduce the optical properties of soot aggregates relevant for lidar remote sensing, i.e. the backscattering and extinction cross sections. Lidar cross-sections are computed with a spectral discrete dipole approximation model over the visible-to-infrared (400-5000 nm) spectrum and compared with equivalent-sphere approximations. It is shown that the equivalent-sphere approximation, applied to fractal aggregates, has a limited ability to calculate such cross-sections well. The approximation should thus be used with caution for the computation of broadband lidar cross-sections, especially backscattering, at small and intermediate wavelengths (e.g. UV to visible).
Verifying Hybrid Systems Modeled as Timed Automata: A Case Study
1997-03-01
Introduction: Researchers have proposed many innovative formal methods for developing real-time systems [9]. Such methods can give system developers and ... customers greater confidence that real-time systems satisfy their requirements, especially their critical requirements. However, applying formal methods ... specifying and reasoning about real-time systems that is designed to address these challenging problems. Our approach is to build formal reasoning tools
ERIC Educational Resources Information Center
Virkkula, Esa
2016-01-01
The present article will examine informal learning in popular and jazz music education in Finland and evaluate it as a part of formal upper secondary vocational musicians' training, which is typically teacher directed. It is not necessarily the best model of working in popular and jazz music learning, which has traditionally benefitted from…
The COUNSELOR Project: Understanding Legal Argument.
1986-01-01
utilize is one presented by Stephen Toulmin [Toulmin 58]. The Toulmin model is one of the most widely accepted formalizations in existence as it is ... allows analysis and criticism of propositions to occur at several levels. Argument, as seen by Toulmin, is defined as "movement from ... optional pieces of the Toulmin model. It is these features that allow the model a great deal of flexibility and give it advantages over other formalisms
Formal modeling of a system of chemical reactions under uncertainty.
Ghosh, Krishnendu; Schlipf, John
2014-10-01
We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for the construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of the extracellular signal-regulated kinase (ERK) pathway.
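The abstract does not detail the algorithms, but the flavour of the interval approximation can be sketched as follows: concentrations and rate constants are kept as (lo, hi) intervals, and a single explicit Euler step of the reaction A → B propagates the uncertainty conservatively. The midpoint approximation would instead run the same step on interval midpoints. Names and numbers below are our own assumptions.

```python
def imul(x, y):
    """Interval multiplication: take the extremes of all endpoint products."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

def euler_step(A, B, k, dt):
    """A -> B with uncertain rate k; all quantities are (lo, hi) intervals."""
    flux = imul(k, A)                                  # reaction flux interval k*[A]
    A_next = (A[0] - dt * flux[1], A[1] - dt * flux[0])  # subtract the worst case
    B_next = (B[0] + dt * flux[0], B[1] + dt * flux[1])
    return A_next, B_next

A, B = (0.9, 1.1), (0.0, 0.0)    # imprecise initial concentrations
k = (0.4, 0.6)                   # imprecise rate constant
for _ in range(3):
    A, B = euler_step(A, B, k, dt=0.1)
print(A, B)                      # interval enclosures of reachable concentrations
```

CTL queries of the kind the authors mention would then be posed over the finite abstraction induced by such enclosures rather than over single trajectories.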
An Ontology for State Analysis: Formalizing the Mapping to SysML
NASA Technical Reports Server (NTRS)
Wagner, David A.; Bennett, Matthew B.; Karban, Robert; Rouquette, Nicolas; Jenkins, Steven; Ingham, Michel
2012-01-01
State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To apply the State Analysis methodology effectively in this context, it became necessary to provide ontological definitions of the concepts and relations in State Analysis and to gain greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics, including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper discusses the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML, and the approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.
2013-01-01
Objectives To establish the current state of knowledge on the effect of surgical simulation on the development of technical competence during surgical training. Methods Using a defined search strategy, the medical and educational literature was searched to identify empirical research that uses simulation as an educational intervention with surgical trainees. Included studies were analysed according to guidelines adapted from a Best Evidence in Medical Education review. Results A total of 32 studies were analysed, across 5 main categories of surgical simulation technique - use of bench models and box trainers (9 studies); Virtual Reality (14 studies); human cadavers (4 studies); animal models (2 studies) and robotics (3 studies). An improvement in technical skill was seen within the simulated environment across all five categories. This improvement was seen to transfer to the real patient in the operating room in all categories except the use of animals. Conclusions Based on current evidence, surgical trainees should be confident in the effects of using simulation, and should have access to formal, structured simulation as part of their training. Surgical simulation should incorporate the use of bench models and box trainers, with the use of Virtual Reality where resources allow. Alternatives to cadaveric and animal models should be considered due to the ethical and moral issues surrounding their use, and due to their equivalency with other simulation techniques. However, any use of surgical simulation must be tailored to the individual needs of trainees, and should be accompanied by feedback from expert tutors.
NASA Astrophysics Data System (ADS)
Kobayashi, Kiyoshi; Suzuki, Tohru S.
2018-03-01
A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
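The parameter-optimization step can be illustrated with a minimal complex least-squares fit. The circuit below (a series resistor plus one parallel RC element) and the synthetic data are our own assumptions for illustration, not the paper's algorithm, which additionally automates the choice of circuit topology.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, w):
    """Impedance of Rs in series with (Rp parallel C): Z = Rs + Rp/(1 + j*w*Rp*C)."""
    Rs, Rp, C = params
    return Rs + Rp / (1 + 1j * w * Rp * C)

def residuals(params, w, z_meas):
    r = model(params, w) - z_meas
    return np.concatenate([r.real, r.imag])   # stack Re and Im for a real-valued solver

# Synthetic "measured" spectrum from known values, with a little noise.
rng = np.random.default_rng(0)
w = np.logspace(0, 6, 50)                     # angular frequencies, rad/s
z_true = model([10.0, 100.0, 1e-6], w)
z_meas = z_true + rng.normal(0, 0.1, w.size) + 1j * rng.normal(0, 0.1, w.size)

fit = least_squares(residuals, x0=[1.0, 1.0, 1e-7], args=(w, z_meas),
                    bounds=([0, 0, 0], np.inf))
print(fit.x)   # recovered (Rs, Rp, C), close to (10, 100, 1e-6)
```

In the paper's scheme, a fit of this kind is wrapped in a loop that adds or removes partial-impedance elements until the residual criterion judged by the simple artificial-intelligence function is satisfied.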
Nekrasov and Argyres-Douglas theories in spherical Hecke algebra representation
NASA Astrophysics Data System (ADS)
Rim, Chaiho; Zhang, Hong
2017-06-01
The AGT conjecture connects the Nekrasov instanton partition function of 4D quiver gauge theory with 2D Liouville conformal blocks. We re-investigate this connection using the central extension of the spherical Hecke algebra in the q-coordinate representation, q being the instanton expansion parameter. Based on the AFLT basis together with intertwiners, we construct the gauge conformal state and demonstrate its equivalence to the Liouville conformal state, with careful attention to the proper scaling behavior of the state. Using the colliding limit of regular states, we obtain the formal expression of irregular conformal states corresponding to Argyres-Douglas theory, which involves a summation of functions over Young diagrams.
White-light parametric instabilities in plasmas.
Santos, J E; Silva, L O; Bingham, R
2007-06-08
Parametric instabilities driven by partially coherent radiation in plasmas are described by a generalized statistical Wigner-Moyal set of equations, formally equivalent to the full wave equation, coupled to the plasma fluid equations. A generalized dispersion relation for stimulated Raman scattering driven by a partially coherent pump field is derived, revealing a growth rate dependence on the coherence width σ of the radiation field that scales as 1/σ for backscattering (a three-wave process) and as 1/σ^(1/2) for direct forward scattering (a four-wave process). Our results demonstrate the possibility to control the growth rates of these instabilities by properly using broadband pump radiation fields.
Symplectic analysis of three-dimensional Abelian topological gravity
NASA Astrophysics Data System (ADS)
Cartas-Fuentevilla, R.; Escalante, Alberto; Herrera-Aguilar, Alfredo
2017-02-01
A detailed Faddeev-Jackiw quantization of an Abelian topological gravity is performed; we show that this formalism is equivalent to, and more economical than, Dirac's method. In particular, we identify the complete set of constraints of the theory, from which the number of physical degrees of freedom is explicitly computed. We prove that the generalized Faddeev-Jackiw brackets and the Dirac ones coincide with each other. Moreover, we perform the Faddeev-Jackiw analysis of the theory at the chiral point, where the full set of constraints and the generalized Faddeev-Jackiw brackets are constructed. Finally, we compare our results with those found in the literature and discuss some remarks and prospects.
A note on closed-string interactions à la Witten
NASA Astrophysics Data System (ADS)
Romans, L. J.
1987-08-01
We consider the problem of formulating a field theory of interacting closed strings analogous to Witten's open-string field theory. Two natural candidates have been suggested for an off-shell three-string interaction vertex: one scheme involves a cyclic geometric overlap in spacetime, while the other is obtained by "stuttering" the Fock-space realization of the open-string vertex. We demonstrate that these two approaches are in fact equivalent, utilizing the operator formalism as developed to describe Witten's theory. Implications of this result for the construction of closed-string theories are briefly discussed.
Identification of Low Order Equivalent System Models From Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
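A minimal sketch of frequency-domain output-error estimation in this spirit is shown below; the short-period-like transfer function with an equivalent time delay and all numeric values are invented for illustration, not taken from the HARV flight data.

```python
import numpy as np
from scipy.optimize import least_squares

def loes(params, w):
    """Low order equivalent system: K*(s + 1/T)*exp(-tau*s) / (s^2 + 2*z*wn*s + wn^2)."""
    K, T_inv, z, wn, tau = params
    s = 1j * w
    return K * (s + T_inv) * np.exp(-tau * s) / (s**2 + 2 * z * wn * s + wn**2)

def cost(params, w, H_meas):
    """Output error in the frequency domain, split into real and imaginary parts."""
    e = loes(params, w) - H_meas
    return np.concatenate([e.real, e.imag])

w = np.linspace(0.3, 12.0, 60)                        # rad/s, a typical LOES band
H_meas = loes([-4.0, 1.2, 0.5, 3.0, 0.05], w)         # synthetic "flight" response

fit = least_squares(cost, x0=[-1.0, 1.0, 0.3, 2.0, 0.0], args=(w, H_meas))
print(np.round(fit.x, 3))   # recovers the generating (K, 1/T, zeta, wn, tau)
```

The equivalent time delay tau is the standard device for lumping the high-order flight control system dynamics into a low-order model, which is why closed-loop responses can be fit at all.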
Folk Theorems on the Correspondence between State-Based and Event-Based Systems
NASA Astrophysics Data System (ADS)
Reniers, Michel A.; Willemse, Tim A. C.
Kripke Structures and Labelled Transition Systems are the two most prominent semantic models used in concurrency theory. Both models are commonly believed to be equi-expressive. One can find many ad-hoc embeddings of one of these models into the other. We build upon the seminal work of De Nicola and Vaandrager that firmly established the correspondence between stuttering equivalence in Kripke Structures and divergence-sensitive branching bisimulation in Labelled Transition Systems. We show that their embeddings can also be used for a range of other equivalences of interest, such as strong bisimilarity, simulation equivalence, and trace equivalence. Furthermore, we extend the results by De Nicola and Vaandrager by showing that there are additional translations that allow one to use minimisation techniques in one semantic domain to obtain minimal representatives in the other semantic domain for these equivalences.
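One classic embedding in this spirit (a sketch, not De Nicola and Vaandrager's exact construction) turns each labelled transition into a fresh intermediate state carrying the action label, so that state-based notions can be applied to event-based models:

```python
# Embed a Labelled Transition System into a Kripke Structure: every transition
# s -a-> t becomes s -> <s,a,t> -> t, where the fresh middle state is labelled {a}
# and original states keep an empty label set.
def lts_to_kripke(states, transitions):
    """transitions: iterable of (source, action, target) triples."""
    ks_labels = {s: set() for s in states}        # original states, empty label
    ks_trans = set()
    for i, (s, a, t) in enumerate(transitions):
        mid = f"<{s},{a},{t}>#{i}"                # fresh state carrying the action
        ks_labels[mid] = {a}
        ks_trans.add((s, mid))
        ks_trans.add((mid, t))
    return ks_labels, ks_trans

labels, steps = lts_to_kripke(
    states={"s0", "s1"},
    transitions=[("s0", "coin", "s1"), ("s1", "coffee", "s0")],
)
print(labels)   # {'s0': set(), 's1': set(), '<s0,coin,s1>#0': {'coin'}, ...}
print(steps)
```

The point of the paper is that translations of this shape preserve enough structure that equivalences (and minimisations) computed on one side transfer to the other.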
ERIC Educational Resources Information Center
Santally, Mohammad Issack; Cooshna-Naik, Dorothy; Conruyt, Noel; Wing, Caroline Koa
2015-01-01
This paper describes a social partnership model based on the living lab concept to promote the professional development of educators through formal and informal capacity-building initiatives. The aim is to have a broader impact on society through community outreach educational initiatives. A Living Lab is an environment for user-centered…
Protege Career Aspirations: The Influence of Formal E-Mentor Networks and Family-Based Role Models
ERIC Educational Resources Information Center
DiRenzo, Marco S.; Weer, Christy H.; Linnehan, Frank
2013-01-01
Using longitudinal data from a nine-month e-mentoring program, we analyzed the influence of formal e-mentor networks and family-based role models on increases in both psychosocial and career-related outcomes. Findings indicate that e-mentor network relationship quality positively influenced general- and career-based self-efficacy which, in turn,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Conti, C.; Barbero, C.; Galeão, A. P.
In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of ^5_ΛHe, ^12_ΛC and ^13_ΛC using a formalism based on the independent-particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions.
Formal Semantics and Implementation of BPMN 2.0 Inclusive Gateways
NASA Astrophysics Data System (ADS)
Christiansen, David Raymond; Carbone, Marco; Hildebrandt, Thomas
We present the first direct formalization of the semantics of inclusive gateways as described in the Business Process Modeling Notation (BPMN) 2.0 Beta 1 specification. The formal semantics is given for a minimal subset of BPMN 2.0 containing just the inclusive and exclusive gateways and the start and stop events. By focusing on this subset we achieve a simple graph model that highlights the particular non-local features of the inclusive gateway semantics. We sketch two ways of implementing the semantics using algorithms based on incrementally updated data structures and also discuss distributed communication-based implementations of the two algorithms.
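The non-local flavour of the inclusive join can be sketched directly: the join may fire only when at least one incoming edge holds a token and no token elsewhere in the graph can still reach an empty incoming edge. The graph encoding and helper names below are our own, and tokens are placed on nodes rather than sequence flows for brevity.

```python
def reachable(edges, start):
    """All nodes reachable from start by following directed edges."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(t for s, t in edges if s == n)
    return seen

def inclusive_join_enabled(edges, incoming, tokens):
    """edges: set of (src, dst); incoming: nodes feeding the join;
    tokens: set of node positions currently holding a token."""
    if not any(t in incoming for t in tokens):
        return False
    empty = [n for n in incoming if n not in tokens]
    # A token outside the filled inputs blocks firing if it can still
    # reach an empty incoming edge -- this is the non-local check.
    return not any(e in reachable(edges, t)
                   for t in tokens if t not in incoming
                   for e in empty)

edges = {("start", "a"), ("start", "b"), ("a", "join"), ("b", "join")}
print(inclusive_join_enabled(edges, incoming={"a", "b"}, tokens={"a"}))           # True
print(inclusive_join_enabled(edges, incoming={"a", "b"}, tokens={"a", "start"}))  # False
```

The second call returns False because the token at "start" can still flow to "b"; an incrementally maintained reachability structure, as the abstract suggests, avoids recomputing this from scratch at every step.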
Systems engineering principles for the design of biomedical signal processing systems.
Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo
2011-06-01
Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation, and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation.
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time and effort of distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, in which topographic and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; soil type therefore needs to be weighted in a systematic manner when formulating equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, the middle of the hillslope and the ridge line, three equivalent cross-sections (left bank, right bank and headwater) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required, based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by applying different weighting methods to the topographic and physiographic variables of landforms within the entire hillslope or part of it, as shown in the sketch below. The formulated equivalent cross-sections are then simulated using a two-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the same two-dimensional, Richards' equation based distributed hydrological model, and the simulated fluxes are multiplied by the contributing area of each cross-section to obtain the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga Wagga experimental catchment. Our results show that the simulated fluxes using the equivalent cross-section approach are very close to the reference fluxes, whereas the computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. Transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with in-situ soil moisture observations in the Wagga Wagga experimental catchment in NSW, and the results are found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach offers great potential for the implementation of distributed hydrological models at regional scales.
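The aggregation idea can be sketched as follows; the field names, weighting choices, and numbers are illustrative assumptions, not the authors' exact scheme:

```python
# Group hillslope cross-sections by soil type, then collapse each group into
# one "equivalent" cross-section whose topographic properties are area-weighted
# averages. Simulated fluxes are later scaled by each group's total area.
def equivalent_cross_sections(sections):
    """sections: list of dicts with keys soil, area, length, slope, soil_depth."""
    groups = {}
    for s in sections:
        groups.setdefault(s["soil"], []).append(s)
    result = []
    for soil, grp in groups.items():
        area = sum(s["area"] for s in grp)
        def wavg(key):
            return sum(s[key] * s["area"] for s in grp) / area
        result.append({"soil": soil, "area": area,
                       "length": wavg("length"), "slope": wavg("slope"),
                       "soil_depth": wavg("soil_depth")})
    return result

sections = [
    {"soil": "clay", "area": 2.0, "length": 120, "slope": 0.10, "soil_depth": 1.2},
    {"soil": "clay", "area": 1.0, "length": 150, "slope": 0.14, "soil_depth": 0.9},
    {"soil": "loam", "area": 3.0, "length": 100, "slope": 0.08, "soil_depth": 1.5},
]
for xs in equivalent_cross_sections(sections):
    print(xs)   # one equivalent section per soil type; total flux = unit flux * area
```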
Wang, C K; Nelson, C F; Brinkman, A M; Miller, A C; Hoeffler, W K
2000-04-01
We show that an inherent ability of two distinct cell types, keratinocytes and fibroblasts, can be relied upon to accurately reconstitute full-thickness human skin including the dermal-epidermal junction by a cell-sorting mechanism. A cell slurry containing both cell types added to silicone chambers implanted on the backs of severe combined immunodeficient mice sorts out to reconstitute a clearly defined dermis and stratified epidermis within 2 wk, forming a cell-sorted skin equivalent. Immunostaining of the cell-sorted skin equivalent with human cell markers showed patterns similar to those of normal full-thickness skin. We compared the cell-sorted skin equivalent model with a composite skin model also made on severe combined immunodeficient mice. The composite grafts were constructed from partially differentiated keratinocyte sheets placed on top of a dermal equivalent constructed of devitalized dermis. Electron microscopy revealed that both models formed ample numbers of normal appearing hemidesmosomes. The cell-sorted skin equivalent model, however, had greater numbers of keratin intermediate filaments within the basal keratinocytes that connected to hemidesmosomes, and on the dermal side both collagen filaments and anchoring fibril connections to the lamina densa were more numerous compared with the composite model. Our results may provide some insight into why, in clinical applications for treating burns and other wounds, composite grafts may exhibit surface instability and blistering for up to a year following grafting, and suggest the possible usefulness of the cell-sorted skin equivalent in future grafting applications.
Application of Lightweight Formal Methods to Software Security
NASA Technical Reports Server (NTRS)
Gilliam, David P.; Powell, John D.; Bishop, Matt
2005-01-01
Formal specification and verification of security has proven a challenging task. There is no single method that has proven feasible. Instead, an integrated approach which combines several formal techniques can increase the confidence in the verification of software security properties. Such an approach, which specifies security properties in a library that can be reused, and two instruments and their methodologies developed for the National Aeronautics and Space Administration (NASA) at the Jet Propulsion Laboratory (JPL), are described herein. The Flexible Modeling Framework (FMF) is a model-based verification instrument that uses Promela and the SPIN model checker. The Property Based Tester (PBT) uses TASPEC and a Text Execution Monitor (TEM). They are used to reduce vulnerabilities and unwanted exposures in software during the development and maintenance life cycles.
Formal implementation of a performance evaluation model for the face recognition system.
Shin, Yong-Nyuo; Kim, Jason; Lee, Yong-Jun; Shin, Woochang; Choi, Jin-Young
2008-01-01
Due to its usability features, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. Reported recognition rates of commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that are beneficial to the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed evaluations objectively by providing guidelines for the design and implementation of a performance evaluation system, formalizing the performance test process.
Electrothermal Equivalent Three-Dimensional Finite-Element Model of a Single Neuron.
Cinelli, Ilaria; Destrade, Michel; Duffy, Maeve; McHugh, Peter
2018-06-01
We propose a novel approach for modelling the interdependence of electrical and mechanical phenomena in nervous cells, by using electrothermal equivalences in finite element (FE) analysis so that existing thermomechanical tools can be applied. First, the equivalence between electrical and thermal properties of the nerve materials is established, and results of a pure heat conduction analysis performed in Abaqus CAE Software 6.13-3 are validated with analytical solutions for a range of steady and transient conditions. This validation includes the definition of equivalent active membrane properties that enable prediction of the action potential. Then, as a step toward fully coupled models, electromechanical coupling is implemented through the definition of equivalent piezoelectric properties of the nerve membrane using the thermal expansion coefficient, enabling prediction of the mechanical response of the nerve to the action potential. Results of the coupled electromechanical model are validated with previously published experimental results of deformation for squid giant axon, crab nerve fibre, and garfish olfactory nerve fibre. A simplified coupled electromechanical modelling approach is established through an electrothermal equivalent FE model of a nervous cell for biomedical applications. One of the key findings is the mechanical characterization of the neural activity in a coupled electromechanical domain, which provides insights into the electromechanical behaviour of nervous cells, such as thinning of the membrane. This is a first step toward modelling three-dimensional electromechanical alteration induced by trauma at nerve bundle, tissue, and organ levels.
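The electrothermal equivalence can be illustrated with a toy example: the passive cable equation has the same mathematical form as a 1D heat equation with a loss term, so a "thermal" finite-difference update doubles as a membrane-voltage solver. The parameter values and the equivalence mapping below are illustrative assumptions only, not the calibrated properties used in the paper's Abaqus model.

```python
# Passive cable equation  c_m dV/dt = kappa d2V/dx2 - g_m V  solved as a heat
# equation: V ~ temperature, c_m ~ heat capacity, kappa ~ thermal conductivity,
# g_m ~ convective loss coefficient. Explicit FTCS scheme (stability ratio
# kappa*dt/(c_m*dx^2) = 0.4 <= 0.5).
import numpy as np

n, dx, dt = 100, 1e-4, 4e-7        # grid points, spacing (m), time step (s)
c_m   = 1e-2                       # membrane capacitance ~ volumetric heat capacity
kappa = 1e-4                       # axial conductance term ~ thermal conductivity
g_m   = 1.0                        # leak conductance ~ heat-loss coefficient

V = np.zeros(n)                    # membrane voltage ~ temperature field
for _ in range(50_000):
    lap = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    V[1:-1] += dt / c_m * (kappa * lap - g_m * V[1:-1])
    V[0], V[-1] = 1.0, V[-2]       # clamped end (Dirichlet), sealed end (no flux)

# Steady profile decays over the length constant sqrt(kappa/g_m) = 0.01 m = 100*dx.
print(V[:10].round(3))
```

The same substitution trick is what lets a thermomechanical FE package predict voltage fields, and, via the thermal expansion coefficient standing in for piezoelectric coupling, the mechanical deformation that accompanies the action potential.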
Formal Assurance for Cognitive Architecture Based Autonomous Agent
NASA Technical Reports Server (NTRS)
Bhattacharyya, Siddhartha; Eskridge, Thomas; Neogi, Natasha; Carvalho, Marco
2017-01-01
Autonomous systems are designed and deployed in different modeling paradigms, each of which focuses on specific concepts in designing the system. We focus our efforts on the use of cognitive architectures to design autonomous agents that collaborate with humans to accomplish tasks in a mission. Our research focuses on introducing formal assurance methods to verify the behavior of agents designed in Soar, by translating the agent to the formal verification environment Uppaal.