NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is to apply the classical work-energy theorem to relativistic momentum. This approach starts with the classical work-energy theorem and applies SR's momentum to the derivation. One outcome of this derivation is relativistic kinetic energy. From this derivation, it is rather straightforward to form a kinetic-energy-based time dilation function. In the derivation of General Relativity, a common approach is to bypass classical laws as a starting point. Instead, a rigorous development of differential geometry and Riemannian space is constructed, from which classically based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequences of a universal speed of light for all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown that this method gives first-order accuracy compared to Schwarzschild's metric. SR's kinetic energy and the newly derived NGPE are combined to form a Riemannian metric based on these two energies. A geodesic is derived, and calculations are compared to Schwarzschild's geodesic for a test mass orbiting a central, non-rotating, non-charged massive body. The new metric results in highly accurate calculations when compared to Einstein's General Relativity prediction. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a complement to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straightforward path from classical laws to General Relativity will be derived. This derivation provides a minimum first-order accuracy with respect to Schwarzschild's solution to Einstein's field equations.
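A minimal sketch of the SR route the abstract describes, in standard textbook form (the NGPE analogue and the new metric are the paper's contribution and are not reproduced here):

```latex
% Work-energy theorem applied to relativistic momentum p = \gamma m v:
E_k = \int F\,\mathrm{d}x
    = \int \frac{\mathrm{d}(\gamma m v)}{\mathrm{d}t}\, v\,\mathrm{d}t
    = \int_0^{v} v'\,\mathrm{d}(\gamma m v') = (\gamma - 1)\, m c^2 .
% Solving for \gamma gives a kinetic-energy-based time dilation function:
\gamma = 1 + \frac{E_k}{m c^2}, \qquad
\mathrm{d}\tau = \frac{\mathrm{d}t}{\gamma} = \frac{\mathrm{d}t}{1 + E_k/(m c^2)} .
```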
Raykov, Tenko; Marcoulides, George A
2016-04-01
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be obtained directly from classical test theory-based models by accounting for the discrete nature of the observed items. Two distinct observational-equivalence approaches are outlined, each of which can be used to obtain the item response models from corresponding classical test theory-based models. Similarly, classical test theory models can be furnished from corresponding item response models by the reverse application of either approach.
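One way this equivalence is commonly presented in the psychometric literature (a standard threshold argument, not necessarily the authors' exact derivation): a latent classical-test-theory response observed through a threshold yields a normal-ogive item response model.

```latex
X^{*} = \lambda\theta + \varepsilon, \quad \varepsilon \sim N(0,\sigma^{2}), \qquad
X = \begin{cases} 1, & X^{*} > \tau \\ 0, & \text{otherwise,} \end{cases}
\qquad\Longrightarrow\qquad
P(X = 1 \mid \theta) = \Phi\!\left(\frac{\lambda\theta - \tau}{\sigma}\right),
```

a two-parameter normal-ogive model with discrimination λ/σ and difficulty τ/λ; dichotomizing the continuous classical model is precisely the "accounting for the discrete nature of the observed items" step.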
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. An important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximation by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
ERIC Educational Resources Information Center
García, Nuria Alonso; Caplan, Alison
2014-01-01
While there are a number of important critical pedagogies being proposed in the field of foreign language study, more attention should be given to providing concrete examples of how to apply these ideas in the classroom. This article offers a new approach to the textual analysis of literary classics through the keyword-based methodology originally…
Quantum-classical interface based on single flux quantum digital logic
NASA Astrophysics Data System (ADS)
McDermott, R.; Vavilov, M. G.; Plourde, B. L. T.; Wilhelm, F. K.; Liebermann, P. J.; Mukhanov, O. A.; Ohki, T. A.
2018-04-01
We describe an approach to the integrated control and measurement of a large-scale superconducting multiqubit array comprising up to 10^8 physical qubits using a proximal coprocessor based on the Single Flux Quantum (SFQ) digital logic family. Coherent control is realized by irradiating the qubits directly with classical bitstreams derived from optimal control theory. Qubit measurement is performed by a Josephson photon counter, which provides access to the classical result of projective quantum measurement at the millikelvin stage. We analyze the power budget and physical footprint of the SFQ coprocessor and discuss challenges and opportunities associated with this approach.
Relational similarity-based model of data part 1: foundations and query systems
NASA Astrophysics Data System (ADS)
Belohlavek, Radim; Vychodil, Vilem
2017-10-01
We present a general rank-aware model of data which supports handling of similarity in relational databases. The model is based on the assumption that in many cases it is desirable to replace equalities on values in data tables by similarity relations expressing degrees to which the values are similar. In this context, we study various phenomena which emerge in the model, including similarity-based queries and similarity-based data dependencies. The central notion in our model is that of a ranked data table over domains with similarities, which is our counterpart to the notion of a relation on a relation scheme from the classical relational model. Compared to other approaches which cover related problems, we do not propose a similarity-based or ranking module on top of the classical relational model. Instead, we generalize the very core of the model by replacing the classical, two-valued logic upon which the classical model is built by a more general logic involving a scale of truth degrees that, in addition to the classical truth degrees 0 and 1, contains intermediate truth degrees. While the classical truth degrees 0 and 1 represent nonequality and equality of values, and subsequently mismatch and match of queries, the intermediate truth degrees in the new model represent similarity of values and partial match of queries. Moreover, the truth functions of many-valued logical connectives in the new model serve to aggregate degrees of similarity. The presented approach is conceptually clean, logically sound, and retains most properties of the classical model while enabling new types of queries and data dependencies. Most importantly, similarity is not handled in an ad hoc way or by putting a "similarity module" atop the classical model; rather, it is consistently viewed as a notion that generalizes and replaces equality in the very core of the relational model. We present the fundamentals of the formal model and two equivalent query systems which are analogues of the classical relational algebra and domain relational calculus with range declarations. In the sequel to this paper, we deal with similarity-based dependencies.
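A toy sketch of the core idea (my own illustration, not the authors' formal system): tuples in a "ranked data table" carry truth degrees from [0,1]; equality on a domain is replaced by a similarity function, and conjunction by a t-norm.

```python
def sim(x, y, scale):
    """Similarity degree in [0,1]: 1 at equality, linear decay to 0."""
    return max(0.0, 1.0 - abs(x - y) / scale)

def t_norm(a, b):
    """Goedel t-norm (minimum) -- one common choice of many-valued conjunction."""
    return min(a, b)

cars = [  # hypothetical data table
    {"model": "A", "price": 900, "hp": 95},
    {"model": "B", "price": 1020, "hp": 90},
    {"model": "C", "price": 1400, "hp": 140},
]

def query(table, price, hp):
    """Rank tuples by the degree to which they match 'price ~ p AND hp ~ h'."""
    ranked = ((t_norm(sim(t["price"], price, 200), sim(t["hp"], hp, 50)), t)
              for t in table)
    return sorted((r for r in ranked if r[0] > 0), key=lambda r: -r[0])

for degree, row in query(cars, price=1000, hp=100):
    print(f"{degree:.2f}  {row['model']}")   # B (0.80), then A (0.50); C drops out
```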
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center, over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
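For reference, the classical side of the comparison is simple to compute; a minimal Kaplan-Meier estimator on hypothetical right-censored follow-up data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate S(t) = prod_{t_i <= t} (1 - d_i / n_i).

    times:  follow-up times
    events: 1 if the event occurred, 0 if censored
    """
    times, events = np.asarray(times), np.asarray(events)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        n_at_risk = np.sum(times >= t)               # still under observation
        d = np.sum((times == t) & (events == 1))     # events at time t
        s *= 1.0 - d / n_at_risk
        surv.append((t, s))
    return surv

# hypothetical screening follow-up times (months) and event indicators
print(kaplan_meier([6, 7, 10, 15, 19, 25], [1, 0, 1, 1, 0, 1]))
```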
Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach
ERIC Educational Resources Information Center
Holmes, Karen Y.; Dodd, Brett A.
2012-01-01
In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)
Supervised Learning Based Hypothesis Generation from Biomedical Literature.
Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei
2015-01-01
Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge in it remains undiscovered. Researchers can formulate biomedical hypotheses by mining this literature. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with supervised learning methods. Compared with concept co-occurrence and grammar engineering-based approaches such as SemRep, machine learning based models usually achieve better performance in information extraction (IE) from texts. By combining the two models, the approach then reconstructs the ABC model and generates biomedical hypotheses from the literature. The experimental results on three classic Swanson hypotheses show that our approach outperforms the SemRep system.
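A minimal illustration of the ABC pattern itself (Swanson's closed discovery), using plain co-occurrence scores in place of the paper's supervised AB and BC models; all terms and scores below are hypothetical:

```python
# Each dict holds (term_1, term_2) -> link score mined from literature.
# In the paper's approach the scores would come from trained AB and BC
# classifiers; here they are made-up co-occurrence strengths.
ab_links = {("fish_oil", "blood_viscosity"): 0.9,
            ("fish_oil", "platelet_aggregation"): 0.8}
bc_links = {("blood_viscosity", "raynaud_syndrome"): 0.7,
            ("vascular_reactivity", "raynaud_syndrome"): 0.6}

def abc_hypotheses(a_term, c_term):
    """Score A->C hypotheses through shared intermediate B terms."""
    hypotheses = []
    for (a, b1), s_ab in ab_links.items():
        for (b2, c), s_bc in bc_links.items():
            if a == a_term and c == c_term and b1 == b2:
                hypotheses.append((b1, s_ab * s_bc))   # combine link scores
    return sorted(hypotheses, key=lambda h: -h[1])

print(abc_hypotheses("fish_oil", "raynaud_syndrome"))
# -> [('blood_viscosity', 0.63)]   (the classic Swanson example)
```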
Optimized swimmer tracking system based on a novel multi-related-targets approach
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2017-02-01
Robust tracking is a crucial step in automatic swimmer evaluation from video sequences. We designed a robust swimmer tracking system using a new multi-related-targets approach. The main idea is to consider the swimmer as a block of connected subtargets that advance at the same speed. If one of the subtargets is partially or totally occluded, it can be localized by knowing the positions of the others. In this paper, we first introduce the two-dimensional direct linear transformation technique that we used to calibrate the videos. Then, we present the classical tracking approach based on dynamic fusion. Next, we highlight the main contribution of our work, the multi-related-targets tracking approach. This approach, the classical head-only approach, and the ground truth are then compared through testing on a database of high-level swimmers in training, national and international competitions (French National Championships, Limoges 2015, and World Championships, Kazan 2015). Tracking percentage and the accuracy of the instantaneous speed are evaluated, and the findings show that our new approach is significantly more accurate than the classical approach.
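A toy sketch of the multi-related-targets idea (my own illustration, with hypothetical offsets): subtargets share a common motion, so an occluded one can be re-localized from the visible ones plus fixed geometric offsets.

```python
import numpy as np

# Fixed offsets of each subtarget (head, hips, feet) from the swimmer
# centre, in pixels; hypothetical values.
OFFSETS = {"head": np.array([40.0, 0.0]),
           "hips": np.array([0.0, 0.0]),
           "feet": np.array([-45.0, 0.0])}

def locate_occluded(visible, name):
    """Estimate an occluded subtarget from the visible ones.

    visible: dict of subtarget name -> measured position
    name:    the occluded subtarget to recover
    """
    # Each visible subtarget votes for the common centre position.
    centres = [pos - OFFSETS[k] for k, pos in visible.items()]
    centre = np.mean(centres, axis=0)
    return centre + OFFSETS[name]

visible = {"head": np.array([540.0, 202.0]), "hips": np.array([498.0, 201.0])}
print(locate_occluded(visible, "feet"))   # feet hidden by splash
```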
Classical versus Computer Algebra Methods in Elementary Geometry
ERIC Educational Resources Information Center
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
The Classics Major and Liberal Education
ERIC Educational Resources Information Center
Liberal Education, 2009
2009-01-01
Over the course of eighteen months, a project based at the Center for Hellenic Studies in Washington, DC, studied undergraduate programs in classics with the goal of developing a better sense of how a major in classics fit within the broader agenda of liberal education. The study adopted a student-centered approach, employing a team of six…
Mioni, Roberto; Marega, Alessandra; Lo Cicero, Marco; Montanaro, Domenico
2016-11-01
The approach to acid-base chemistry in medicine includes several methods. Currently, the two most popular procedures are derived from Stewart's studies and from the bicarbonate/BE-based classical formulation. Another method, unfortunately little known, follows the Kildeberg theory applied to acid-base titration. Using the data produced by Dana Atchley in 1933, regarding electrolytes and blood gas analysis applied to diabetes, we compared the three aforementioned methods in order to highlight their strengths and weaknesses. The results obtained by reprocessing Atchley's data have shown that Kildeberg's approach, unlike the other two methods, is consistent, rational and complete for describing the organ-physiological behavior of hydrogen-ion turnover in the human organism. In contrast, the data obtained using the Stewart approach and the bicarbonate-based classical formulation are misleading and fail to specify which organs or systems are involved in causing or maintaining the diabetic acidosis. Stewart's approach, despite being considered 'quantitative', does not in any way propose the concept of 'an amount of acid' and becomes even more confusing because it is not clear how to distinguish between 'strong' and 'weak' ions. Like Stewart's approach, the classical method makes no distinction between hydrogen ions managed by the intermediate metabolism and hydroxyl ions handled by the kidney, but it is at least based on the concept of titration (base excess) and indirectly defines the concept of 'an amount of acid'. In conclusion, only Kildeberg's approach offers a complete understanding of the causes of, and remedies against, any type of acid-base disturbance.
Introducing Hurst exponent in pair trading
NASA Astrophysics Data System (ADS)
Ramos-Requena, J. P.; Trinidad-Segovia, J. E.; Sánchez-Granero, M. A.
2017-12-01
In this paper we introduce a new methodology for pair trading, based on the calculation of the Hurst exponent of a pair. Our approach is inspired by the classical concepts of co-integration and mean reversion, joined under a single strategy. We show that the Hurst approach gives better results than the classical Distance Method and Correlation strategies in different scenarios. The results obtained show that this new methodology is consistent and suitable, reducing the drawdown of trading relative to the classical strategies and thereby achieving better performance.
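A common way to estimate the Hurst exponent of a pair spread is rescaled-range (R/S) analysis; a minimal sketch (the paper's own estimation methodology may differ):

```python
import numpy as np

def hurst_rs(series, min_win=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

    H < 0.5 suggests mean reversion -- the property a pair-trading
    spread should have; H ~ 0.5 is a random walk.
    """
    x = np.asarray(series, dtype=float)
    wins = [w for w in 2 ** np.arange(3, int(np.log2(len(x)))) if w >= min_win]
    log_w, log_rs = [], []
    for w in wins:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            chunk = x[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()        # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        log_w.append(np.log(w))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_w, log_rs, 1)[0]   # log-log slope = Hurst exponent

# Hypothetical pair spread, e.g. a difference of two log-price series.
rng = np.random.default_rng(0)
spread = np.cumsum(rng.normal(size=2048)) * 0.02   # random walk -> H ~ 0.5
print(round(hurst_rs(spread), 2))
```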
Oligo/Polynucleotide-Based Gene Modification: Strategies and Therapeutic Potential
Sargent, R. Geoffrey; Kim, Soya
2011-01-01
Oligonucleotide- and polynucleotide-based gene modification strategies were developed as an alternative to transgene-based and classical gene targeting-based gene therapy approaches for treatment of genetic disorders. Unlike the transgene-based strategies, oligo/polynucleotide gene targeting approaches maintain gene integrity and the relationship between the protein coding and gene-specific regulatory sequences. Oligo/polynucleotide-based gene modification also has several advantages over classical vector-based homologous recombination approaches. These include essentially complete homology to the target sequence and the potential to rapidly engineer patient-specific oligo/polynucleotide gene modification reagents. Several oligo/polynucleotide-based approaches have been shown to successfully mediate sequence-specific modification of genomic DNA in mammalian cells. The strategies involve the use of polynucleotide small DNA fragments, triplex-forming oligonucleotides, and single-stranded oligodeoxynucleotides to mediate homologous exchange. The primary focus of this review will be on the mechanistic aspects of the small fragment homologous replacement, triplex-forming oligonucleotide-mediated, and single-stranded oligodeoxynucleotide-mediated gene modification strategies as they relate to their therapeutic potential. PMID:21417933
ERIC Educational Resources Information Center
Segev, Arik
2017-01-01
Phillip Cam recently published a study on the separation between the teaching and learning of the classic school curriculum (CSC) on the one hand and morality on the other. He suggests an approach to integrating them. The goal of this article is to suggest a complementary alternative to Cam's approach. Based on a MacIntyrean paradigm, I argue that…
ERIC Educational Resources Information Center
Wilson, Mark; Allen, Diane D.; Li, Jun Corser
2006-01-01
This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME "Standards for Educational and…
González-García, I; García-Arieta, A; Merino-Sanjuan, M; Mangas-Sanjuan, V; Bermejo, M
2018-07-01
Regulatory guidelines recommend that, when a level A IVIVC is established, the dissolution specification should be established using averaged data, and the maximum difference in AUC and Cmax between the reference and test formulations cannot be greater than 20%. However, averaging data implies a loss of information and may bias the results. The objective of the current work is to present a new approach to establishing dissolution specifications using a new methodology (individual approach) instead of average data (classical approach). Different scenarios were established based on the relationship between the in vitro and in vivo dissolution rate coefficients, using a level A IVIVC of a controlled release formulation. Then, in order to compare this new approach with the classical one, six additional batches were simulated. For each batch, 1000 simulations of a dissolution assay were run. Cmax ratios between the reference formulation and each batch were calculated, showing that the individual approach was more sensitive and able to detect differences between the reference and the batch formulation compared to the classical approach. Additionally, the new methodology displays wider dissolution specification limits than the classical approach, while ensuring that no tablet from the new batch would generate in vivo profiles whose AUC or Cmax ratio falls outside the 0.8-1.25 range, taking into account the in vitro and in vivo variability of the new batches developed.
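A rough sketch of this kind of tablet-level simulation (the one-compartment model, the linear level-A mapping, and all parameter values below are hypothetical):

```python
import numpy as np

def cmax_one_compartment(ka, ke=0.1, dose_over_v=1.0):
    """Cmax of a one-compartment model with first-order absorption."""
    tmax = np.log(ka / ke) / (ka - ke)
    return dose_over_v * ka / (ka - ke) * (np.exp(-ke * tmax) - np.exp(-ka * tmax))

rng = np.random.default_rng(1)
ka_ref = 0.5                                  # reference in vivo rate (1/h)

# Level A IVIVC assumed linear: k_vivo = a + b * k_vitro (hypothetical a, b).
a, b = 0.05, 0.9
k_vitro_batch = rng.normal(0.50, 0.05, size=1000)   # batch in vitro variability
ka_batch = a + b * k_vitro_batch

ratios = cmax_one_compartment(ka_batch) / cmax_one_compartment(ka_ref)
print(f"fraction of tablets inside 0.80-1.25: "
      f"{np.mean((ratios > 0.8) & (ratios < 1.25)):.3f}")
```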
An Introduction to Differentials Based on Hyperreal Numbers and Infinite Microscopes
ERIC Educational Resources Information Center
Henry, Valerie
2010-01-01
In this article, we propose to introduce the differential of a function through a non-classical way, lying on hyperreals and infinite microscopes. This approach is based on the developments of nonstandard analysis, wants to be more intuitive than the classical one and tries to emphasize the functional and geometric aspects of the differential. In…
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcomes data with possibly informative intermittent missing items. A simulation study was performed in order to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error, and power of the test of time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcomes data.
Developing Students' Ideas about Lens Imaging: Teaching Experiments with an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists' analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students' ideas, teaching experiments are performed and evaluated using…
Quantum vertex model for reversible classical computing.
Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C
2017-05-12
Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
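The relaxation-to-ground-state idea can be illustrated with a generic simulated-annealing toy on a small spin system (a schematic stand-in; the paper's planar vertex model and its Chimera embedding are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)

def energy(s):
    """Ising-style energy; in the paper's setting, the ground state of a
    planar vertex model would encode the result of the computation."""
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
for beta in np.linspace(0.1, 5.0, 4000):      # slowly lower the temperature
    i = rng.integers(n)
    flipped = np.where(np.arange(n) == i, -s, s)
    dE = energy(flipped) - energy(s)
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]                          # Metropolis acceptance
print("final energy:", energy(s))
```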
Quasi-classical approaches to vibronic spectra revisited
NASA Astrophysics Data System (ADS)
Karsten, Sven; Ivanov, Sergei D.; Bokarev, Sergey I.; Kühn, Oliver
2018-03-01
The framework to approach quasi-classical dynamics in the electronic ground state is well established and is based on the Kubo-transformed time correlation function (TCF), being the most classical-like quantum TCF. Here we discuss whether the choice of the Kubo-transformed TCF as a starting point for simulating vibronic spectra is as unambiguous as it is for vibrational ones. Employing imaginary-time path integral techniques in combination with the interaction representation allowed us to formulate a method for simulating vibronic spectra in the adiabatic regime that takes nuclear quantum effects and dynamics on multiple potential energy surfaces into account. Further, a generalized quantum TCF is proposed that contains many well-established TCFs, including the Kubo one, as particular cases. Importantly, it also provides a framework to construct new quantum TCFs. Applying the developed methodology to the generalized TCF leads to a plethora of simulation protocols, which are based on the well-known TCFs as well as on new ones. Their performance is investigated on 1D anharmonic model systems at finite temperatures. It is shown that the protocols based on the new TCFs may lead to superior results with respect to those based on the common ones. The strategies to find the optimal approach are discussed.
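For reference, the Kubo-transformed TCF mentioned here has the standard textbook form:

```latex
C^{\mathrm{Kubo}}_{AB}(t) = \frac{1}{\beta Z} \int_0^{\beta} \mathrm{d}\lambda\,
\mathrm{Tr}\!\left[ e^{-(\beta-\lambda)\hat H}\, \hat A\, e^{-\lambda\hat H}\,
e^{\mathrm{i}\hat H t/\hbar}\, \hat B\, e^{-\mathrm{i}\hat H t/\hbar} \right],
\qquad Z = \mathrm{Tr}\, e^{-\beta\hat H}.
```

For A = B this function is real and even in time, mirroring a classical autocorrelation function, which is why it is the natural quasi-classical starting point for vibrational spectra; the abstract's point is that this choice is less clear-cut in the vibronic case.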
An extension of the directed search domain algorithm to bilevel optimization
NASA Astrophysics Data System (ADS)
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
Ionospheric very low frequency transmitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, Spencer P.
2015-02-15
The theme of this paper is to establish a reliable ionospheric very low frequency (VLF) transmitter, which is also broadband. Two approaches are studied that generate VLF waves in the ionosphere. The first, classic approach employs a ground-based HF heater to directly modulate the high-latitude ionospheric (auroral) electrojet. In the classic approach, the intensity-modulated HF heater induces an alternating current in the electrojet, which serves as a virtual antenna to transmit VLF waves. The spatial and temporal variations of the electrojet impact the reliability of the classic approach. The second, beat-wave approach also employs a ground-based HF heater; however, in this approach, the heater operates in a continuous wave mode at two HF frequencies separated by the desired VLF frequency. Theories for both approaches are formulated, calculations performed with numerical model simulations, and the calculations are compared to experimental results. Theory for the classic approach shows that an HF heater wave, intensity-modulated at VLF, modulates the electron-temperature-dependent electrical conductivity of the ionospheric electrojet, which, in turn, induces an ac electrojet current. Thus, the electrojet becomes a virtual VLF antenna. The numerical results show that the radiation intensity of the modulated electrojet decreases with an increase in VLF radiation frequency. Theory for the beat-wave approach shows that the VLF radiation intensity depends upon the HF heater intensity rather than the electrojet strength, and yet this approach can also modulate the electrojet when present. HF heater experiments were conducted for both the intensity-modulated and beat-wave approaches. VLF radiations were generated, and the experimental results confirm the numerical simulations. Theory and experimental results both show that in the absence of the electrojet, VLF radiation from the F-region is generated via the beat-wave approach. Additionally, the beat-wave approach generates VLF radiations over a larger frequency band than the modulated electrojet.
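A minimal numerical illustration of the beat-wave idea (generic signal arithmetic, not the paper's ionospheric model; the frequencies are hypothetical): two HF carriers separated by the desired VLF frequency produce an intensity envelope at the difference frequency.

```python
import numpy as np

f1, f2 = 3.200e6, 3.210e6          # two HF heater frequencies (Hz), hypothetical
f_vlf = f2 - f1                    # desired VLF frequency: 10 kHz
t = np.linspace(0, 5 / f_vlf, 100_000)

e = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
intensity = e ** 2                 # heating follows the wave intensity

# The spectrum of the intensity contains a line at f2 - f1 = f_vlf.
spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
low = freqs < 1e5                  # look only below 100 kHz
print(f"strongest low-frequency component: {freqs[np.argmax(spec[low])]:.0f} Hz")
```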
Petruševska, Marija; Urleb, Uroš; Peternel, Luka
2013-11-01
Excipient-mediated precipitation inhibition is classically determined by quantifying the dissolved compound in solution. In this study, two alternative approaches were evaluated: a light scattering (nephelometer) method and a turbidity (plate reader) method, both microtiter plate-based and based on quantification of the compound precipitate. Following optimization of the nephelometer settings (beam focus, laser gain) and the experimental conditions, 23 excipients were screened for inhibition of the precipitation of the poorly soluble fenofibrate and dipyridamole. The light scattering method resulted in excellent correlation (r>0.91) between the calculated precipitation inhibitor parameters (PIPs) and the precipitation inhibition index (PI(classical)) obtained by the classical approach for fenofibrate and dipyridamole. Among the evaluated PIPs, AUC100 (nephelometer) resulted in only four false positives and no false negatives. For the turbidity-based method, a good correlation with the PI(classical) was obtained for the PIP maximal optical density (OD(max), r=0.91), although only for fenofibrate; OD(max) (plate reader) gave five false positives and two false negatives. In conclusion, the light scattering-based method outperformed the turbidity-based one and can be reliably used for the identification of novel precipitation inhibitors.
ERIC Educational Resources Information Center
Kazakeviciute, Agne; Urbone, Renata; Petraite, Monika
2016-01-01
University-based entrepreneurship education is facing a paradigm shift between the classical "business school" and the contemporary cross-disciplinary "technology venturing" approach, mainly advocated by engineering schools and other communities outside business schools. The conflict is between structured "business…
Capomaccio, Stefano; Milanesi, Marco; Bomba, Lorenzo; Cappelli, Katia; Nicolazzi, Ezequiel L; Williams, John L; Ajmone-Marsan, Paolo; Stefanon, Bruno
2015-08-01
Genome-wide association studies (GWAS) have been widely applied to disentangle the genetic basis of complex traits. In cattle breeds, classical GWAS approaches with medium-density marker panels are far from conclusive, especially for complex traits. This is due to the intrinsic limitations of GWAS and the assumptions that are made in stepping from association signals to functional variations. Here, we applied a gene-based strategy to prioritize genotype-phenotype associations found for milk production and quality traits with classical approaches in three Italian dairy cattle breeds with different sample sizes (Italian Brown n = 745; Italian Holstein n = 2058; Italian Simmental n = 477). Although classical regression on single markers revealed only a single genome-wide significant genotype-phenotype association (in Italian Holstein), the gene-based approach identified specific genes in each breed that are associated with milk physiology and mammary gland development. As no standard method has yet been established to step from variation to functional units (i.e., genes), the strategy proposed here may contribute to revealing new genes that play significant roles in complex traits, such as those investigated here, amplifying low association signals using a gene-centric approach.
Modeling of delays in PKPD: classical approaches and a tutorial for delay differential equations.
Koch, Gilbert; Krzyzanski, Wojciech; Pérez-Ruixo, Juan Jose; Schropp, Johannes
2014-08-01
In pharmacokinetics/pharmacodynamics (PKPD), the measured response is often delayed relative to drug administration, individuals in a population have a certain lifespan until they maturate, or the change of a biomarker does not immediately affect the primary endpoint. The classical approach in PKPD is to apply transit compartment models (TCM) based on ordinary differential equations to handle such delays. However, an alternative approach to deal with delays is delay differential equations (DDE). DDEs feature additional flexibility and properties, realize more complex dynamics, and can be used complementarily with TCMs. We introduce several delay-based PKPD models and investigate mathematical properties of general DDE-based models, which serve as subunits for building larger PKPD models. Finally, we review current PKPD software with respect to the implementation of DDEs for PKPD analysis.
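A minimal sketch of the classical transit-compartment approach that the tutorial contrasts with DDEs (generic TCM equations; the rate constants and driving function are hypothetical): n compartments in series delay the signal by a mean transit time n/ktr.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tcm_rhs(t, a, ktr, n, drug):
    """Transit chain: da_i/dt = ktr * (a_{i-1} - a_i), driven by drug(t)."""
    da = np.empty(n)
    da[0] = ktr * (drug(t) - a[0])
    for i in range(1, n):
        da[i] = ktr * (a[i - 1] - a[i])
    return da

n, ktr = 5, 0.5                       # mean transit time = n/ktr = 10 h
drug = lambda t: np.exp(-0.2 * t)     # hypothetical plasma concentration
sol = solve_ivp(tcm_rhs, (0, 48), np.zeros(n), args=(ktr, n, drug),
                dense_output=True)
print("delayed response at t = 12 h:", sol.sol(12.0)[-1])

# A DDE alternative models the delay explicitly with a discrete lag tau, e.g.
#   dR/dt = k_in * f(C(t - tau)) - k_out * R(t)
```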
A Theoretical and Empirical Comparison of Three Approaches to Achievement Testing.
ERIC Educational Resources Information Center
Haladyna, Tom; Roid, Gale
Three approaches to the construction of achievement tests are compared: construct, operational, and empirical. The construct approach is based upon classical test theory and measures an abstract representation of the instructional objectives. The operational approach specifies instructional intent through instructional objectives, facet design,…
Human monoclonal antibodies: the residual challenge of antibody immunogenicity.
Waldmann, Herman
2014-01-01
One of the major reasons for seeking human monoclonal antibodies has been to eliminate the immunogenicity seen with rodent antibodies. Thus far, no approach has absolutely abolished that risk for cell-binding antibodies. In this short article, I draw attention to classical work showing that monomeric immunoglobulins are intrinsically tolerogenic if they can be prevented from creating aggregates or immune complexes. Based on these classical studies, two approaches for active tolerization to therapeutic antibodies are described.
Modelling Systems of Classical/Quantum Identical Particles by Focusing on Algorithms
ERIC Educational Resources Information Center
Guastella, Ivan; Fazio, Claudio; Sperandeo-Mineo, Rosa Maria
2012-01-01
A procedure modelling ideal classical and quantum gases is discussed. The proposed approach is mainly based on the idea that modelling and algorithm analysis can provide a deeper understanding of particularly complex physical systems. Appropriate representations and physical models able to mimic possible pseudo-mechanisms of functioning and having…
Bridging Quantum, Classical and Stochastic Shortcuts to Adiabaticity
NASA Astrophysics Data System (ADS)
Patra, Ayoti
Adiabatic invariants - quantities that are preserved under the slow driving of a system's external parameters - are important in classical mechanics, quantum mechanics and thermodynamics. Adiabatic processes allow a system to be guided to evolve to a desired final state. However, the slow driving of a quantum system makes it vulnerable to environmental decoherence, and for both quantum and classical systems, it is often desirable and time-efficient to speed up a process. Shortcuts to adiabaticity are strategies for preserving adiabatic invariants under rapid driving, typically by means of an auxiliary field that suppresses excitations otherwise generated during rapid driving. Several theoretical approaches have been developed to construct such shortcuts. In this dissertation we focus on two different approaches, namely counterdiabatic driving and fast-forward driving, which were originally developed for quantum systems. The counterdiabatic approach, introduced independently by Demirplak and Rice [J. Phys. Chem. A, 107:9937, 2003] and Berry [J. Phys. A: Math. Theor., 42:365303, 2009], formally provides an exact expression for the auxiliary Hamiltonian, which however is abstract and difficult to translate into an experimentally implementable form. By contrast, the fast-forward approach developed by Masuda and Nakamura [Proc. R. Soc. A, 466(2116):1135, 2010] provides an auxiliary potential that may be experimentally implementable but generally applies only to ground states. The central theme of this dissertation is that classical shortcuts to adiabaticity can provide useful physical insights and lead to experimentally implementable shortcuts for analogous quantum systems. We start by studying a model system of a tilted piston to provide a proof of principle that quantum shortcuts can successfully be constructed from their classical counterparts. In the remainder of the dissertation, we develop a general approach based on flow fields which produces simple expressions for the auxiliary terms required for both counterdiabatic and fast-forward driving. We demonstrate the applicability of this approach for classical, quantum as well as stochastic systems. We establish strong connections between the counterdiabatic and fast-forward approaches, and also between shortcut protocols required for classical, quantum and stochastic systems. In particular, we show how the fast-forward approach can be extended to highly excited states of quantum systems.
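For context, the exact counterdiabatic term of the Demirplak-Rice/Berry construction cited above has the standard form:

```latex
\hat H_{\mathrm{CD}}(t) = \mathrm{i}\hbar \sum_{n}
\Bigl( \lvert \partial_t n \rangle \langle n \rvert
- \langle n \vert \partial_t n \rangle\, \lvert n \rangle \langle n \rvert \Bigr),
```

where |n(t)⟩ are the instantaneous eigenstates of the bare Hamiltonian; driving with H(t) + H_CD(t) keeps the system in |n(t)⟩ exactly, at arbitrary speed. The abstractness referred to in the text is visible here: evaluating H_CD requires the full instantaneous spectrum.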
Seleson, Pablo; Du, Qiang; Parks, Michael L.
2016-08-16
The peridynamic theory of solid mechanics is a nonlocal reformulation of the classical continuum mechanics theory. At the continuum level, it has been demonstrated that classical (local) elasticity is a special case of peridynamics. Such a connection between these theories has not been extensively explored at the discrete level. This paper investigates the consistency between nearest-neighbor discretizations of linear elastic peridynamic models and finite difference discretizations of the Navier–Cauchy equation of classical elasticity. While nearest-neighbor discretizations in peridynamics have been numerically observed to present grid-dependent crack paths or spurious microcracks, this paper focuses on a different, analytical aspect of such discretizations. We demonstrate that, even in the absence of cracks, such discretizations may be problematic unless a proper selection of weights is used. Specifically, we demonstrate that using the standard meshfree approach in peridynamics, nearest-neighbor discretizations do not reduce, in general, to discretizations of corresponding classical models. We study nodal-based quadratures for the discretization of peridynamic models, and we derive quadrature weights that result in consistency between nearest-neighbor discretizations of peridynamic models and discretized classical models. The quadrature weights that lead to such consistency are, however, model-/discretization-dependent. We motivate the choice of those quadrature weights through a quadratic approximation of displacement fields. The stability of nearest-neighbor peridynamic schemes is demonstrated through a Fourier mode analysis. Finally, an approach based on a normalization of peridynamic constitutive constants at the discrete level is explored. This approach results in the desired consistency for one-dimensional models, but does not work in higher dimensions. The results of the work presented in this paper suggest that even though nearest-neighbor discretizations should be avoided in peridynamic simulations involving cracks, such discretizations are viable, for example for verification or validation purposes, in problems characterized by smooth deformations. Furthermore, we demonstrate that better quadrature rules in peridynamics can be obtained based on the functional form of solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shalashilin, Dmitrii V.; Burghardt, Irene
2008-08-28
In this article, two coherent-state based methods of quantum propagation, namely, coupled coherent states (CCS) and Gaussian-based multiconfiguration time-dependent Hartree (G-MCTDH), are put on the same formal footing, using a derivation from a variational principle in Lagrangian form. By this approach, oscillations of the classical-like Gaussian parameters and oscillations of the quantum amplitudes are formally treated in an identical fashion. We also suggest a new approach denoted here as coupled coherent states trajectories (CCST), which completes the family of Gaussian-based methods. Using the same formalism for all related techniques allows their systematization and a straightforward comparison of their mathematical structure and cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mastromatteo, Michael; Jackson, Bret, E-mail: jackson@chem.umass.edu
Electronic structure methods based on density functional theory are used to construct a reaction path Hamiltonian for CH4 dissociation on the Ni(100) and Ni(111) surfaces. Both quantum and quasi-classical trajectory approaches are used to compute dissociative sticking probabilities, including all molecular degrees of freedom and the effects of lattice motion. Both approaches show a large enhancement in sticking when the incident molecule is vibrationally excited, and both can reproduce the mode specificity observed in experiments. However, the quasi-classical calculations significantly overestimate the ground state dissociative sticking at all energies, and the magnitude of the enhancement in sticking with vibrational excitation is much smaller than that computed using the quantum approach or observed in the experiments. The origin of this behavior is an unphysical flow of zero point energy from the nine normal vibrational modes into the reaction coordinate, giving large values for reaction at energies below the activation energy. Perturbative assumptions made in the quantum studies are shown to be accurate at all energies studied.
SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Naqa, I; Ten, R
Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods lack efficiency and the ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches by utilizing quantum principles to develop a robust framework for dynamically adapting treatments to individual patients' characteristics and optimizing outcomes. Methods: We studied 88 liver SBRT patients, 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function, using a split course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) with states at baseline and one month into treatment. The patient environment was modeled by a 5-variable state comprising the patient's clinical and dosimetric covariates. For the comparison of classical and quantum learning methods, the decision whether to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control, P+ = TCP x (1-NTCP). A simple regression model represented the state-action mapping. A single bit in the classical MDP and a qubit of two superimposed states in the quantum MDP represented the decision actions. Classical decision selection was done using reinforcement Q-learning, and quantum searching was performed using Grover's algorithm, which applies a uniform superposition over possible states and yields a quadratic speed-up. Results: Classical/quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split courses and 100% for continuous courses. However, the classical MDP had an average adaptation probability of 0.5±0.22, while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, the classical MDP yielded a 0.31±0.26 average amplitude, while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.
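A toy tabular Q-learning sketch of the classical adapt/continue decision (all states, rewards, and probabilities are hypothetical; the quantum variant replaces the action register with a qubit searched via Grover's algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
ACTIONS = ("continue", "adapt")
N_STATES = 4                          # coarse liver-function states, hypothetical

def simulate_outcome(state, action):
    """Made-up P+ = TCP*(1-NTCP) reward, for illustration only."""
    base = 0.4 + 0.1 * state
    return base + (0.15 if action == "adapt" and state < 2 else 0.0)

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, eps = 0.1, 0.2
for episode in range(5000):
    s = rng.integers(N_STATES)                        # one-month state
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    r = simulate_outcome(s, ACTIONS[a]) + rng.normal(0, 0.05)
    Q[s, a] += alpha * (r - Q[s, a])   # single-step (bandit-style) update

for s in range(N_STATES):
    print(s, ACTIONS[int(np.argmax(Q[s]))])           # learned policy per state
```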
New VLBI2010 scheduling strategies and implications on the terrestrial reference frames
NASA Astrophysics Data System (ADS)
Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald
2014-05-01
In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.
Dynamic Network-Based Epistasis Analysis: Boolean Examples
Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.
2011-01-01
In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets become available, the analysis of classical epistasis is being complemented with computer science tools and systems biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, the access to relevant information and the correct inference of gene interaction topologies is hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks that, like classical epistasis analysis, rely on logical formalisms, and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be impossible or difficult to infer without eliminating these assumptions from the classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis. Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, specifically introducing the usefulness of Boolean network models and reviewing some key properties of network approaches. PMID:22645556
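A minimal Boolean-network sketch (a made-up three-gene circuit, not one of the article's biological examples): synchronous updates to an attractor, plus an in-silico knockout of the kind used to complement classical epistasis analysis.

```python
# Hypothetical regulatory logic: A activates B, B represses C, A activates C.
RULES = {
    "A": lambda s: s["A"],                 # external input, held constant
    "B": lambda s: s["A"],
    "C": lambda s: s["A"] and not s["B"],
}

def attractor(state, knockout=None, steps=20):
    """Iterate synchronous updates; a fixed-off gene models loss of function."""
    s = dict(state)
    for _ in range(steps):
        s = {g: (False if g == knockout else bool(f(s)))
             for g, f in RULES.items()}
    return s

wild_type = attractor({"A": True, "B": False, "C": False})
b_mutant = attractor({"A": True, "B": False, "C": False}, knockout="B")
print("wild type: ", wild_type)   # B on, C off (B blocks A's effect on C)
print("B knockout:", b_mutant)    # C comes on: the classical epistasis signal
```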
Single-snapshot DOA estimation by using Compressed Sensing
NASA Astrophysics Data System (ADS)
Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin
2014-12-01
This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
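A rough single-snapshot ℓ1 sketch (simplified to a real-valued stacking so scikit-learn's Lasso applies; practical implementations use complex-valued or group-sparse solvers, and the array geometry, grid, and noise level here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Lasso

M, grid = 16, np.deg2rad(np.arange(-90, 90.5, 0.5))   # sensors, angle grid
d = 0.5                                                # spacing in wavelengths
A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))

# Single snapshot: two sources separated by less than the ~7 deg beamwidth.
rng = np.random.default_rng(4)
doas = np.deg2rad([10.0, 14.0])
steer = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(doas)))
y = steer @ np.array([1.0, 0.8]) \
    + 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# Stack real/imaginary parts so a real-valued LASSO solver can be used.
A_r = np.vstack([np.hstack([A.real, -A.imag]), np.hstack([A.imag, A.real])])
y_r = np.concatenate([y.real, y.imag])
x = Lasso(alpha=0.01, max_iter=50_000).fit(A_r, y_r).coef_
power = x[:grid.size] ** 2 + x[grid.size:] ** 2        # per-angle energy

# Crude peak pick: two strongest local maxima of the sparse spectrum.
loc_max = np.where((power[1:-1] > power[:-2]) & (power[1:-1] > power[2:]))[0] + 1
top2 = loc_max[np.argsort(power[loc_max])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))
```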
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is made on block estimates, without directly considering the uncertainty of block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms is used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
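The pit optimization inside both the classical and stochastic approaches is a maximum-closure problem, classically solved via min-cut; a tiny sketch on a toy 2D block model with hypothetical profits (the standard source/sink reduction):

```python
import networkx as nx

# profits[i][j]: expected profit of the block at depth i, column j (toy values).
profits = [[-1, 4, 3, -1],
           [-2, 6, -2, -2]]

G, INF = nx.DiGraph(), float("inf")
for i, row in enumerate(profits):
    for j, p in enumerate(row):
        b = (i, j)
        if p > 0:
            G.add_edge("s", b, capacity=p)        # source -> profitable block
        elif p < 0:
            G.add_edge(b, "t", capacity=-p)       # costly block -> sink
        if i > 0:                                 # slope/precedence: mining a
            for dj in (-1, 0, 1):                 # block requires the 3 above it
                if 0 <= j + dj < len(row):
                    G.add_edge(b, (i - 1, j + dj), capacity=INF)

cut_value, (s_side, _) = nx.minimum_cut(G, "s", "t")
pit = sorted(b for b in s_side if b != "s")
print("optimal pit blocks:", pit)                            # incl. (1, 1)
print("pit value:", sum(profits[i][j] for i, j in pit))      # 12 here
```

In the stochastic variant, the same computation is simply run with expected profits averaged over the conditional simulations.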
A genetic graph-based approach for partitional clustering.
Menéndez, Héctor D; Barrero, David F; Camacho, David
2014-05-01
Clustering is one of the most versatile tools for data analysis. In recent years, clustering that seeks the continuity of data (in opposition to classical centroid-based approaches) has attracted increasing research interest. It is a challenging problem of remarkable practical interest. The most popular continuity clustering method is the spectral clustering (SC) algorithm, which is based on graph cut: it initially generates a similarity graph using a distance measure and then studies its graph spectrum to find the best cut. This approach is sensitive to the parameters of the metric, and a correct parameter choice is critical to the quality of the clusters. This work proposes a new algorithm, inspired by SC, that reduces the parameter dependency while maintaining the quality of the solution. The new algorithm, named genetic graph-based clustering (GGC), takes an evolutionary approach, introducing a genetic algorithm (GA) to cluster the similarity graph. The experimental validation shows that GGC increases the robustness of SC and has competitive performance in comparison with classical clustering methods, at least on the synthetic and real datasets used in the experiments.
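A short sketch of the SC baseline that GGC builds on (scikit-learn's implementation; the GA layer that removes the metric-parameter sensitivity is the paper's contribution and is only indicated in the closing comment):

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

X, y = make_moons(n_samples=400, noise=0.07, random_state=0)

# Classical SC: the result depends strongly on the similarity parameter gamma.
for gamma in (0.5, 5.0, 50.0):
    labels = SpectralClustering(n_clusters=2, affinity="rbf",
                                gamma=gamma, random_state=0).fit_predict(X)
    print(f"gamma={gamma:5.1f}  ARI={adjusted_rand_score(y, labels):.2f}")

# GGC replaces this fixed-gamma graph cut with a genetic algorithm that
# evolves the grouping on the similarity graph, reducing the dependence
# on a single metric-parameter choice.
```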
Positivists, Postmodernists, Aristotelians, and the Challenger Disaster.
ERIC Educational Resources Information Center
Walzer, Arthur E.; Gross, Alan
1994-01-01
Examines the deliberations prior to the Challenger disaster from the perspective of three major approaches in recent scholarship in rhetoric as applied to technical communications: positivism, postmodernistic social constructionism, and classical Aristotelianism. Champions an approach based on Aristotle's "Rhetoric." (HB)
Appreciating Music: An Active Approach
ERIC Educational Resources Information Center
Levin, Andrew R.; Pargas, Roy P.
2005-01-01
A particularly innovative use of laptops is to enhance the music appreciation experience. Group listening and discussion, in combination with a new Web-based application, lead to deeper understanding of classical music. ["Appreciating Music: An Active Approach" was written with Joshua Austin.
Estimating Causal Effects in Mediation Analysis Using Propensity Scores
ERIC Educational Resources Information Center
Coffman, Donna L.
2011-01-01
Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…
Comparing Health Education Approaches in Textbooks of Sixteen Countries
ERIC Educational Resources Information Center
Carvalho, Graca S.; Dantas, Catarina; Rauma, Anna-Liisa; Luzi, Daniela; Ruggieri, Roberta; Bogner, Franz; Geier, Christine; Caussidier, Claude; Berger, Dominique; Clement, Pierre
2008-01-01
Classically, health education has provided mainly factual knowledge about diseases and their prevention. This educational approach is within the so called Biomedical Model (BM). It is based on pathologic (Pa), curative (Cu) and preventive (Pr) conceptions of health. In contrast, the Health Promotion (HP) approach of health education intends to…
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
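A minimal sketch of the idea, assuming a hypothetical set of contractive affine maps and an illustrative perturbation scale (not the authors' exact IFSs):

```python
# Hedged sketch: perturb an individual with a chaos-game sample from a 2-D IFS
# instead of a Gaussian/Cauchy draw. Maps and scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# A few contractive affine maps z -> A z + b generating a fractal attractor.
MAPS = [(np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.0, 0.0])),
        (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.5, 0.0])),
        (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5]))]

def ifs_mutation(x, scale=0.1, n_iter=20):
    z = rng.random(2)
    for _ in range(n_iter):                  # chaos game on the IFS attractor
        A, b = MAPS[rng.integers(len(MAPS))]
        z = A @ z + b
    x = x.copy()                             # individual with >= 2 genes
    idx = rng.choice(len(x), size=2, replace=False)
    x[idx] += scale * (z - 0.5)              # centered fractal perturbation
    return x
```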
Sumner, Isaiah; Iyengar, Srinivasan S
2007-10-18
We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method that combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and employing quantum wavepacket ab initio dynamics to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicities.
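For orientation, the classical limit of such a construction is the familiar route from a velocity correlation function to a vibrational density of states; the sketch below shows only that standard step, not the paper's cumulative flux/velocity correlation function.

```python
# Hedged sketch: VDOS as the Fourier transform of the velocity autocorrelation
# computed from a (quantum-)classical trajectory. Windowing is illustrative.
import numpy as np

def vdos(vel, dt):
    # vel: (n_steps, n_atoms, 3) velocities along the trajectory
    n = vel.shape[0]
    v = vel.reshape(n, -1)
    acf = np.array([(v[:n - k] * v[k:]).sum() / (n - k) for k in range(n // 2)])
    acf /= acf[0]                                    # normalize C(0) = 1
    freq = np.fft.rfftfreq(len(acf), d=dt)
    return freq, np.abs(np.fft.rfft(acf * np.hanning(len(acf))))
```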
NASA Astrophysics Data System (ADS)
Gerstmayr, Johannes; Irschik, Hans
2008-12-01
In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been sufficiently verified against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, more seriously, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be gained when applying the proposed modifications. The numerical examples show a very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages compared to classical beam elements regarding energy-momentum conservation.
Physics of automated driving in framework of three-phase traffic theory.
Kerner, Boris S
2018-04-01
We have revealed physical features of automated driving in the framework of the three-phase traffic theory for which there is no fixed time headway to the preceding vehicle. A comparison with the classical model approach to automated driving for which an automated driving vehicle tries to reach a fixed (desired or "optimal") time headway to the preceding vehicle has been made. It turns out that automated driving in the framework of the three-phase traffic theory can exhibit the following advantages in comparison with the classical model of automated driving: (i) The absence of string instability. (ii) Considerably smaller speed disturbances at road bottlenecks. (iii) Automated driving vehicles based on the three-phase theory can decrease the probability of traffic breakdown at the bottleneck in mixed traffic flow consisting of human driving and automated driving vehicles; on the contrary, even a single automated driving vehicle based on the classical approach can provoke traffic breakdown at the bottleneck in mixed traffic flow.
ERIC Educational Resources Information Center
Fernandino, Leonardo; Iacoboni, Marco
2010-01-01
The embodied cognition approach to the study of the mind proposes that higher order mental processes such as concept formation and language are essentially based on perceptual and motor processes. Contrary to the classical approach in cognitive science, in which concepts are viewed as amodal, arbitrary symbols, embodied semantics argues that…
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, both the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands of predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors or from the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
Integration of heterogeneous data for classification in hyperspectral satellite imagery
NASA Astrophysics Data System (ADS)
Benedetto, J.; Czaja, W.; Dobrosotskaya, J.; Doster, T.; Duke, K.; Gillis, D.
2012-06-01
As new remote sensing modalities emerge, it becomes increasingly important to find more suitable algorithms for fusion and integration of different data types for the purposes of target/anomaly detection and classification. Typical techniques that deal with this problem are based on performing detection/classification/segmentation separately in chosen modalities, and then integrating the resulting outcomes into a more complete picture. In this paper we provide a broad analysis of a new approach, based on creating fused representations of the multimodal data, which can then be subjected to analysis by means of state-of-the-art classifiers or detectors. In this scenario we consider hyperspectral imagery combined with spatial information. Our approach involves machine learning techniques based on analysis of joint data-dependent graphs and their associated diffusion kernels. The significant eigenvectors of the derived fused graph Laplace operator then form the new representation, which provides integrated features from the heterogeneous input data. We compare these fused approaches with analysis of integrated outputs of spatial and spectral graph methods.
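A minimal sketch of the fused-representation step, assuming Gaussian kernels on spectral and spatial distances with illustrative bandwidths (the paper's graphs and diffusion kernels may differ):

```python
# Hedged sketch: one joint graph on spectral+spatial affinities, leading
# Laplacian eigenvectors used as integrated features for any classifier.
import numpy as np
from scipy.linalg import eigh

def fused_features(spectra, coords, n_feats=10, sigma_s=1.0, sigma_x=1.0):
    # Joint affinity = product of spectral and spatial Gaussian kernels.
    d_spec = ((spectra[:, None] - spectra[None, :]) ** 2).sum(-1)
    d_spat = ((coords[:, None] - coords[None, :]) ** 2).sum(-1)
    W = np.exp(-d_spec / sigma_s**2) * np.exp(-d_spat / sigma_x**2)
    L = np.diag(W.sum(1)) - W                        # fused graph Laplacian
    _, vecs = eigh(L, subset_by_index=[1, n_feats])  # skip trivial eigenvector
    return vecs                                      # integrated features
```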
A reductionist perspective on quantum statistical mechanics: Coarse-graining of path integrals.
Sinitskiy, Anton V; Voth, Gregory A
2015-09-07
Computational modeling of the condensed phase based on classical statistical mechanics has been rapidly developing over the last few decades and has yielded important information on various systems containing up to millions of atoms. However, if a system of interest contains important quantum effects, well-developed classical techniques cannot be used. One way of treating finite temperature quantum systems at equilibrium has been based on Feynman's imaginary time path integral approach and the ensuing quantum-classical isomorphism. This isomorphism is exact only in the limit of infinitely many classical quasiparticles representing each physical quantum particle. In this work, we present a reductionist perspective on this problem based on the emerging methodology of coarse-graining. This perspective allows for the representations of one quantum particle with only two classical-like quasiparticles and their conjugate momenta. One of these coupled quasiparticles is the centroid particle of the quantum path integral quasiparticle distribution. Only this quasiparticle feels the potential energy function. The other quasiparticle directly provides the observable averages of quantum mechanical operators. The theory offers a simplified perspective on quantum statistical mechanics, revealing its most reductionist connection to classical statistical physics. By doing so, it can facilitate a simpler representation of certain quantum effects in complex molecular environments.
On simulations of rarefied vapor flows with condensation
NASA Astrophysics Data System (ADS)
Bykov, Nikolay; Gorbachev, Yuriy; Fyodorov, Stanislav
2018-05-01
Results of direct simulation Monte Carlo computations of 1D spherical and 2D axisymmetric expansions of condensing water vapor into vacuum are presented. Two models, based on the kinetic approach and on the size-corrected classical nucleation theory, are employed for the simulations. The difference in the obtained results is discussed, and the advantages of the kinetic approach in comparison with the modified classical theory are demonstrated. The impact of clusterization on flow parameters is observed when the volume fraction of clusters in the expansion region exceeds 5%. Comparison of the simulation data with the experimental results demonstrates good agreement.
Stott, Clifford; Drury, John
2016-04-01
This article explores the origins and ideology of classical crowd psychology, a body of theory reflected in contemporary popularised understandings such as those of the 2011 English 'riots'. This article argues that during the nineteenth century, the crowd came to symbolise a fear of 'mass society' and that 'classical' crowd psychology was a product of these fears. Classical crowd psychology pathologised, reified and decontextualised the crowd, offering the ruling elites a perceived opportunity to control it. We contend that classical theory misrepresents crowd psychology and survives in contemporary understanding because it is ideological. We conclude by discussing how classical theory has been supplanted in academic contexts by an identity-based crowd psychology that restores the meaning to crowd action, replaces it in its social context and in so doing transforms theoretical understanding of 'riots' and the nature of the self.
Integrative Approaches to Evaluating Neurotoxicity Data for Risk Assessment.
Risk assessment classically has been based on single adverse outcomes identified as the Lowest Observable Adverse Effect Level (LOAEL) or the highest dose level in a credible study producing a No Observable Adverse Effect Level (NOAEL). While this approach has been useful overal...
Tomographic PIV: particles versus blobs
NASA Astrophysics Data System (ADS)
Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien
2014-08-01
We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
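For concreteness, one multiplicative (MART-style) sweep over such a linear forward model might look as follows; the relaxation constant and matrix layout are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of one MART pass: W is the (n_pixels x n_unknowns) weight
# matrix built from PSF samples, I the recorded pixel intensities, E the
# reconstructed intensities (near single-voxel particles in this approach).
import numpy as np

def mart_sweep(W, I, E, mu=1.0, eps=1e-12):
    for i in range(W.shape[0]):          # one multiplicative update per pixel
        proj = W[i] @ E                  # current forward projection
        if proj > eps:
            E *= (I[i] / proj) ** (mu * W[i])
    return E
```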
Hybrid quantum-classical modeling of quantum dot devices
NASA Astrophysics Data System (ADS)
Kantner, Markus; Mittnenzweig, Markus; Koprucki, Thomas
2017-11-01
The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semiclassical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: it enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way, that the resulting hybrid system obeys the fundamental axioms of (non)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with the thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.
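As a sketch of the quantum half of such a coupling (with generic placeholder operators, not the paper's device model), the right-hand side of a Lindblad master equation:

```python
# Hedged sketch: drho/dt = -i[H, rho] + sum of Lindblad dissipators. In the
# hybrid scheme this would be coupled to a classical van Roosbroeck solver
# through carrier-capture rates; operators here are generic placeholders.
import numpy as np

def lindblad_rhs(rho, H, jump_ops):
    drho = -1j * (H @ rho - rho @ H)                 # coherent evolution
    for L in jump_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho
```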
Advances in studies of disease-navigating webs: Sarcoptes scabiei as a case study
2014-01-01
The discipline of epidemiology is the study of the patterns, causes and effects of health and disease conditions in defined animal populations. It is the key to evidence-based medicine, which is one of the cornerstones of public health. One of the important facets of epidemiology is disease-navigating webs (disease-NW) through which zoonotic and multi-host parasites in general move from one host to another. Epidemiology in this context includes (i) classical epidemiological approaches based on the statistical analysis of disease prevalence and distribution and, more recently, (ii) genetic approaches with approximations of disease-agent population genetics. Both approaches, classical epidemiology and population genetics, are useful for studying disease-NW. However, both have strengths and weaknesses when applied separately, which, unfortunately, is too often current practice. In this paper, we use Sarcoptes scabiei mite epidemiology as a case study to show how important an integrated approach can be in understanding disease-NW and subsequent disease control.
NASA Astrophysics Data System (ADS)
Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.
2001-12-01
Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They need fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel/independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.
A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale
Pérez Sánchez, Carlos Javier
2014-01-01
Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has been mainly considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response. The approach is conceptually simple and it has a low computational cost. Both informative and non-informative scenarios are considered. In case no initial information is available, the results are in line with the classical methodology, but providing more information on the measures of agreement. For the informative case, some guidelines are presented to elicit the prior distribution. The approach has been applied to two applications related to schizophrenia diagnosis and sensory analysis.
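A minimal sketch of the Monte Carlo idea for one such measure (Cohen's kappa for two raters on a binary scale), assuming a Dirichlet prior on the joint classification table; the counts and prior below are illustrative.

```python
# Hedged sketch: posterior of Cohen's kappa via Dirichlet-multinomial draws.
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([[40, 5], [7, 48]])     # hypothetical rater-by-rater table
alpha = np.ones(4)                        # uniform (non-informative) prior

draws = rng.dirichlet(alpha + counts.ravel(), size=10_000).reshape(-1, 2, 2)
po = draws[:, 0, 0] + draws[:, 1, 1]               # observed agreement
pe = (draws.sum(2) * draws.sum(1)).sum(1)          # chance agreement
kappa = (po - pe) / (1 - pe)
print(np.mean(kappa), np.percentile(kappa, [2.5, 97.5]))  # posterior summary
```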
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
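A minimal sketch of the computation, assuming x observed exceedances in n samples and a 95th-percentile standard (so compliance means an exceedance rate p <= 0.05):

```python
# Hedged sketch: Beta prior + binomial likelihood gives a Beta(a+x, b+n-x)
# posterior for the exceedance rate; "confidence of compliance" is the
# posterior mass below the standard. Jeffreys' prior uses a = b = 0.5.
from scipy.stats import beta

def confidence_of_compliance(x, n, p0=0.05, a=0.5, b=0.5):
    # Posterior probability that the exceedance rate is within the standard.
    return beta.cdf(p0, a + x, b + n - x)

print(confidence_of_compliance(x=1, n=30))  # e.g., 1 exceedance in 30 samples
```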
Discrete-time Quantum Walks via Interchange Framework and Memory in Quantum Evolution
NASA Astrophysics Data System (ADS)
Dimcovic, Zlatko
One of the newer and rapidly developing approaches in quantum computing is based on "quantum walks," which are quantum processes on discrete space that evolve in either discrete or continuous time and are characterized by mixing of components at each step. The idea emerged in analogy with the classical random walks and stochastic techniques, but these unitary processes are very different even as they have intriguing similarities. This thesis is concerned with the study of discrete-time quantum walks. The original motivation from classical Markov chains required, for discrete-time quantum walks, that one add an auxiliary Hilbert space, unrelated to the one in which the system evolves, in order to be able to mix components in that space and then take the evolution steps accordingly (based on the state in that space). This additional, "coin," space is very often an internal degree of freedom like spin. We have introduced a general framework for the construction of discrete-time quantum walks in a close analogy with the classical random walks with memory that is rather different from the standard "coin" approach. In this method there is no need to bring in a different degree of freedom, while the full state of the system is still described in the direct product of spaces (of states). The state can be thought of as an arrow pointing from the previous to the current site in the evolution, representing the one-step memory. The next step is then controlled by a single local operator assigned to each site in the space, acting quite like a scattering operator. This allows us to probe and solve some problems of interest that have not had successful approaches with "coined" walks. We construct and solve a walk on the binary tree, a structure of great interest that, until our result, lacked an explicit discrete-time quantum walk due to difficulties in managing the coin spaces necessary in the standard approach. Beyond algorithmic interests, the model based on memory allows one to explore effects of history on the quantum evolution and the subtle emergence of classical features as "memory" is explicitly kept for additional steps. We construct and solve a walk with an additional correlation step, finding interesting new features. On the other hand, the fact that the evolution is driven entirely by a local operator, not involving additional spaces, enables us to choose the Fourier transform as an operator completely controlling the evolution. This in turn allows us to combine the quantum walk approach with Fourier transform based techniques, something decidedly not possible in classical computational physics. We are developing a formalism for building networks manageable by walks constructed with this framework, based on the surprising efficiency of our framework in discovering the internals of a simple network that we have so far solved. Finally, in line with our expectation that the field of quantum walks can take cues from the rich history of development of classical stochastic techniques, we establish starting points for work on non-Abelian quantum walks, with a particular quantum-walk analog of the classical "card shuffling," the walk on the permutation group. In summary, this thesis presents a new framework for the construction of discrete-time quantum walks, employing and exploring the memoried nature of unitary evolution. It is applied to fully solving two problems: a walk on the binary tree, and an exploration of the quantum-to-classical transition with increased correlation length (history).
It is then used for simple network discovery, and to lay the groundwork for analysis of complex networks, based on the combined power of efficient exploration of the Hilbert space (as a walk mixing components) and Fourier transformation (since we can choose this for the evolution operator). We hope to establish this as a general technique, as its power would be unmatched by any approaches available in classical computing. We also looked at the promising and challenging prospect of walks on non-Abelian structures by setting up the problem of "quantum card shuffling," a quantum walk on the permutation group. Relation to other work is thoroughly discussed throughout, along with an examination of the context of our work and overviews of our current and future work.
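For readers unfamiliar with the standard construction being contrasted here, a minimal sketch of a coined (Hadamard) walk on the line; this is the textbook "coin" approach, not the thesis's memory-based framework.

```python
# Hedged sketch: state indexed by (position, coin); one step = coin flip
# followed by a coin-conditioned shift. Produces the characteristic
# two-peaked, ballistically spreading distribution.
import numpy as np

n = 101                                   # odd number of sites, start in middle
psi = np.zeros((n, 2), dtype=complex)
psi[n // 2, 0] = 1.0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

for _ in range(40):
    psi = psi @ H.T                       # mix coin components at each site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]          # coin 0 moves right
    shifted[:-1, 1] = psi[1:, 1]          # coin 1 moves left
    psi = shifted

prob = (abs(psi) ** 2).sum(axis=1)        # position distribution after 40 steps
```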
Semiclassical propagator of the Wigner function.
Dittrich, Thomas; Viviescas, Carlos; Sandoval, Luis
2006-02-24
Propagation of the Wigner function is studied on two levels of semiclassical propagation: one based on the Van Vleck propagator, the other on phase-space path integration. Leading quantum corrections to the classical Liouville propagator take the form of a time-dependent quantum spot. Its oscillatory structure depends on whether the underlying classical flow is elliptic or hyperbolic. It can be interpreted as the result of interference of a pair of classical trajectories, indicating how quantum coherences are to be propagated semiclassically in phase space. The phase-space path-integral approach allows for a finer resolution of the quantum spot in terms of Airy functions.
Hybrid Quantum-Classical Approach to Quantum Optimal Control.
Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu
2017-04-14
A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
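Schematically, the loop might look as follows; `query_simulator` is a hypothetical stand-in for the evolution-and-measurement step on the quantum device, and plain gradient ascent is an illustrative choice of classical update.

```python
# Hedged sketch of the hybrid quantum-classical optimization loop.
import numpy as np

def query_simulator(controls, target=np.array([0.3, -0.7])):
    # Placeholder for the quantum step: in the scheme, the device evolves
    # under `controls` and measurement statistics yield the fitness and its
    # gradient. A toy quadratic stands in here so the loop runs end to end.
    diff = controls - target
    return 1.0 - diff @ diff, -2.0 * diff

def hybrid_optimal_control(controls, lr=0.1, tol=1e-6, max_iter=500):
    for _ in range(max_iter):
        fitness, grad = query_simulator(controls)   # "quantum" evaluation
        controls = controls + lr * grad             # classical update step
        if np.linalg.norm(grad) < tol:
            break
    return controls

print(hybrid_optimal_control(np.zeros(2)))
```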
Classical Limit and Quantum Logic
NASA Astrophysics Data System (ADS)
Losada, Marcelo; Fortin, Sebastian; Holik, Federico
2018-02-01
The analysis of the classical limit of quantum mechanics usually focuses on the state of the system. The general idea is to explain the disappearance of the interference terms of quantum states appealing to the decoherence process induced by the environment. However, in these approaches it is not explained how the structure of quantum properties becomes classical. In this paper, we consider the classical limit from a different perspective. We consider the set of properties of a quantum system and we study the quantum-to-classical transition of its logical structure. The aim is to open the door to a new study based on dynamical logics, that is, logics that change over time. In particular, we appeal to the notion of hybrid logics to describe semiclassical systems. Moreover, we consider systems with many characteristic decoherence times, whose sublattices of properties become distributive at different times.
Paraconsistent Reasoning for OWL 2
NASA Astrophysics Data System (ADS)
Ma, Yue; Hitzler, Pascal
A four-valued description logic has been proposed to reason over inconsistent description logic knowledge bases. This approach has the distinct advantage that it can be implemented by invoking classical reasoners, keeping the same complexity as under the classical semantics. However, this approach has so far only been studied for the basic description logic ALC. In this paper, we further study how to extend the four-valued semantics to the more expressive description logic SROIQ, which underlies the forthcoming revision of the Web Ontology Language, OWL 2, and also investigate how it fares when adapted to tractable description logics including EL++, DL-Lite, and Horn-DLs. We define the four-valued semantics along the same lines as for ALC and show that we can retain most of the desired properties.
Schulz, Katja; Peyre, Marisa; Staubach, Christoph; Schauer, Birgit; Schulz, Jana; Calba, Clémentine; Häsler, Barbara; Conraths, Franz J
2017-03-07
Surveillance of Classical Swine Fever (CSF) should not only focus on livestock, but must also include wild boar. To prevent disease transmission into commercial pig herds, it is therefore vital to have knowledge about the disease status in wild boar. In the present study, we performed a comprehensive evaluation of alternative surveillance strategies for Classical Swine Fever (CSF) in wild boar and compared them with the currently implemented conventional approach. The evaluation protocol was designed using the EVA tool, a decision support tool to help in the development of an economic and epidemiological evaluation protocol for surveillance. To evaluate the effectiveness of the surveillance strategies, we investigated their sensitivity and timeliness. Acceptability was analysed and finally, the cost-effectiveness of the surveillance strategies was determined. We developed 69 surveillance strategies for comparative evaluation between the existing approach and the novel proposed strategies. Sampling only within sub-adults resulted in a better acceptability and timeliness than the currently implemented strategy. Strategies that were completely based on passive surveillance performance did not achieve the desired detection probability of 95%. In conclusion, the results of the study suggest that risk-based approaches can be an option to design more effective CSF surveillance strategies in wild boar.
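One quantitative building block behind such sensitivity comparisons is the probability of detecting at least one infected animal under a design prevalence; a minimal sketch with illustrative prevalence and test-sensitivity values (not figures from the study):

```python
# Hedged sketch: detection probability of a sampling strategy under a
# design prevalence and imperfect test sensitivity.
def detection_probability(n_samples, prevalence=0.02, test_sensitivity=0.9):
    p_pos = prevalence * test_sensitivity        # per-animal detection chance
    return 1 - (1 - p_pos) ** n_samples

# Samples needed to reach the 95% detection probability targeted above.
n = 1
while detection_probability(n) < 0.95:
    n += 1
print(n)
```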
Prudlo, Johannes; Bißbort, Charlotte; Glass, Aenne; Grossmann, Annette; Hauenstein, Karlheinz; Benecke, Reiner; Teipel, Stefan J
2012-09-01
The aim of this work was to investigate white-matter microstructural changes within and outside the corticospinal tract in classical amyotrophic lateral sclerosis (ALS) and in lower motor neuron (LMN) ALS variants by means of diffusion tensor imaging (DTI). We investigated 22 ALS patients and 21 age-matched controls utilizing a whole-brain approach with a 1.5-T scanner for DTI. The patient group comprised 15 classical ALS and seven LMN ALS-variant patients (progressive muscular atrophy, flail arm and flail leg syndrome). Disease severity was measured by the revised version of the functional rating scale. White matter fractional anisotropy (FA) was assessed using tract-based spatial statistics (TBSS) and a region of interest (ROI) approach. We found significant FA reductions in motor and extra-motor cerebral fiber tracts in classical ALS and in the LMN ALS-variant patients compared to controls. The voxel-based TBSS results were confirmed by the ROI findings. The white matter damage correlated with disease severity in the patient group and was found in a similar distribution, but to a lesser extent, in the LMN ALS-variant subgroup. ALS and LMN ALS variants are multisystem degenerations. DTI shows the potential to support an earlier diagnosis, particularly in LMN ALS variants. The statistically identical findings of white matter lesions in classical ALS and LMN variants as ascertained by DTI further underline that these variants should be regarded as part of the ALS spectrum.
Silva, George; Poirot, Laurent; Galetto, Roman; Smith, Julianne; Montoya, Guillermo; Duchateau, Philippe; Pâques, Frédéric
2011-01-01
The importance of safer approaches for gene therapy has been underscored by a series of severe adverse events (SAEs) observed in patients involved in clinical trials for Severe Combined Immune Deficiency Disease (SCID) and Chronic Granulomatous Disease (CGD). While a new generation of viral vectors is in the process of replacing the classical gamma-retrovirus-based approach, a number of strategies have emerged based on non-viral vectorization and/or targeted insertion aimed at achieving safer gene transfer. Currently, these methods display lower efficacies than viral transduction, although many of them can yield more than 1% engineered cells in vitro. Nuclease-based approaches, wherein an endonuclease is used to trigger site-specific genome editing, can significantly increase the percentage of targeted cells. These methods therefore provide a real alternative to classical gene transfer as well as gene editing. However, the first endonuclease to be in the clinic today is not used for gene transfer, but to inactivate a gene (CCR5) required for HIV infection. Here, we review these alternative approaches, with a special emphasis on meganucleases, a family of naturally occurring rare-cutting endonucleases, and speculate on their current and future potential.
ERIC Educational Resources Information Center
Pandey, Anjali
2012-01-01
This article calls for a rethinking of pure process-based approaches in the teaching of second language writers in the middle school classroom. The author provides evidence from a detailed case study of the writing of a Korean middle school student in a U.S. school setting to make a case for rethinking the efficacy of classic process-based…
Harmonic oscillators and resonance series generated by a periodic unstable classical orbit
NASA Technical Reports Server (NTRS)
Kazansky, A. K.; Ostrovsky, Valentin N.
1995-01-01
The presence of an unstable periodic classical orbit allows one to introduce the decay time as a purely classical quantity: the inverse of the Lyapunov index which characterizes the orbit instability. The Uncertainty Relation gives the corresponding resonance width, which is proportional to the Planck constant. A more elaborate analysis is based on the parabolic equation method, where the problem is effectively reduced to a multidimensional harmonic oscillator with time-dependent frequency. The resonances form series in the complex energy plane that are equidistant in the direction perpendicular to the real axis. The applications of the general approach to various problems in atomic physics are briefly exposed.
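In the harmonic (parabolic-barrier) approximation around the unstable orbit, this structure can be written explicitly; a sketch, with λ the Lyapunov index of the orbit and E_0 the orbit energy:

```latex
% Sketch under the stated harmonic approximation: a ladder of resonances,
% equidistant in the imaginary direction of the complex energy plane.
E_n \approx E_0 - i\,\hbar\lambda\left(n + \tfrac{1}{2}\right),
\qquad n = 0, 1, 2, \ldots,
\qquad \Gamma_n \equiv -2\,\mathrm{Im}\,E_n = \hbar\lambda\,(2n + 1)
```

so consecutive resonances are separated by ħλ along the imaginary axis, and each width is proportional to the Planck constant, as stated.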
Modern versus Tradition: Are There Two Different Approaches to Reading of the Confucian Classics?
ERIC Educational Resources Information Center
Cheng, Chung-yi
2016-01-01
How to read the Confucian Classics today? Scholars with philosophical training usually emphasize that the philosophical approach, in comparison with the classicist and historical ones, is the best way to read the Confucian Classics, for it can dig out as much intellectual resources as possible from the classical texts in order to show their modern…
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R^2_indiv or the Kendall's τ at the individual level, and the R^2_trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the R^2_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R^2_trial. The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optional adjustment of the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature.
Special Report: Rhetorical Criticism: The State of the Art.
ERIC Educational Resources Information Center
Leff, Michael C., Ed.
1980-01-01
The seven articles in this journal issue survey and assess the art of rhetorical criticism based on evidence derived from critical practice. The first five articles analyze the literature subsumed with certain approaches to rhetorical criticism and are arranged in the chronological order of the emergence of the approach: neo-classical criticism,…
The Bipolar Approach: A Model for Interdisciplinary Art History Courses.
ERIC Educational Resources Information Center
Calabrese, John A.
1993-01-01
Describes a college level art history course based on the opposing concepts of Classicism and Romanticism. Contends that all creative work, such as film or architecture, can be categorized according to this bipolar model. Includes suggestions for objects to study and recommends this approach for art education at all education levels. (CFR)
Guzik, Przemyslaw; Piekos, Caroline; Pierog, Olivia; Fenech, Naiman; Krauze, Tomasz; Piskorski, Jaroslaw; Wykretowicz, Andrzej
2018-05-01
We compared a classic ECG-derived approach with a mobile approach to heart rate variability (HRV) measurement. Twenty-nine young, healthy adult volunteers underwent a simultaneous recording of heart rate using an ECG and a chest heart rate monitor at supine rest, during mental stress, and during active standing. The mean RR interval, the Standard Deviation of Normal-to-Normal (SDNN) RR intervals, and the Root Mean Square of the Successive Differences (RMSSD) between RR intervals were computed in 168 pairs of 5-minute epochs by in-house software on a PC (only sinus beats) and by the mobile application "ELITEHRV" on a smartphone (no beat type identification). ECG analysis showed that 33.9% of the recordings contained at least one non-sinus beat or artefact; the mobile app did not report this. The mean RR intervals were significantly longer (p = 0.0378), while SDNN (p = 0.0001) and RMSSD (p = 0.0199) were smaller for the mobile approach. Measures of identical HRV parameters by ECG-based and mobile approaches are not equivalent.
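For reference, the two time-domain statistics being compared are simple functions of the RR series (in ms); the sketch below assumes sinus-beat filtering, which the mobile app lacked, has already been done upstream.

```python
# Hedged sketch of the two time-domain HRV statistics, on a toy RR series.
import numpy as np

def sdnn(rr):
    return np.std(rr, ddof=1)                   # overall variability

def rmssd(rr):
    return np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability

rr = np.array([812, 798, 830, 845, 820, 810], dtype=float)  # toy RR series (ms)
print(np.mean(rr), sdnn(rr), rmssd(rr))
```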
Analysing causal structures with entropy
NASA Astrophysics Data System (ADS)
Weilenmann, Mirjam; Colbeck, Roger
2017-11-01
A central question for causal inference is to decide whether a set of correlations fits a given causal structure. In general, this decision problem is computationally infeasible and hence several approaches have emerged that look for certificates of compatibility. Here, we review several such approaches based on entropy. We bring together the key aspects of these entropic techniques with unified terminology, filling several gaps and establishing new connections, all illustrated with examples. We consider cases where unobserved causes are classical, quantum and post-quantum, and discuss what entropic analyses tell us about the difference. This difference has applications to quantum cryptography, where it can be crucial to eliminate the possibility of classical causes. We discuss the achievements and limitations of the entropic approach in comparison to other techniques and point out the main open problems.
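A minimal sketch of how such entropic certificates are checked in practice: estimate Shannon entropies of variable subsets from a joint distribution and test a linear inequality (here submodularity, which every distribution satisfies; structure-specific inequalities are tested the same way).

```python
# Hedged sketch: subset entropies from a joint distribution of A, B, C and a
# linear entropy-inequality check. Distribution is a random illustration.
import numpy as np

def H(p, axes):
    # Entropy of the marginal over the variables (axes) kept.
    marg = p.sum(axis=tuple(i for i in range(p.ndim) if i not in axes))
    marg = marg[marg > 0]
    return -(marg * np.log2(marg)).sum()

p = np.random.default_rng(2).random((2, 2, 2))
p /= p.sum()                                 # joint distribution of A, B, C
lhs = H(p, {0, 1}) + H(p, {1, 2})            # H(AB) + H(BC)
rhs = H(p, {0, 1, 2}) + H(p, {1})            # H(ABC) + H(B)
print(lhs >= rhs - 1e-12)                    # submodularity certificate
```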
An Efficient Numerical Approach for Nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin; Vedula, Prakash
2009-03-01
Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for the quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
Developing students’ ideas about lens imaging: teaching experiments with an image-based approach
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2017-07-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.
Teaching Biochemistry at a Medical Faculty with a Problem-Based Learning System.
ERIC Educational Resources Information Center
Rosing, Jan
1997-01-01
Highlights the differences between classical teaching methods and problem-based learning. Describes the curriculum and problem-based approach of the Faculty of Medicine at the Maastricht University and gives an overview of the implementation of biochemistry in the medical curriculum. Discusses the procedure for student assessment and presents…
NASA Technical Reports Server (NTRS)
Baker, John; Thorpe, Ira
2012-01-01
Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.
Nichols, James D.; Hines, James E.
2002-01-01
We first consider the estimation of the finite rate of population increase or population growth rate, λ_i, using capture-recapture data from open populations. We review estimation and modelling of λ_i under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λ_i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γ_i, of Pradel (1996). We review estimation of γ_i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λ_i and γ_i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
Baudrot, Virgile; Preux, Sara; Ducrot, Virginie; Pave, Alain; Charles, Sandrine
2018-02-06
Toxicokinetic-toxicodynamic (TKTD) models, such as the General Unified Threshold model of Survival (GUTS), provide a consistent process-based framework compared to classical dose-response models to analyze both time- and concentration-dependent data sets. However, the extent to which GUTS models (Stochastic Death (SD) and Individual Tolerance (IT)) lead to a better fit than a classical dose-response model at a given target time (TT) has been poorly investigated. Our paper highlights that GUTS estimates are generally more conservative and have a reduced uncertainty, through smaller credible intervals, for the studied data sets than classical TT approaches. Also, GUTS models enable estimating any x% lethal concentration at any time (LC(x,t)), and provide biological information on the internal processes occurring during the experiments. While both GUTS-SD and GUTS-IT models outcompete classical TT approaches, choosing one preferentially to the other is still challenging. Indeed, the estimates of survival rate over time and LC(x,t) are very close between both models, but our study also points out that the joint posterior distributions of SD model parameters are sometimes bimodal, while two parameters of the IT model seem strongly correlated. Therefore, the selection between these two models has to be supported by the experimental design and the biological objectives, and this paper provides some insights to drive this choice.
Experimental quantum annealing: case study involving the graph isomorphism problem.
Zick, Kenneth M; Shehab, Omar; French, Matthew
2015-06-08
Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log₂ N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
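For contrast with the compact Hamiltonian mentioned above, a minimal sketch of the standard, unreduced QUBO encoding of graph isomorphism (penalty weights illustrative) shows where the N² variable count comes from:

```python
# Standard QUBO baseline for graph isomorphism: x[i,v] = 1 iff vertex i of G1
# maps to vertex v of G2; one-hot penalties enforce a permutation and a
# penalty is added whenever a mapped vertex pair disagrees on adjacency.
import itertools
import numpy as np

def gi_qubo(A1, A2, penalty=2.0):
    n = len(A1)
    idx = lambda i, v: i * n + v
    Q = np.zeros((n * n, n * n))
    for i in range(n):
        for v in range(n):
            # linear part of both one-hot constraints (row i and column v)
            Q[idx(i, v), idx(i, v)] += -2.0 * penalty
        for v, w in itertools.combinations(range(n), 2):
            Q[idx(i, v), idx(i, w)] += 2.0 * penalty  # i mapped to two images
            Q[idx(v, i), idx(w, i)] += 2.0 * penalty  # two vertices on image i
    for i, j in itertools.combinations(range(n), 2):
        for v, w in itertools.permutations(range(n), 2):
            if A1[i][j] != A2[v][w]:                  # adjacency mismatch
                Q[idx(i, v), idx(j, w)] += penalty
    return Q

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])       # path graph P3
print(gi_qubo(A, A).shape)                            # (9, 9): N^2 variables
```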
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.
Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-09-01
Calculating the number of confidently identified proteins and estimating false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further adds to the challenge, and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that particularly show when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic as well as a novel target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true-positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications, and is readily implemented in proteomics analysis software.
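The picked strategy itself is compact enough to sketch (made-up accessions and scores; the "REV_" decoy prefix is an assumption for illustration, not necessarily the ProteomicsDB convention):

```python
# Minimal sketch of "picked" target-decoy protein FDR: for each target/decoy
# pair only the higher-scoring member survives; a running decoy/target ratio
# then estimates the FDR down the score-sorted list.
def picked_fdr(scores):
    """scores: protein accession -> best peptide score; decoys use 'REV_'."""
    picked = []
    for acc, s in scores.items():
        if acc.startswith("REV_"):
            continue
        d = scores.get("REV_" + acc, float("-inf"))
        picked.append((max(s, d), s >= d))        # (score, is_target)
    picked.sort(key=lambda p: -p[0])
    out, targets, decoys = [], 0, 0
    for score, is_target in picked:
        targets += is_target
        decoys += not is_target
        out.append((score, decoys / max(targets, 1)))  # running FDR estimate
    return out

demo = {"P1": 0.9, "REV_P1": 0.2, "P2": 0.4, "REV_P2": 0.7, "P3": 0.8, "REV_P3": 0.1}
for score, fdr in picked_fdr(demo):
    print(score, fdr)
```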
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by joint activity of neurons. This novel approach to quantum information is based on a representation of quantum mechanics as a version of classical signal theory which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept given in our model by a density operator can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations.
NASA Technical Reports Server (NTRS)
Wallis, Graham B.
1989-01-01
Some features of two recent approaches to two-phase potential flow are presented. The first approach is based on a set of progressive examples that can be analyzed using common techniques, such as conservation laws, and taken together appear to lead in the direction of a general theory. The second approach is based on variational methods, a classical approach to conservative mechanical systems that has a respectable history of application to single-phase flows. This latter approach, exemplified by several recent papers by Geurst, appears generally to be consistent with the former approach, at least in those cases for which it is possible to obtain comparable results. Each approach has a justifiable theoretical base and is self-consistent. Moreover, both approaches appear to give the right prediction for several well-defined situations.
A Managerial Approach to Compensation
ERIC Educational Resources Information Center
Wolfe, Arthur V.
1975-01-01
The article examines the major external forces constraining equitable employee compensation, sets forth the classical employee compensation assumptions, suggests somewhat more realistic employee compensation assumptions, and proposes guidelines based on analysis of these external constraints and assumptions. (Author)
Using Rasch Analysis to Inform Rating Scale Development
ERIC Educational Resources Information Center
Van Zile-Tamsen, Carol
2017-01-01
The use of surveys, questionnaires, and rating scales to measure important outcomes in higher education is pervasive, but reliability and validity information is often based on problematic Classical Test Theory approaches. Rasch Analysis, based on Item Response Theory, provides a better alternative for examining the psychometric quality of rating…
Reveal quantum correlation in complementary bases
Wu, Shengjun; Ma, Zhihao; Chen, Zhihua; Yu, Sixia
2014-01-01
An essential feature of genuine quantum correlation is the simultaneous existence of correlation in complementary bases. We reveal this feature of quantum correlation by defining measures based on invariance under a basis change. For a bipartite quantum state, the classical correlation is the maximal correlation present in a certain optimum basis, while the quantum correlation is characterized as a series of residual correlations in the mutually unbiased bases. Compared with other approaches to quantify quantum correlation, our approach gives information-theoretical measures that directly reflect the essential feature of quantum correlation.
NASA Technical Reports Server (NTRS)
Tsue, Yasuhiko
1994-01-01
A general framework for a time-dependent variational approach in terms of squeezed coherent states is constructed, with the aim of describing quantal systems by means of classical mechanics, including higher-order quantal effects, with the aid of the canonicity conditions developed in time-dependent Hartree-Fock theory. The Maslov phase occurring in a semi-classical quantization rule is investigated in this framework. In the limit of a semi-classical approximation in this approach, it is definitely shown that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.
GoWeb: a semantic search engine for the life science web.
Dietze, Heiko; Schroeder, Michael
2009-10-01
Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtIvE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving on an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
Analysis of the Temperature and Strain-Rate Dependences of Strain Hardening
NASA Astrophysics Data System (ADS)
Kreyca, Johannes; Kozeschnik, Ernst
2018-01-01
A classical constitutive modeling-based Ansatz for the impact of thermal activation on the stress-strain response of metallic materials is compared with the state parameter-based Kocks-Mecking model. The predicted functional dependencies suggest that, in the first approach, only the dislocation storage mechanism is a thermally activated process, whereas, in the second approach, only the mechanism of dynamic recovery is. In contradiction to each of these individual approaches, our analysis and comparison with experimental evidence shows that thermal activation contributes both to dislocation generation and annihilation.
Extreme value analysis in biometrics.
Hüsler, Jürg
2009-04-01
We review some approaches of extreme value analysis in the context of biometrical applications. The classical extreme value analysis is based on iid random variables. Two different general methods are applied, which will be discussed together with biometrical examples. Different estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered where the data are possibly dependent, where a non-stationary behavior is observed in the data, or where the observations are not univariate. A few open problems are also stated.
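A minimal sketch of the classical iid block-maxima workflow referred to above, using simulated rather than biometrical data:

```python
# Fit a generalized extreme value (GEV) distribution to block maxima and read
# off a return level; data are simulated for illustration only.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
daily = rng.normal(10.0, 2.0, size=(200, 365))     # 200 "years" of iid data
maxima = daily.max(axis=1)                         # annual (block) maxima

shape, loc, scale = genextreme.fit(maxima)
print(shape, loc, scale)
print(genextreme.ppf(0.99, shape, loc=loc, scale=scale))  # 100-block return level
```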
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to first achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
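The variational principle at the core of such hybrid eigensolvers can be illustrated with a deliberately tiny classical simulation (a one-qubit toy model, not the paper's hierarchy or channel model):

```python
# Toy variational eigensolver: H = Z + 0.5*X on one qubit, ansatz
# |psi(theta)> = Ry(theta)|0>, classical outer loop minimizes the energy.
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return psi @ H @ psi                                    # <psi|H|psi>

res = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(res.fun, np.linalg.eigvalsh(H)[0])   # variational vs exact ground energy
```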
Cognitive Radios Exploiting Gray Spaces via Compressed Sensing
NASA Astrophysics Data System (ADS)
Wieruch, Dennis; Jung, Peter; Wirth, Thomas; Dekorsy, Armin; Haustein, Thomas
2016-07-01
We suggest an interweave cognitive radio system with a gray space detector, which properly identifies a small fraction of unused resources within an active band of a primary user system such as 3GPP LTE. The gray space detector can thereby cope with frequency fading holes and distinguish them from inactive resources. Different approaches to the gray space detector are investigated: the conventional reduced-rank least squares method as well as the compressed sensing-based orthogonal matching pursuit and basis pursuit denoising algorithms. In addition, the gray space detector is compared with the classical energy detector. Simulation results present the receiver operating characteristic at several SNRs and the detection performance over further aspects such as base station system load for practical false alarm rates. The results show that, especially for practical false alarm rates, the compressed sensing algorithms are more suitable than the classical energy detector and the reduced-rank least squares approach.
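Of the detectors compared above, orthogonal matching pursuit is the easiest to sketch (dictionary, sparsity level, and sizes are illustrative):

```python
# Minimal orthogonal matching pursuit: greedily pick the atom most correlated
# with the residual, re-fit by least squares, repeat k times.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-fit, update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)                       # unit-norm dictionary
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
print(np.nonzero(omp(A, A @ x_true, 3))[0])          # expect [5, 37, 80]
```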
Ensembles and Experiments in Classical and Quantum Physics
NASA Astrophysics Data System (ADS)
Neumaier, Arnold
A philosophically consistent axiomatic approach to classical and quantum mechanics is given. The approach realizes a strong formal implementation of Bohr's correspondence principle. In all instances, classical and quantum concepts are fully parallel: the same general theory has a classical realization and a quantum realization. Extending the "probability via expectation" approach of Whittle to noncommuting quantities, this paper defines quantities, ensembles, and experiments as mathematical concepts and shows how to model complementarity, uncertainty, probability, nonlocality and dynamics in these terms. The approach carries no connotation of unlimited repeatability; hence it can be applied to unique systems such as the universe. Consistent experiments provide an elegant solution to the reality problem, confirming the orthodox Copenhagen interpretation's insistence that there is nothing but ensembles, while avoiding its elusive reality picture. The weak law of large numbers explains the emergence of classical properties for macroscopic systems.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
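The undulation problem that motivates this work can be reproduced with standard interpolants (this sketch uses a generic shape-preserving interpolant as the contrast, not the control-theoretic splines developed here):

```python
# A classical cubic spline rings on step-like monotone data, while a
# shape-preserving interpolant (PCHIP) does not; data are illustrative.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.arange(8.0)
y = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])  # monotone step data
xs = np.linspace(0.0, 7.0, 701)

cubic = CubicSpline(x, y)(xs)
pchip = PchipInterpolator(x, y)(xs)
print(cubic.min(), cubic.max())   # undershoots below 0 and overshoots above 1
print(pchip.min(), pchip.max())   # stays within [0, 1]
```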
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In the existing studies, many typical game models, such as the prisoner's dilemma, battle of the sexes, Hawk-Dove game, have been extensively explored by using quantization approach. Along a similar method, here several game models of opinion formations will be quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme of converting classical games to quantum versions. Our results show that the quantization can fascinatingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.
Extended reactance domain algorithms for DoA estimation with ESPAR antennas
NASA Astrophysics Data System (ADS)
Harabi, F.; Akkar, S.; Gharsallah, A.
2016-07-01
Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction of arrival (DoA) estimation of narrowband sources through an electronically steerable parasitic array radiator (ESPAR) antenna. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time and, also, an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation through only some linear operations. The DoA estimation algorithms developed on this new approach show good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be reached, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC algorithm and the RD-PM algorithm. The asymptotic behaviours of the proposed DoA estimators are analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The conducted simulations testify to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.
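For readers unfamiliar with subspace DoA estimation, a minimal sketch of plain MUSIC on a uniform linear array (not the RD/ESPAR variant, and with illustrative geometry and SNR) may help:

```python
# Plain MUSIC: eigendecompose the sample covariance, scan a steering grid
# against the noise subspace, and read peaks of the pseudospectrum.
import numpy as np

M, d, T = 8, 0.5, 400                       # sensors, spacing (wavelengths), snapshots
true = np.deg2rad([-10.0, 12.0])            # true source directions

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

rng = np.random.default_rng(2)
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = steering(true) @ S + N
R = X @ X.conj().T / T                      # sample covariance matrix

_, V = np.linalg.eigh(R)
En = V[:, : M - 2]                          # noise subspace (smallest eigenvalues)
grid = np.deg2rad(np.linspace(-90, 90, 721))
spec = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
peaks = np.where(is_peak)[0] + 1
best = peaks[np.argsort(spec[peaks])[-2:]]
print(np.rad2deg(np.sort(grid[best])))      # close to [-10, 12]
```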
Biogeochemical behaviour and bioremediation of uranium in waters of abandoned mines.
Mkandawire, Martin
2013-11-01
The discharges of uranium and associated radionuclides, as well as heavy metals and metalloids, from waste and tailing dumps in abandoned uranium mining and processing sites pose contamination risks to surface water and groundwater. Although many more mines are being planned for nuclear energy purposes, most abandoned uranium mines are a legacy of the uranium production that fuelled the arms race during the cold war of the last century. Since the end of the cold war, there have been efforts to rehabilitate the mining sites, initially using classical remediation techniques based on heavy chemical and civil engineering. Recently, bioremediation technology has been sought as an alternative to the classical approach for reasons that include: (a) the high number of sites requiring remediation; (b) the economic implications of running and maintaining the facilities, due to high energy and workforce demands; and (c) the pattern and characteristics of contaminant discharges in most former uranium mining and processing sites, which prevent the use of classical methods. This review discusses risks of uranium contamination from abandoned uranium mines from the biogeochemical point of view, and the potential and limitations of uranium bioremediation as an alternative to the classical approach in abandoned uranium mining and processing sites.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
NASA Astrophysics Data System (ADS)
Dür, Wolfgang; Lamprecht, Raphael; Heusler, Stefan
2017-07-01
A long-range quantum communication network is among the most promising applications of emerging quantum technologies. We discuss the potential of such a quantum internet for the secure transmission of classical and quantum information, as well as theoretical and experimental approaches and recent advances to realize them. We illustrate the involved concepts such as error correction, teleportation or quantum repeaters and consider an approach to this topic based on catchy visualizations as a context-based, modern treatment of quantum theory at high school.
Aben, Nanne; Vis, Daniel J; Michaut, Magali; Wessels, Lodewyk F A
2016-09-01
Clinical response to anti-cancer drugs varies between patients. A large portion of this variation can be explained by differences in molecular features, such as mutation status, copy number alterations, methylation and gene expression profiles. We show that the classic approach for combining these molecular features (Elastic Net regression on all molecular features simultaneously) results in models that are almost exclusively based on gene expression. The gene expression features selected by the classic approach are difficult to interpret as they often represent poorly studied combinations of genes, activated by aberrations in upstream signaling pathways. To utilize all data types in a more balanced way, we developed TANDEM, a two-stage approach in which the first stage explains response using upstream features (mutations, copy number, methylation and cancer type) and the second stage explains the remainder using downstream features (gene expression). Applying TANDEM to 934 cell lines profiled across 265 drugs (GDSC1000), we show that the resulting models are more interpretable, while retaining the same predictive performance as the classic approach. Using the more balanced contributions per data type as determined with TANDEM, we find that response to MAPK pathway inhibitors is largely predicted by mutation data, while predicting response to DNA damaging agents requires gene expression data, in particular SLFN11 expression. TANDEM is available as an R package on CRAN (for more information, see http://ccb.nki.nl/software/tandem). Contact: m.michaut@nki.nl or l.wessels@nki.nl. Supplementary data are available at Bioinformatics online.
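The two-stage idea can be sketched schematically (synthetic data and sklearn's Elastic Net, not the GDSC1000 pipeline):

```python
# Stage 1 fits Elastic Net on upstream features; stage 2 explains the
# residual with downstream (expression) features; predictions are summed.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
n = 300
upstream = rng.standard_normal((n, 20))      # e.g. mutations/CNA/methylation
expression = rng.standard_normal((n, 50))    # downstream gene expression
response = upstream[:, 0] + 0.5 * expression[:, 3] + 0.1 * rng.standard_normal(n)

stage1 = ElasticNetCV(cv=5).fit(upstream, response)
residual = response - stage1.predict(upstream)
stage2 = ElasticNetCV(cv=5).fit(expression, residual)

pred = stage1.predict(upstream) + stage2.predict(expression)
print(np.corrcoef(pred, response)[0, 1])     # combined two-stage fit quality
```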
Memory preservation made prestigious but easy
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Debus, Christina; Sandhaus, Philipp
2011-01-01
Preserving memories combined with story-telling, whether in photo books for multiple images or in high-quality products such as one or a few images printed on canvas or mounted on acrylic as wall decorations, is gradually becoming more popular than classical 4×6 prints and classical silver halide posters. Digital printing via electrophotography and ink jet is increasingly replacing classical silver halide technology as the dominant production technology for these kinds of products. Maintaining a consistent and comparable quality of output is becoming more challenging than using silver halide paper for both prints and posters. This paper describes a unique approach that combines desktop-based software to initiate a compelling project with online capabilities to finalize and optimize that project in a community process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books will be presented.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
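A minimal sketch of the naive Bayesian variant with smoothed histograms (synthetic two-feature data, not MERIS/AATSR channels):

```python
# Per-feature histograms, post-processed by Gaussian smoothing, approximate
# the class-conditional densities; the naive independence assumption then
# combines them under Bayes' rule.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
clear = rng.normal(0.2, 0.10, (5000, 2))     # two "channel" features per class
cloud = rng.normal(0.6, 0.15, (5000, 2))
edges = np.linspace(-0.5, 1.5, 41)

def smoothed_hist(x):
    h, _ = np.histogram(x, bins=edges, density=True)
    return gaussian_filter1d(h, sigma=1) + 1e-9  # smoothing as post-processing

pdf_clear = [smoothed_hist(clear[:, k]) for k in range(2)]
pdf_cloud = [smoothed_hist(cloud[:, k]) for k in range(2)]

def p_cloud(obs, prior=0.5):
    b = np.clip(np.digitize(obs, edges) - 1, 0, len(edges) - 2)
    num = prior * np.prod([pdf_cloud[k][b[k]] for k in range(2)])
    den = num + (1 - prior) * np.prod([pdf_clear[k][b[k]] for k in range(2)])
    return num / den

print(p_cloud(np.array([0.65, 0.55])), p_cloud(np.array([0.15, 0.25])))
```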
Quantum Vertex Model for Reversible Classical Computing
NASA Astrophysics Data System (ADS)
Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng
We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic with one direction corresponding to computational time, and with transverse boundaries storing the computation's input and output. The model displays no finite temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.
1987-04-01
The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.
Quantization and Quantum-Like Phenomena: A Number Amplitude Approach
NASA Astrophysics Data System (ADS)
Robinson, T. R.; Haven, E.
2015-12-01
Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance; second quantization and the number operator in social interactions, population dynamics and financial trading; and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.
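The phase-symmetry characterization alluded to above rests on standard relations, stated here for reference (not as the authors' derivation):

```latex
% Creation/annihilation operators characterized purely by their behaviour
% under a phase change, with the number operator as the generator.
\begin{align}
  [\,\hat{a},\hat{a}^{\dagger}\,] &= 1, \qquad \hat{N} = \hat{a}^{\dagger}\hat{a},\\
  e^{i\varphi\hat{N}}\,\hat{a}\,e^{-i\varphi\hat{N}} &= e^{-i\varphi}\,\hat{a}.
\end{align}
```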
Team-Based Learning in Anatomy: An Efficient, Effective, and Economical Strategy
ERIC Educational Resources Information Center
Vasan, Nagaswami S.; DeFouw, David O.; Compton, Scott
2011-01-01
Team-based learning (TBL) strategy is being adopted in medical education to implement interactive small group learning. We have modified classical TBL to fit our curricular needs and approach. Anatomy lectures were replaced with TBL sessions that required preparation of assigned content-specific discussion topics (referred to in the text as "discussion…
ERIC Educational Resources Information Center
Psycharis, Sarantos
2016-01-01
The computational experiment approach considers models as the fundamental instructional units of Inquiry Based Science and Mathematics Education (IBSE) and STEM Education, where the model takes the place of the "classical" experimental set-up and simulation replaces the experiment. Argumentation in IBSE and STEM education is related to the…
Melzer, Nina; Wittenburg, Dörte; Repsilber, Dirk
2013-01-01
In this study the benefit of metabolome-level analysis for the prediction of genetic value of three traditional milk traits was investigated. Our proposed approach consists of three steps: First, milk metabolite profiles are used to predict three traditional milk traits of 1,305 Holstein cows. Two regression methods, both enabling variable selection, are applied to identify important milk metabolites in this step. Second, the prediction of these important milk metabolites from single nucleotide polymorphisms (SNPs) enables the detection of SNPs with significant genetic effects. Finally, these SNPs are used to predict milk traits. The observed precision of predicted genetic values was compared to the results observed for the classical genotype-phenotype prediction using all SNPs or a reduced SNP subset (reduced classical approach). To enable a comparison between SNP subsets, a special invariable evaluation design was implemented. SNPs close to or within known quantitative trait loci (QTL) were determined. This enabled us to determine whether detected important SNP subsets were enriched in these regions. The results show that our approach can lead to genetic value prediction but requires less than 1% of the total number of 40,317 SNPs. Additionally, significantly more important SNPs in known QTL regions were detected using our approach compared to the reduced classical approach. Concluding, our approach allows a deeper insight into the associations between the different levels of the genotype-phenotype map (genotype-metabolome, metabolome-phenotype, genotype-phenotype).
Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach
Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam
2014-01-01
The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non-classical lifespan effects.
Skiera, Christina; Steliopoulos, Panagiotis; Kuballa, Thomas; Diehl, Bernd; Holzgrabe, Ulrike
2014-05-01
Indices like acid value, peroxide value, and saponification value play an important role in quality control and identification of lipids. Requirements on these parameters are given by the monographs of the European Pharmacopoeia. ¹H NMR spectroscopy provides a fast and simple alternative to these classical approaches. In the present work a new ¹H NMR approach to determine the acid value is described. The method was validated using a statistical approach based on a variance components model. The performance under repeatability and in-house reproducibility conditions was assessed. We applied this ¹H NMR assay to a wide range of different fatty oils. A total of 305 oil and fat samples were examined by both the classical and the NMR method. Except for hard fat, the data obtained by the two methods were in good agreement. The ¹H NMR method was adapted to analyse waxes and oleyl oleate. Furthermore, the effect of the solvent and, in the case of castor oil, the effect of the oil matrix on line broadening and chemical shift of the carboxyl group signal are discussed.
Electrolyte and Acid-Base Disturbances in End-Stage Liver Disease: A Physiopathological Approach.
Jiménez, José Víctor; Carrillo-Pérez, Diego Luis; Rosado-Canto, Rodrigo; García-Juárez, Ignacio; Torre, Aldo; Kershenobich, David; Carrillo-Maravilla, Eduardo
2017-08-01
Electrolyte and acid-base disturbances are frequent in patients with end-stage liver disease; the underlying physiopathological mechanisms are often complex and represent a diagnostic and therapeutic challenge to the physician. Usually, these disorders do not develop in compensated cirrhotic patients, but with the onset of the classic complications of cirrhosis such as ascites, renal failure, spontaneous bacterial peritonitis and variceal bleeding, multiple electrolyte and acid-base disturbances emerge. Hyponatremia parallels ascites formation and is a well-known trigger of hepatic encephalopathy; its management in this particular population poses a risky challenge due to the high susceptibility of cirrhotic patients to osmotic demyelination. Hypokalemia is common in the setting of cirrhosis: multiple potassium wasting mechanisms both inherent to the disease and resulting from its management make these patients particularly susceptible to potassium depletion even in the setting of normokalemia. Acid-base disturbances range from classical respiratory alkalosis to high anion gap metabolic acidosis, almost comprising the full acid-base spectrum. Because most electrolyte and acid-base disturbances are managed in terms of their underlying trigger factors, a systematic physiopathological approach to their diagnosis and treatment is required.
Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero
2012-03-01
Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 into the surgical models: sonographically guided-office-based, classic open-day surgery, and wide-awake-office-based. Regression analysis for the turnover time was monitored for assessing stability (R² < 0.26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income to cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and 1-way analysis for identifying thresholds among alternatives. Thirteen sonographically guided-office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly less mean turnover times, income to cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. A true net income break-even point happened after 19.78 sonographically guided-office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last 25% sonographically guided true costs if the last 25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. However, this trial was underpowered. This trial comparing surgical models was underpowered and is inconclusive on turnover times; however, the sonographically guided-office-based approach showed shorter turnover times and better economic results with a quick recoup of the costs of sonographically assisted surgery.
A knowledge-based system for prototypical reasoning
NASA Astrophysics Data System (ADS)
Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.
2015-04-01
In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) and a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given in input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
ERIC Educational Resources Information Center
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
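As one concrete instance of the method's reach, here is a sketch of the transfer matrix computation for a one-dimensional piecewise-constant quantum potential (units ħ = m = 1; all parameters illustrative):

```python
# Transfer matrix method for 1D scattering: 2x2 matrices match (psi, psi') at
# each interface and propagate plane-wave amplitudes across each slab.
import numpy as np

def transmission(E, widths, potentials):
    """Transmission through piecewise-constant slabs, free regions outside."""
    ks = [np.sqrt(2 * complex(E - V)) for V in (0.0, *potentials, 0.0)]
    D = lambda k: np.array([[1, 1], [1j * k, -1j * k]])          # psi, psi' rows
    P = lambda k, L: np.diag([np.exp(1j * k * L), np.exp(-1j * k * L)])
    M = np.eye(2, dtype=complex)
    for j, L in enumerate(widths):
        M = np.linalg.inv(D(ks[j + 1])) @ D(ks[j]) @ M           # cross interface
        M = P(ks[j + 1], L) @ M                                  # propagate slab
    M = np.linalg.inv(D(ks[-1])) @ D(ks[-2]) @ M                 # into right lead
    t = np.linalg.det(M) / M[1, 1]                               # (1, r) -> (t, 0)
    return abs(t) ** 2 * ks[-1].real / ks[0].real

for E in (0.5, 0.9, 1.5):                  # tunnelling below and above a barrier
    print(E, transmission(E, widths=[2.0], potentials=[1.0]))
```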
NASA Astrophysics Data System (ADS)
Schubert, Alexander; Falvo, Cyril; Meier, Christoph
2016-08-01
We present mixed quantum-classical simulations on relaxation and dephasing of vibrationally excited carbon monoxide within a protein environment. The methodology is based on a vibrational surface hopping approach treating the vibrational states of CO quantum mechanically, while all remaining degrees of freedom are described by means of classical molecular dynamics. The CO vibrational states form the "surfaces" for the classical trajectories of protein and solvent atoms. In return, environmentally induced non-adiabatic couplings between these states cause transitions describing the vibrational relaxation from first principles. The molecular dynamics simulation yields a detailed atomistic picture of the energy relaxation pathways, taking the molecular structure and dynamics of the protein and its solvent fully into account. Using the ultrafast photolysis of CO in the hemoprotein FixL as an example, we study the relaxation of vibrationally excited CO and evaluate the role of each of the FixL residues forming the heme pocket.
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solution from the theory of elasticity. It is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and over existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed; it estimates the background using median filtering or the method of bilateral spatial contrast.
New Approaches to the Teaching of the Classics.
ERIC Educational Resources Information Center
Masciantonio, Rudolph, Ed.; Weislogel, Stephen, Ed.
This four-part report of the 1971-72 Classical Association of the Atlantic States Working Committee deals with the rationale for new approaches and curriculums for schools and colleges. Implications of the new approaches for teacher education are also treated. The major section treating new model curriculums and approaches includes discussion of:…
Information categorization approach to literary authorship disputes
NASA Astrophysics Data System (ADS)
Yang, Albert C.-C.; Peng, C.-K.; Yien, H.-W.; Goldberger, Ary L.
2003-11-01
Scientific analysis of the linguistic styles of different authors has generated considerable interest. We present a generic approach to measuring the similarity of two symbolic sequences that requires minimal background knowledge about a given human language. Our analysis is based on word rank order-frequency statistics and phylogenetic tree construction. We demonstrate the applicability of this method to historic authorship questions related to the classic Chinese novel “The Dream of the Red Chamber,” to the plays of William Shakespeare, and to the Federalist papers. This method may also provide a simple approach to other large databases based on their information content.
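The rank order-frequency comparison can be sketched in a few lines (toy strings in place of the actual corpora):

```python
# Build word frequency ranks for two texts and compare the ranks of their
# shared vocabulary with a Spearman correlation.
from collections import Counter
from scipy.stats import spearmanr

def ranks(text):
    counts = Counter(text.lower().split())
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {w: r for r, w in enumerate(ordered, start=1)}

def similarity(t1, t2):
    r1, r2 = ranks(t1), ranks(t2)
    shared = sorted(set(r1) & set(r2))
    rho, _ = spearmanr([r1[w] for w in shared], [r2[w] for w in shared])
    return rho

a = "the cat sat on the mat the cat slept"
b = "the dog sat on the rug the dog barked"
print(similarity(a, b))   # higher rho suggests more similar rank profiles
```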
Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B; Robinson, Terry E; Khamassi, Mehdi
2014-02-01
Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in computational neuroscience studies may be useful.
Faria, Raquel; Gonçalves, João; Dias, Rita
2017-01-01
Neuropsychiatric involvement in systemic lupus erythematosus (NPSLE) is a complex condition that remains poorly understood, and includes heterogeneous manifestations involving both the central and peripheral nervous system, with disabling effects. There are several models to improve NPSLE diagnosis when a neurological syndrome is present. In the last couple of years, the growing knowledge of the role of cytokines and antibodies in NPSLE, as well as the development of new functional imaging techniques, has brought some insights into the physiopathology of the disease, but their validation for clinical use remains undetermined. Furthermore, besides the classic clinical approach, a new tool for screening the 19 NPSLE syndromes has also been developed. Regarding NPSLE therapeutics, there is still no evidence-based treatment approach, but some data support the safety of biological medication when classic treatment fails. Despite the tendency to reclassify SLE patients in clinical and immunological subsets, we hope that these data will inspire medical professionals to approach NPSLE in a manner more tailored to the individual patient.
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
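The two calibration strategies compared above differ only in which variable is regressed on which; a minimal sketch with synthetic standards (all numbers are invented for illustration, not NIST data):

```python
import numpy as np

rng = np.random.default_rng(0)
x_std = np.linspace(1.0, 10.0, 20)    # known calibration standards
y_obs = 2.0 + 0.5 * x_std + rng.normal(0, 0.05, x_std.size)  # instrument readings

# Classical approach: regress observed readings on standards, then invert.
b, a = np.polyfit(x_std, y_obs, 1)    # forward model y = a + b*x
y_new = 4.6                           # a new instrument reading
x_classical = (y_new - a) / b         # the inverse regression step

# Reverse approach: regress standards on readings, no inversion needed.
d, c = np.polyfit(y_obs, x_std, 1)    # reverse model x = c + d*y
x_reverse = c + d * y_new

print(x_classical, x_reverse)
```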
Exact and approximate graph matching using random walks.
Gori, Marco; Maggini, Marco; Sarti, Lorenzo
2005-07-01
In this paper, we propose a general framework for graph matching which is suitable for different problems of pattern recognition. The pattern representation we assume is at the same time highly structured, as in classic syntactic and structural approaches, and of subsymbolic nature with real-valued features, as in connectionist and statistical approaches. We show that random walk based models, inspired by Google's PageRank, give rise to a spectral theory that nicely enhances the graph topological features at node level. As a straightforward consequence, we derive a polynomial algorithm for the classic graph isomorphism problem, under the restriction of dealing with Markovian spectrally distinguishable graphs (MSD), a class of graphs that does not seem to be easily reducible to others proposed in the literature. The experimental results that we found on different test-beds of the TC-15 graph database show that the defined MSD class "almost always" covers the database, and that the proposed algorithm is significantly more efficient than the top-scoring VF algorithm on the same data. Most interestingly, the proposed approach is very well suited to partial and approximate graph matching problems, derived for instance from image retrieval tasks. We consider the objects of the COIL-100 visual collection and provide a graph-based representation, whose node labels contain appropriate visual features. We show that the adoption of classic bipartite graph matching algorithms offers a straightforward generalization of the algorithm given for graph isomorphism and, finally, we report very promising experimental results on the COIL-100 visual collection.
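A rough sketch of the random-walk scoring idea, using a generic PageRank-style power iteration as per-node features; this illustrates the general mechanism only, not the paper's MSD spectral theory:

```python
import numpy as np

def pagerank_signature(adj, d=0.85, n_iter=100):
    """Steady-state random-walk scores used as per-node features."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                  # guard against isolated nodes
    P = adj / out                        # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - d) / n + d * (r @ P)    # PageRank-style update
    return r

# Isomorphic graphs yield the same multiset of node scores, so sorted score
# vectors can be compared to prune candidate matchings.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(np.sort(pagerank_signature(A)))
```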
A STATE-VARIABLE APPROACH FOR PREDICTING THE TIME REQUIRED FOR 50% RECRYSTALLIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. STOUT; ET AL
2000-08-01
It is important to be able to model the recrystallization kinetics in aluminum alloys during hot deformation. The industrially relevant process of hot rolling is an example of where knowing whether or not a material recrystallizes is critical to making a product with the correct properties. Classically, the equations that describe the kinetics of recrystallization predict the time to 50% recrystallization. These equations are largely empirical; they are based on the free energy for recrystallization and a Zener-Hollomon parameter, and have several adjustable exponents to fit the equation to engineering data. We have modified this form of the classical theory, replacing the Zener-Hollomon parameter with a deformation energy increment, a free energy available to drive recrystallization. The advantage of this formulation is that the deformation energy increment is calculated based on the previously determined temperature and strain-rate sensitivity of the constitutive response. We modeled the constitutive response of AA5182 aluminum using a state-variable approach, in which the value of the state variable is a function of the temperature and strain-rate history of deformation. Thus, the recrystallization kinetics is a function of only the state variable and the free energy for recrystallization. There are no adjustable exponents as in the classical theory. Using this approach combined with engineering recrystallization data, we have been able to predict the kinetics of recrystallization in AA5182 as a function of deformation strain rate and temperature.
Procedures for Selecting Items for Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1989-01-01
Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)
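One classical item selection rule reviewed in this literature is maximum Fisher information at the current ability estimate; a minimal sketch under an assumed 2PL model with no guessing (item parameters invented for illustration):

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item (no guessing) at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def select_next_item(theta_hat, a, b, administered):
    """Classical CAT rule: pick the unused item most informative at theta_hat."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf   # mask items already given
    return int(np.argmax(info))

a = np.array([1.2, 0.8, 1.5, 1.0])       # discriminations (illustrative)
b = np.array([-1.0, 0.0, 0.5, 1.5])      # difficulties (illustrative)
print(select_next_item(0.3, a, b, administered={2}))
```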
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
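For reference, the Grover baseline against which the projection method is compared has a closed-form success probability; a small sketch of that standard formula (not the paper's subspace projection method):

```python
import numpy as np

def grover_success_probability(N, k):
    """P(success) after k Grover iterations on an unstructured search of size N."""
    theta = np.arcsin(1.0 / np.sqrt(N))
    return np.sin((2 * k + 1) * theta) ** 2

N = 1024
# Optimal iteration count is roughly (pi/4) * sqrt(N).
k_opt = int(np.floor(np.pi / (4 * np.arcsin(1 / np.sqrt(N)))))
print(k_opt, grover_success_probability(N, 1), grover_success_probability(N, k_opt))
```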
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. As compared to classical design methods, which consist in simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-term tests or experience) to determine the stability of the excavation, the proposed approach is consistent with both short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than those predicted by the stress-based criterion, which is conservative yet constructive for design purposes.
ERIC Educational Resources Information Center
National Council of Teachers of English, Urbana, IL.
New approaches to the teaching of the classics are explored in this collection of articles written by high school, junior college, college, and university literature instructors. The seven articles in the first section of the book discuss linking the classics. Specific topics covered in the articles include using the works of William Golding as a…
An alternative approach to the Boltzmann distribution through the chemical potential
NASA Astrophysics Data System (ADS)
D'Anna, Michele; Job, Georg
2016-05-01
The Boltzmann distribution is one of the most significant results of classical physics. Despite its importance and its wide range of application, at high school level it is mostly presented without any derivation or link to basic ideas. In this contribution we present an approach based on the chemical potential that allows one to derive it directly from the basic idea of thermodynamic equilibrium.
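The gist of a chemical-potential route can be sketched in two steps; this is our reconstruction of the idea named in the abstract, assuming an ideal dilute form of the chemical potential, not the authors' exact derivation:

```latex
% Treat the particles occupying energy level E_i as a dilute "species" i:
\[ \mu_i = E_i + kT \ln(n_i/n_0) \]
% Equilibrium between levels requires equal chemical potentials, \mu_i = \mu_j, so
\[ \frac{n_i}{n_j} = e^{-(E_i - E_j)/kT}
   \quad\Longrightarrow\quad n_i \propto e^{-E_i/kT} \]
```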
Classical molecular dynamics simulation of electronically non-adiabatic processes.
Miller, William H; Cotton, Stephen J
2016-12-22
Both classical and quantum mechanics (as well as hybrids thereof, i.e., semiclassical approaches) find widespread use in simulating dynamical processes in molecular systems. For large chemical systems, however, which involve potential energy surfaces (PES) of general/arbitrary form, it is usually the case that only classical molecular dynamics (MD) approaches are feasible, and their use is thus ubiquitous nowadays, at least for chemical processes involving dynamics on a single PES (i.e., within a single Born-Oppenheimer electronic state). This paper reviews recent developments in an approach which extends standard classical MD methods to the treatment of electronically non-adiabatic processes, i.e., those that involve transitions between different electronic states. The approach treats nuclear and electronic degrees of freedom (DOF) equivalently (i.e., by classical mechanics, thereby retaining the simplicity of standard MD), and provides "quantization" of the electronic states through a symmetrical quasi-classical (SQC) windowing model. The approach is seen to be capable of treating extreme regimes of strong and weak coupling between the electronic states, as well as accurately describing coherence effects in the electronic DOF (including the de-coherence of such effects caused by coupling to the nuclear DOF). A survey of recent applications is presented to illustrate the performance of the approach. Also described is a newly developed variation on the original SQC model (found universally superior to the original) and a general extension of the SQC model to obtain the full electronic density matrix (at no additional cost/complexity).
NASA Astrophysics Data System (ADS)
Dziedzic, Jacek; Mao, Yuezhi; Shao, Yihan; Ponder, Jay; Head-Gordon, Teresa; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-09-01
We present a novel quantum mechanical/molecular mechanics (QM/MM) approach in which a quantum subsystem is coupled to a classical subsystem described by the AMOEBA polarizable force field. Our approach permits mutual polarization between the QM and MM subsystems, effected through multipolar electrostatics. Self-consistency is achieved for both the QM and MM subsystems through a total energy minimization scheme. We provide an expression for the Hamiltonian of the coupled QM/MM system, which we minimize using gradient methods. The QM subsystem is described by the onetep linear-scaling DFT approach, which makes use of strictly localized orbitals expressed in a set of periodic sinc basis functions equivalent to plane waves. The MM subsystem is described by the multipolar, polarizable force field AMOEBA, as implemented in tinker. Distributed multipole analysis is used to obtain, on the fly, a classical representation of the QM subsystem in terms of atom-centered multipoles. This auxiliary representation is used for all polarization interactions between QM and MM, allowing us to treat them on the same footing as in AMOEBA. We validate our method in tests of solute-solvent interaction energies, for neutral and charged molecules, demonstrating the simultaneous optimization of the quantum and classical degrees of freedom. Encouragingly, we find that the inclusion of explicit polarization in the MM part of QM/MM improves the agreement with fully QM calculations.
A toxicity cost function approach to optimal CPA equilibration in tissues.
Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali
2018-02-01
There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10⁻⁶ cm²/s, 1.2 × 10⁻⁶ cm²/s and 1.3 × 10⁻⁶ cm²/s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
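The transport half of such a model reduces to Fick's law in the tissue; a minimal explicit finite-difference sketch at the order of magnitude of the measured diffusivities (slab thickness, grid, and bath schedule are assumptions, and the toxicity cost function itself is omitted):

```python
import numpy as np

# 1D Fick's-law sketch of CPA uptake in a tissue slab, using the diffusivity
# scale reported in the abstract (~1e-6 cm^2/s). Geometry and the bath
# concentration step are illustrative assumptions.
D = 1.0e-6            # cm^2/s, propylene glycol in tissue (order of magnitude)
L = 0.1               # cm, slab half-thickness (assumed)
nx, dt = 51, 0.05     # grid points, time step in seconds (assumed)
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5          # explicit-scheme stability condition

c = np.zeros(nx)                     # normalized CPA concentration profile
c_bath = 0.25                        # first equilibration step (assumed)
for _ in range(int(600 / dt)):       # 10 minutes in the first bath
    c[0] = c_bath                    # boundary held at bath concentration
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = c[-2]                    # zero-flux (symmetry) at slab center
print(c.min(), c.mean())             # interior loading after the step
```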
Larsson, Emanuel; Martin, Sabine; Lazzarini, Marcio; Tromba, Giuliana; Missbach-Guentner, Jeannine; Pinkert-Leetsch, Diana; Katschinski, Dörthe M.; Alves, Frauke
2017-01-01
The small size of the adult and developing mouse heart poses a great challenge for imaging in preclinical research. The aim of the study was to establish a phosphotungstic acid (PTA) ex-vivo staining approach that efficiently enhances the x-ray attenuation of soft tissue to allow high resolution 3D visualization of mouse hearts by synchrotron radiation based μCT (SRμCT) and classical μCT. We demonstrate that SRμCT of PTA stained mouse hearts ex-vivo allows imaging of the cardiac atrium, ventricles, myocardium (especially its fibre structure) and vessel walls in great detail, and furthermore enables the depiction of growth and anatomical changes during distinct developmental stages of hearts in mouse embryos. Our x-ray based virtual histology approach is not limited to SRμCT, as it does not require monochromatic and/or coherent x-ray sources, and, even more importantly, can be combined with conventional histological procedures. Furthermore, it permits volumetric measurements, as we show for the assessment of plaque volumes in the aortic valve region of mice from an ApoE-/- mouse model. Subsequently, Masson-Goldner trichrome staining of paraffin sections of PTA stained samples revealed intact collagen and muscle fibres, and positive staining of CD31 on endothelial cells by immunohistochemistry illustrates that our approach does not prevent immunohistochemical analysis. The feasibility of scanning hearts already embedded in paraffin ensured a 100% correlation between virtual cut sections of the CT data sets and histological heart sections of the same sample, and may in future allow guiding the cutting process to specific regions of interest. In summary, since our CT-based virtual histology approach is a powerful tool for the high resolution 3D depiction of morphological alterations in hearts and embryos and can be combined with classical histological analysis, it may be used in preclinical research to unravel structural alterations of various heart diseases. PMID:28178293
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157
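The internals of MH are not reproduced here, but the fixed-starting-position setting it targets can be illustrated with a plain hash-bucket prefix matcher; everything below is a simplified stand-in for the hash and binary tables:

```python
# Minimal sketch of multi-pattern matching anchored at a fixed starting
# position. Bucketing patterns by length gives O(1) hash probes per length,
# a crude stand-in for MH's hash + binary table structure (assumption).
def build_index(patterns):
    index = {}
    for p in patterns:
        index.setdefault(len(p), set()).add(p)
    return index

def match_prefix(url, index):
    """Return all patterns that match url starting at position 0."""
    hits = []
    for length, bucket in index.items():
        if len(url) >= length and url[:length] in bucket:  # O(1) hash probe
            hits.append(url[:length])
    return hits

idx = build_index(["/api/", "/api/v1/users", "/static/"])
print(match_prefix("/api/v1/users?id=7", idx))
```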
Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings
NASA Technical Reports Server (NTRS)
Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.
2000-01-01
Classical design methods for magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.
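A sketch of the circuit-theory-plus-optimizer idea: size a coil for minimum power at a required axial force, using a simple gap-dominated magnetic circuit. The model, constraint values, and all numbers are illustrative assumptions, not the paper's design or its loss-factor corrections:

```python
import numpy as np
from scipy.optimize import minimize

MU0 = 4e-7 * np.pi
A_POLE, GAP, F_REQ = 4e-4, 5e-4, 200.0   # m^2, m, N (all assumed)
RES_PER_TURN = 0.01                      # ohm per coil turn (assumed)

def power(x):
    n_turns, current = x
    return n_turns * RES_PER_TURN * current**2    # P = I^2 R

def force(x):
    n_turns, current = x
    B = MU0 * n_turns * current / (2 * GAP)       # gap-dominated circuit
    return B**2 * A_POLE / MU0                    # two pole faces

res = minimize(
    power, x0=[200.0, 2.0],
    constraints=[{"type": "ineq", "fun": lambda x: force(x) - F_REQ},
                 # keep flux density below a ~2 T saturation limit (assumed)
                 {"type": "ineq", "fun": lambda x: 2.0 - MU0 * x[0] * x[1] / (2 * GAP)}],
    bounds=[(10, 2000), (0.1, 20)])
print(res.x, power(res.x), force(res.x))
```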
Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui
2018-01-20
We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal-to-noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules: a noise classifier (NC) and a deep denoising autoencoder (DDAE), thus termed NC + DDAE. In a series of comprehensive experiments, we conduct qualitative and quantitative analyses on the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) a normalized covariance measure to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted by nine Mandarin-speaking CI recipients. These nine CI recipients use their own clinical speech processors during testing. The experimental results of the objective evaluation and listening tests indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two compared classical NR techniques, under both matched and mismatched training-testing conditions. When compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise suppression capabilities and gives less distortion for the key speech envelope information, thus improving speech recognition more effectively for Mandarin CI recipients. The results suggest that the proposed deep learning-based NR approach can potentially be integrated into existing CI signal processors to overcome the degradation of speech perception caused by noise.
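The DDAE module's role can be sketched as a plain denoising autoencoder mapping noisy spectral features to clean ones; the layer sizes, training data, and architecture below are placeholders, not the clinical system:

```python
import torch
import torch.nn as nn

# Toy denoising autoencoder: reconstruct clean spectral frames from noisy
# ones. 257-dimensional frames and the random training data are assumptions.
ddae = nn.Sequential(nn.Linear(257, 512), nn.ReLU(),
                     nn.Linear(512, 512), nn.ReLU(),
                     nn.Linear(512, 257))
opt = torch.optim.Adam(ddae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.randn(64, 257)                  # stand-in clean spectra
noisy = clean + 0.3 * torch.randn_like(clean) # additive-noise corruption
for _ in range(100):                          # toy training loop
    opt.zero_grad()
    loss = loss_fn(ddae(noisy), clean)        # map noisy -> clean
    loss.backward()
    opt.step()
print(loss.item())
```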
The Ostrovsky-Vakhnenko equation by a Riemann-Hilbert approach
NASA Astrophysics Data System (ADS)
Boutet de Monvel, Anne; Shepelsky, Dmitry
2015-01-01
We present an inverse scattering transform (IST) approach for the (differentiated) Ostrovsky-Vakhnenko equation. This equation can also be viewed as the short wave model for the Degasperis-Procesi (sDP) equation. Our IST approach is based on an associated Riemann-Hilbert problem, which allows us to give a representation for the classical (smooth) solution, to get the principal term of its long time asymptotics, and also to describe loop soliton solutions. Dedicated to Johannes Sjöstrand with gratitude and admiration.
NASA Astrophysics Data System (ADS)
Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina
Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).
Argasinski, Krzysztof
2006-07-01
This paper contains the basic extensions of classical evolutionary games (multipopulation and density dependent models). It is shown that the classical bimatrix approach is inconsistent with other approaches because it does not depend on the proportion between populations. The main conclusion is that the interspecific proportion parameter is important and must be considered in multipopulation models. The paper provides a synthesis of both extensions (a metasimplex concept) which solves the problem intrinsic in the bimatrix model. It allows us to model interactions among any number of subpopulations, including density dependence effects. We prove that all modern approaches to evolutionary games are closely related. All evolutionary models (except the classical bimatrix approaches) can be reduced to a single population general model by a simple change of variables. Differences between classic bimatrix evolutionary games and a new model dependent on the interspecific proportion are shown by examples.
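A toy Euler integration showing how an interspecific proportion parameter can enter two-population dynamics; the payoff matrices and the proportion update rule are illustrative assumptions, not the paper's metasimplex formulation:

```python
import numpy as np

# Two-population replicator sketch where fitness depends explicitly on the
# interspecific proportion p, the quantity the abstract argues classical
# bimatrix games ignore. All payoff matrices are invented for illustration.
A11 = np.array([[1.0, 0.2], [0.8, 0.4]])   # population 1 vs population 1
A12 = np.array([[0.6, 0.1], [0.9, 0.3]])   # population 1 vs population 2
A21 = np.array([[0.5, 0.9], [0.3, 0.7]])   # population 2 vs population 1
A22 = np.array([[0.4, 0.6], [0.2, 0.8]])   # population 2 vs population 2

x = np.array([0.6, 0.4])   # strategy frequencies in population 1
y = np.array([0.3, 0.7])   # strategy frequencies in population 2
p, dt = 0.2, 0.01          # share of population 1; integration step
for _ in range(5000):
    fx = p * A11 @ x + (1 - p) * A12 @ y   # proportion-weighted fitness
    fy = p * A21 @ x + (1 - p) * A22 @ y
    x += dt * x * (fx - x @ fx)            # replicator updates
    y += dt * y * (fy - y @ fy)
    p += dt * p * (1 - p) * (x @ fx - y @ fy)  # proportion tracks mean fitness
print(x, y, p)
```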
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.
2016-10-01
Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as the evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on the modeling of nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the specific modeling of a real phenomenon, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation is carried out to complete the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to ease the implementation of these algorithms.
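As a reference point for the classical physics-inspired family such a review starts from, here is a minimal simulated annealing sketch on a toy multimodal objective (all parameters illustrative):

```python
import math
import random

# Simulated annealing: a classical physics-inspired meta-heuristic, shown on
# a 1D objective with many local minima. Schedule and step size are arbitrary.
def objective(x):
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(x=5.0, T=10.0, cooling=0.999, steps=20000):
    best, best_f = x, objective(x)
    f = best_f
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)
        fc = objective(cand)
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse moves with probability exp(-delta/T) to escape local minima.
        if fc < f or random.random() < math.exp((f - fc) / T):
            x, f = cand, fc
            if f < best_f:
                best, best_f = x, f
        T *= cooling                     # geometric cooling schedule
    return best, best_f

random.seed(1)
print(simulated_annealing())
```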
Fonteyne, Margot; Gildemyn, Delphine; Peeters, Elisabeth; Mortier, Séverine Thérèse F C; Vercruysse, Jurgen; Gernaey, Krist V; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; De Beer, Thomas
2014-08-01
Classically, endpoint detection during fluid bed drying has been performed using indirect parameters, such as the product temperature or the humidity of the outlet drying air. This paper aims at comparing those classic methods to both in-line moisture and solid-state determination by means of Process Analytical Technology (PAT) tools (Raman and NIR spectroscopy) and a mass balance approach. The six-segmented fluid bed drying system, which is part of a fully continuous from-powder-to-tablet production line (ConsiGma™-25), was used for this study. A theophylline:lactose:PVP (30:67.5:2.5) blend was chosen as the model formulation. For the development of the NIR-based moisture determination model, 15 calibration experiments in the fluid bed dryer were performed. Six test experiments were conducted afterwards, and the product was monitored in-line with NIR and Raman spectroscopy during drying. The results (drying endpoint and residual moisture) obtained via the NIR-based moisture determination model, the classical approach by means of indirect parameters, and the mass balance model were then compared. Our conclusion is that the PAT-based method is the most suited for use in a production set-up. Secondly, the different size fractions of the dried granules obtained during different experiments (fines, yield and oversized granules) were compared separately, revealing differences in both the solid state of theophylline and the moisture content between the different granule size fractions. Copyright © 2014 Elsevier B.V. All rights reserved.
Sverdlov, Serge; Thompson, Elizabeth A.
2013-01-01
In classical quantitative genetics, the correlation between the phenotypes of individuals with unknown genotypes and a known pedigree relationship is expressed in terms of probabilities of IBD states. In existing approaches to the inverse problem where genotypes are observed but pedigree relationships are not, dependence between phenotypes is either modeled as Bayesian uncertainty or mapped to an IBD model via inferred relatedness parameters. Neither approach yields a relationship between genotypic similarity and phenotypic similarity with a probabilistic interpretation corresponding to a generative model. We introduce a generative model for diploid allele effect based on the classic infinite allele mutation process. This approach motivates the concept of IBF (Identity by Function). The phenotypic covariance between two individuals given their diploid genotypes is expressed in terms of functional identity states. The IBF parameters define a genetic architecture for a trait without reference to specific alleles or population. Given full genome sequences, we treat a gene-scale functional region, rather than a SNP, as a QTL, modeling patterns of dominance for multiple alleles. Applications demonstrated by simulation include phenotype and effect prediction and association, and estimation of heritability and classical variance components. A simulation case study of the Missing Heritability problem illustrates a decomposition of heritability under the IBF framework into Explained and Unexplained components. PMID:23851163
De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason
2017-09-19
Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to N_e = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
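The core sparse-linear-algebra step can be illustrated on a toy neutral chain: expected times to absorption solve a single sparse system. The tridiagonal kernel below is a crude stand-in for the Wright-Fisher model, and the conditioning needed for allele age proper is omitted:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# For an absorbing Markov chain with transient-to-transient block Q, the
# expected times to absorption solve (I - Q) t = 1.
n = 100                                  # 2N gene copies (toy scale)
states = np.arange(1, n)                 # transient allele counts 1..n-1
rows, cols, vals = [], [], []
for idx, i in enumerate(states):
    p = i / n
    up = down = p * (1 - p)              # symmetric neutral drift surrogate
    rows.append(idx); cols.append(idx); vals.append(1.0 - up - down)
    if i + 1 < n:                        # moves to 0 or n leave Q (absorbed)
        rows.append(idx); cols.append(idx + 1); vals.append(up)
    if i - 1 > 0:
        rows.append(idx); cols.append(idx - 1); vals.append(down)
Q = sp.csr_matrix((vals, (rows, cols)), shape=(n - 1, n - 1))
t = spla.spsolve(sp.eye(n - 1, format="csr") - Q, np.ones(n - 1))
print(t[0], t[n // 2 - 1])               # expected steps until loss/fixation
```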
Structure and dynamics of the peptide strand KRFK from the thrombospondin TSP-1 in water.
Taleb Bendiab, W; Benomrane, B; Bounaceur, B; Dauchez, M; Krallafa, A M
2018-02-14
Theoretical investigations of a solute in liquid water at normal temperature and pressure can be performed at different levels of theory. Static quantum calculations as well as classical and ab initio molecular dynamics are used to completely explore the conformational space for large solvated molecular systems. In the classical approach, it is essential to describe all of the interactions of the solute and the solvent in detail. Water molecules are very often described as rigid bodies when the most commonly used interaction potentials, such as the SPCE and the TIP4P models, are employed. Recently, a physical model based upon a cluster of rigid water molecules with a tetrahedral architecture (AB₄) was proposed that describes liquid water as a mixture of both TIP4P and SPCE molecular species that occur in the proportions implied by the tetrahedral architecture (one central molecule versus four outer molecules; i.e., 20% TIP4P versus 80% SPCE molecules). In this work, theoretical spectroscopic data for a peptide strand were correlated with the structural properties of the peptide strand solvated in water, based on data calculated using different theoretical approaches and physical models. We focused on a particular peptide strand, KRFK (lysine-arginine-phenylalanine-lysine), found in the thrombospondin TSP-1, due to its interesting properties. As the activity and electronic structure of this system is strongly linked to its structure, we correlated its structure with charge-density maps obtained using different semi-empirical charge Q_eq equations. The structural and thermodynamic properties obtained from classical simulations were correlated with ab initio molecular dynamics (AIMD) data. Structural changes in the peptide strand were rationalized in terms of the motions of atoms and groups of atoms. To achieve this, conformational changes were investigated using calculated infrared spectra for the peptide in the gas phase and in water solvent. The calculated AIMD infrared spectrum for the peptide was correlated with static quantum calculations of the molecular system based on a harmonic approach as well as the VDOS (vibrational density of states) spectra obtained using various classical solvent models (SPCE, TIP4P, and AB₄) and charge maps.
Classical theory of atomic collisions - The first hundred years
NASA Astrophysics Data System (ADS)
Grujić, Petar V.
2012-05-01
Classical calculations of atomic processes started in 1911 with Rutherford's famous evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of Quantum Mechanics in 1925 and its triumphal success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate, and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the behaviour of the single ionization cross section by electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in the 1960s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes like ionization, excitation, detachment, dissociation, etc. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far-reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of the classical theory, the results of his group were able to match many experimental data. The Belgrade group pursued the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. The Riga group, led by Modris Gailitis [3], contributed considerably to the field, as did Valentin Ostrovsky and coworkers from Saint Petersburg, who developed powerful analytical methods within purely classical mechanics [4]. We make an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by experimental evidence. Finally, we discuss the theoretical and epistemological background of the classical calculations and explain why they turned out so successful, despite the essentially quantum nature of atomic and subatomic systems.
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory in two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L²(ℝⁿ), the Heisenberg rule [pᵢ, qⱼ] = -iℏδᵢⱼ with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related to quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle). Moreover, spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Photographs, Foxfire, and Flea-Markets.
ERIC Educational Resources Information Center
Buckley, Mary
An introductory literature course for college sophomores focuses on children's "classics," based on the premise that no book good for children is for children only. This document describes an approach to teaching the "Little House in the Big Woods" series of children's books. Three techniques are suggested for confirming the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khrennikov, Andrei
We present the fundamentals of a prequantum model with hidden variables of the classical field type. In some sense this is a comeback of classical wave mechanics. Our approach can also be considered an incorporation of quantum mechanics into classical signal theory. All quantum averages (including correlations of entangled systems) can be represented as classical signal averages and correlations.
Whitley, Heather D.; Scullard, Christian R.; Benedict, Lorin X.; ...
2014-12-04
Here, we present a discussion of kinetic theory treatments of linear electrical and thermal transport in hydrogen plasmas, for a regime of interest to inertial confinement fusion applications. In order to assess the accuracy of one of the more involved of these approaches, classical Lenard-Balescu theory, we perform classical molecular dynamics simulations of hydrogen plasmas using 2-body quantum statistical potentials and compute both electrical and thermal conductivity from our particle trajectories using the Kubo approach. Our classical Lenard-Balescu results employing the identical statistical potentials agree well with the simulations.
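The Kubo step can be sketched as a Green-Kubo integral over the current autocorrelation; the current series below is synthetic noise standing in for MD output, and all units are illustrative:

```python
import numpy as np

# Green-Kubo sketch: electrical conductivity from the charge-current
# autocorrelation, sigma = V / (3 kB T) * integral of <J(0).J(t)> dt.
kB, T, V, dt = 1.380649e-23, 1.0e5, 1.0e-27, 1.0e-17   # SI units (illustrative)
rng = np.random.default_rng(0)
J = rng.normal(size=(100000, 3))         # stand-in charge-current samples

def autocorr(J, max_lag):
    """<J(0).J(t)> averaged over time origins, for lags 0..max_lag-1."""
    n = len(J)
    return np.array([(J[:n - k] * J[k:]).sum(axis=1).mean()
                     for k in range(max_lag)])

C = autocorr(J, max_lag=200)
sigma = V / (3 * kB * T) * np.trapz(C, dx=dt)  # the Kubo integral
print(sigma)
```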
Construction of the Second Quito Astrolabe Catalogue
NASA Astrophysics Data System (ADS)
Kolesnik, Y. B.
1994-03-01
A method for astrolabe catalogue construction is presented. It is based on classical concepts, but the model of conditional equations for the group reduction is modified, with additional parameters introduced in the stepwise regressions. The chain adjustment is neglected, and the advantages of this approach are discussed. The method has been applied to the data obtained with the astrolabe of the Quito Astronomical Observatory from 1964 to 1983. Various characteristics of the catalogue produced with this method are compared with those obtained by the rigorous classical method. Some improvement in both systematic and random errors is outlined.
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Parimala, Y. G.
2011-12-01
A unique approach has been developed to study patterns in ragas of Carnatic classical music based on artificial neural networks. Ragas in Carnatic music, which have their roots in the Vedic period, have grown on a scientific foundation over thousands of years. However, owing to its vastness and complexity, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitatively and quantitatively. Cognition, comprehension and perception of ragas in Indian classical music have always been subjects of intensive research, and many facets of these have hitherto not been unravelled. This paper is an attempt to view the melakartha ragas from a cognitive perspective using an artificial neural network based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were trained using several neural networks. 100% accurate pattern recognition and classification was obtained using linear regression, TLRN, MLP and RBF networks. The performance of the different network topologies was compared by varying various network parameters.
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν₂ band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, as well as full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room temperature experimental data. The quantum dynamical close-coupling calculations are too time consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the values of the half-width for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
A novel word spotting method based on recurrent neural networks.
Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst
2012-02-01
Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.
Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergey N.; Yachmenev, Andrey
2017-01-01
We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states and it dramatically reduces the computing time as well as the storage and memory requirements when compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. PMID:28000807
Ercan, Serdar; Scerrati, Alba; Wu, Phengfei; Zhang, Jun; Ammirati, Mario
2017-07-01
OBJECTIVE The subtemporal approach is one of the surgical routes used to reach the interpeduncular fossa. Keyhole subtemporal approaches and zygomatic arch osteotomy have been proposed in an effort to decrease the amount of temporal lobe retraction. However, the effects of these modified subtemporal approaches on temporal lobe retraction have never been objectively validated. METHODS A keyhole and a classic subtemporal craniotomy were executed in 4 fresh-frozen silicone-injected cadaver heads. The target was defined as the area bordered by the superior cerebellar artery, the anterior clinoid process, the supraclinoid internal carotid artery, and the posterior cerebral artery. Once the target was fully visualized, the authors evaluated the amount of temporal lobe retraction by measuring the distance between the base of the middle fossa and the temporal lobe. In addition, the volume of the surgical and anatomical corridors was assessed, as was the surgical maneuverability, using navigation and 3D moldings. The same evaluation was conducted after a zygomatic osteotomy was added to the two approaches. RESULTS Temporal lobe retraction was the same in the two approaches evaluated, while the surgical corridor and the maneuverability were both greater in the classic subtemporal approach. CONCLUSIONS The zygomatic arch osteotomy facilitates the maneuverability and the surgical volume in both approaches, but the temporal lobe retraction benefit is confined to the lateral part of the middle fossa skull base and does not result in the retraction necessary to expose the selected target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schubert, Alexander, E-mail: schubert@irsamc.ups-tlse.fr; Meier, Christoph; Falvo, Cyril
2016-08-07
We present mixed quantum-classical simulations of relaxation and dephasing of vibrationally excited carbon monoxide within a protein environment. The methodology is based on a vibrational surface hopping approach treating the vibrational states of CO quantum mechanically, while all remaining degrees of freedom are described by means of classical molecular dynamics. The CO vibrational states form the "surfaces" for the classical trajectories of protein and solvent atoms. In return, environmentally induced non-adiabatic couplings between these states cause transitions describing the vibrational relaxation from first principles. The molecular dynamics simulation yields a detailed atomistic picture of the energy relaxation pathways, taking the molecular structure and dynamics of the protein and its solvent fully into account. Using the ultrafast photolysis of CO in the hemoprotein FixL as an example, we study the relaxation of vibrationally excited CO and evaluate the role of each of the FixL residues forming the heme pocket.
Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests
NASA Astrophysics Data System (ADS)
Shumway, R. H.
2001-10-01
The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
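The classical calibration-then-inversion step has a compact form for a single magnitude; a sketch with synthetic calibration shots (the magnitude-yield coefficients are invented, not Semipalatinsk values):

```python
import numpy as np

# Fit m = a + b*log10(W) on calibration shots, then invert for a new event.
rng = np.random.default_rng(0)
W_cal = np.array([10.0, 20.0, 50.0, 100.0, 150.0])    # kt, known yields
m_cal = 4.45 + 0.75 * np.log10(W_cal) + rng.normal(0, 0.05, 5)

b, a = np.polyfit(np.log10(W_cal), m_cal, 1)          # forward regression
m_new = 5.2                                           # observed magnitude
W_hat = 10 ** ((m_new - a) / b)                       # inverted point estimate
print(W_hat)   # Fieller's method would put a confidence interval around this
```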
Dietze, Klaas; Tucakov, Anna; Engel, Tatjana; Wirtz, Sabine; Depner, Klaus; Globig, Anja; Kammerer, Robert; Mouchantat, Susan
2017-01-05
Non-invasive sampling techniques based on the analysis of oral fluid specimens have gained substantial importance in the field of swine herd management. Methodological advances have focused on endemic viral diseases in commercial pig production. More recently, these approaches have been adapted to non-invasive sampling of wild boar for transboundary animal disease detection, for which such effective population-level sampling methods have not been available. In this study, a rope-in-a-bait based oral fluid sampling technique was tested for detecting classical swine fever virus nucleic acid shedding from experimentally infected domestic pigs. The pigs were separated into two identically treated groups, and the course of the infection differed slightly between groups in terms of the onset of clinical signs and the levels of viral ribonucleic acid detected in blood and oral fluid. The technique was capable of detecting classical swine fever virus nucleic acid from day 7 post infection, coinciding with the first detection in conventional oropharyngeal swab samples from some individual animals. Except for day 7 post infection in the "slower onset group", the chances of classical swine fever virus nucleic acid detection in ropes were identical to or higher than those of individual sampling. With the provided evidence, non-invasive oral fluid sampling at group level can be considered an additional cost-effective detection tool in classical swine fever prevention and control strategies. The proposed methodology is of particular use in production systems with reduced access to veterinary services, such as backyard or scavenging pig production, where it can be integrated into feeding or baiting practices.
NASA Astrophysics Data System (ADS)
Ceballos, G. A.; Hernández, L. F.
2015-04-01
Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, regard for the useful information contained in responses to adjacent stimuli about the spatial location of target symbols. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work motivates the search for information in peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.
A Dialogue on Reclaiming Troubled Youth
ERIC Educational Resources Information Center
Aichhorn, August; Redl, Fritz
2012-01-01
This discussion is drawn from the writings of two eminent founders of strength-based approaches to troubled children and adolescents. August Aichhorn is best known for his classic book, "Wayward Youth," and Fritz Redl as co-author of "Children Who Hate". August Aichhorn and Anna Freud mentored a young educational psychologist, Fritz Redl…
[Functional bowel disorders: impact and limitations of evidence-based medicine].
de Saussure, P; Bertolini, D
2006-09-06
Although tremendous efforts have been made to explore the pathophysiology, classification and therapeutic modalities of functional bowel disorders, these conditions still elude the classical anatomical-clinical approach. This article summarizes recent advances in the field, critically discusses their impact on daily clinical practice and provides some practical recommendations.
Testing Based on Understanding: Implications from Studies of Spatial Ability.
ERIC Educational Resources Information Center
Egan, Dennis E.
1979-01-01
The information-processing approach and results of research on spatial ability are analyzed. Performance consists of a sequence of distinct mental operations that seem general across subjects and can be individually measured. New interpretations are suggested for some classical concepts and procedures in the psychological testing of abilities.…
Spacecraft Formation Control and Estimation Via Improved Relative Motion Dynamics
2017-03-30
statistical (e.g., batch least-squares or Extended Kalman Filter) estimator. In addition, the IROD approach can be applied to classical (ground-based...covariance. Test the viability of IROD solutions by injecting them into precise orbit determination schemes (e.g., various strains of Kalman filters
Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria
ERIC Educational Resources Information Center
Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.
2004-01-01
The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage-can decision process. The roles of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
A Renewed Approach to Undergraduate Worship Leader Education
ERIC Educational Resources Information Center
Hendricks, Allen Sherman
2012-01-01
The church music degree program at Charleston Southern University, based on a European traditional/classical sacred music degree model, has been attracting fewer and fewer students. The last two students pursuing this degree graduated in May 2011. Prior to their graduation, the administration encouraged the music department to investigate…
Equal Employment Legislation: Alternative Means of Compliance.
ERIC Educational Resources Information Center
Daum, Jeffrey W.
Alternative means of compliance available to organizations to bring their manpower uses into line with existing equal employment legislation are discussed in this paper. The first area addressed concerns the classical approach to selection and placement based on testing methods. The second area discussed reviews various nontesting techniques, such…
ERIC Educational Resources Information Center
Hamilton, John D.
2007-01-01
Although the five classic evidence-based medicine steps to make the literature useful in caring for patients sound simple, most clinicians who try this approach do not find it easy. Although the first step--creating an answerable question--is manageable for busy clinicians, tracking down "the best evidence" and critically appraising it for…
Sophistic Ethics in the Technical Writing Classroom: Teaching "Nomos," Deliberation, and Action.
ERIC Educational Resources Information Center
Scott, J. Blake
1995-01-01
Claims that teaching ethics is particularly important to technical writing. Outlines a classical, sophistic approach to ethics based on the theories and pedagogies of Protagoras, Gorgias, and Isocrates, which emphasizes the Greek concept of "nomos," internal and external deliberation, and responsible action. Discusses problems and…
AI-Based Chatterbots and Spoken English Teaching: A Critical Analysis
ERIC Educational Resources Information Center
Sha, Guoquan
2009-01-01
The aim of the various approaches implemented, whether the classical "three Ps" (presentation, practice, and production) or communicative language teaching (CLT), is to achieve communicative competence. Although much of the software developed for teaching spoken English is dressed up to promote interaction, its methodology is largely rooted in tradition.…
Integrated control-system design via generalized LQG (GLQG) theory
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.
1989-01-01
Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large-scale, complex systems, such as high-performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control-law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory is reviewed, including both continuous-time and discrete-time (sampled-data) formulations.
A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.
Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi
2015-12-01
Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs to data-intensive computing or cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring-and-search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by
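As a point of reference, here is a serial (non-MapReduce) sketch of the minimum-description-length score that such a search must evaluate for each candidate parent set; the function and data layout are hypothetical stand-ins for the paper's distributed two-pass computation.

```python
# MDL score of one node given a candidate parent set, from integer-coded
# data (one column per variable); lower scores are better.
import numpy as np

def mdl_score(data, child, parents, arity):
    N = data.shape[0]
    r = arity[child]
    q = int(np.prod([arity[p] for p in parents])) if parents else 1
    counts = np.zeros((q, r))
    for row in data:
        j = 0
        for p in parents:                     # index of the parent configuration
            j = j * arity[p] + row[p]
        counts[j, row[child]] += 1
    log_lik = 0.0
    for j in range(q):
        n_j = counts[j].sum()
        for c in counts[j]:
            if c > 0:
                log_lik += c * np.log(c / n_j)
    penalty = 0.5 * np.log(N) * q * (r - 1)   # description length of the parameters
    return -log_lik + penalty
```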
Management of relapsed/refractory classical Hodgkin lymphoma in transplant-ineligible patients.
Mehta-Shah, Neha; Bartlett, Nancy L
2018-04-12
Addition of brentuximab vedotin, a CD30-targeted antibody-drug conjugate, and the programmed death 1 (PD-1) inhibitors nivolumab and pembrolizumab to the armamentarium for transplant-ineligible relapsed/refractory classical Hodgkin lymphoma has resulted in improved outcomes, including the potential for cure in a small minority of patients. For patients who have failed prior transplant or are unsuitable for dose-intense approaches based on age or comorbidities, an individualized approach with sequential use of single agents such as brentuximab vedotin, PD-1 inhibitors, everolimus, lenalidomide, or conventional agents such as gemcitabine or vinorelbine may result in prolonged survival with a minimal or modest effect on quality of life. Participation in clinical trials evaluating new approaches such as combination immune checkpoint inhibition, novel antibody-drug conjugates, or cellular therapies such as Epstein-Barr virus-directed cytotoxic T lymphocytes and chimeric antigen receptor T cells offer additional options for eligible patients. © 2018 by The American Society of Hematology.
Speckle: tool for diagnosis assistance
NASA Astrophysics Data System (ADS)
Carvalho, O.; Guyot, S.; Roy, L.; Benderitter, M.; Clairac, B.
2006-09-01
In this paper, we present a new approach to the speckle phenomenon. This method is based on the fractal Brownian motion theory and allows the extraction of three stochastic parameters to characterize the speckle pattern. For the first time, we present the results of this method applied to the discrimination of healthy vs. pathologic skin. We also demonstrate, in the case of scleroderma, that this method is more accurate than the classical frequency-domain approach.
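As an illustration of the kind of stochastic parameter such an analysis extracts (the paper's exact estimator is not reproduced here), a fractal scaling exponent can be fitted from the second-order structure function of an intensity profile, S(tau) ~ tau^(2H):

```python
# Illustrative Hurst-type exponent estimate from a 1-D intensity profile.
import numpy as np

def hurst_structure_function(signal, max_lag=50):
    lags = np.arange(1, max_lag)
    S = [np.mean((signal[lag:] - signal[:-lag]) ** 2) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(S), 1)   # log-log regression
    return slope / 2.0   # H = 0.5 corresponds to ordinary Brownian scaling

rng = np.random.default_rng(0)
profile = np.cumsum(rng.standard_normal(5000))   # test signal with H ~ 0.5
print(hurst_structure_function(profile))
```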
Understanding Cryptic Pocket Formation in Protein Targets by Enhanced Sampling Simulations.
Oleinikovas, Vladimiras; Saladino, Giorgio; Cossins, Benjamin P; Gervasio, Francesco L
2016-11-02
Cryptic pockets, that is, sites on protein targets that only become apparent when drugs bind, provide a promising alternative to classical binding sites for drug development. Here, we investigate the nature and dynamical properties of cryptic sites in four pharmacologically relevant targets, while comparing the efficacy of various simulation-based approaches in discovering them. We find that the studied cryptic sites do not correspond to local minima in the computed conformational free-energy landscape of the unliganded proteins. They thus promptly close in all of the molecular dynamics simulations performed, irrespective of the force field used. Temperature-based enhanced sampling approaches, such as parallel tempering, do not improve the situation, as the entropic term does not help in the opening of the sites. The use of fragment probes helps, as in long simulations it occasionally leads to the opening of, and binding to, the cryptic sites. Our observed mechanism of cryptic-site formation is suggestive of an interplay between two classical mechanisms: induced fit and conformational selection. Employing this insight, we developed a novel Hamiltonian replica-exchange-based method, "SWISH" (Sampling Water Interfaces through Scaled Hamiltonians), which, combined with probes, resulted in a promising general approach for cryptic-site discovery. We also addressed the issue of "false positives" and propose a simple approach to distinguish them from druggable cryptic pockets. Our simulations, whose cumulative sampling time was more than 200 μs, help in clarifying the molecular mechanism of pocket formation, providing a solid basis for the choice of an efficient computational method.
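SWISH belongs to the Hamiltonian replica-exchange family; a generic sketch of the Metropolis swap test between two replicas with different (e.g., scaled) Hamiltonians at a common temperature is shown below. The callables U_i and U_j are hypothetical potential-energy functions, not SWISH's actual water-scaling implementation.

```python
# Generic Hamiltonian replica-exchange swap test (Metropolis criterion)
# for two replicas i, j sharing the same inverse temperature beta.
import math
import random

def swap_accepted(U_i, U_j, x_i, x_j, beta):
    """Decide whether to exchange configurations x_i <-> x_j."""
    delta = beta * ((U_i(x_j) + U_j(x_i)) - (U_i(x_i) + U_j(x_j)))
    if delta <= 0.0:                      # downhill swaps are always accepted
        return True
    return random.random() < math.exp(-delta)
```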
Model-based Executive Control through Reactive Planning for Autonomous Rovers
NASA Technical Reports Server (NTRS)
Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola
2004-01-01
This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit, an RWI ATRV Junior at NASA Ames), presenting different IDEA controllers for the same domain and comparing them with more classical approaches. We demonstrate that the approach scales to the complex coordination of functional modules needed for autonomous navigation and exploration.
Students' ideas about prismatic images: teaching experiments for an image-based approach
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2017-05-01
Prismatic refraction is a classic topic in science education. To investigate how undergraduate students think about prismatic dispersion, and to see how they change their thinking when observing dispersed images, five teaching experiments were done and analysed according to the Model of Educational Reconstruction. For projection through a prism, the students used a 'split image projection' conceptualisation. For the view through a prism, this conceptualisation was not fruitful. Based on the observed images, six of seven students changed to a 'diverted image projection' conceptualisation. From a comparison between students' and scientists' ideas, teaching implications are derived for an image-based approach.
Rational design of gene-based vaccines.
Barouch, Dan H
2006-01-01
Vaccine development has traditionally been an empirical discipline. Classical vaccine strategies include the development of attenuated organisms, whole killed organisms, and protein subunits, followed by empirical optimization and iterative improvements. While these strategies have been remarkably successful for a wide variety of viruses and bacteria, these approaches have proven more limited for pathogens that require cellular immune responses for their control. In this review, current strategies to develop and optimize gene-based vaccines are described, with an emphasis on novel approaches to improve plasmid DNA vaccines and recombinant adenovirus vector-based vaccines. Copyright 2006 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Mixed QM/MM molecular electrostatic potentials.
Hernández, B; Luque, F J; Orozco, M
2000-05-01
A new method is presented for the calculation of the Molecular Electrostatic Potential (MEP) in large systems. Based on the mixed Quantum Mechanics/Molecular Mechanics (QM/MM) approach, the method assumes both a quantum and a classical description of the molecule, and the MEP in the space surrounding the molecule is calculated using this dual treatment. The MEP at points close to the molecule is computed using a full QM formalism, while a purely classical evaluation of the MEP is used for points located at large distances from the molecule. The algorithm allows the user to select the desired level of accuracy in the MEP, so that the definition of the regions where the MEP is computed at the classical or QM level is adjusted automatically. The potential use of this QM/MM MEP in molecular modeling studies is discussed.
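For the purely classical far-field branch of such a scheme, the MEP reduces to a point-charge sum, V(r) = sum_i q_i / |r - r_i| in atomic units; a minimal sketch follows (the charges and coordinates below are invented for illustration, not taken from the paper).

```python
# Classical point-charge evaluation of the MEP at a far-field point.
import numpy as np

def classical_mep(point, charges, positions):
    """V(r) = sum_i q_i / |r - r_i|, atomic units (Hartree, bohr)."""
    r = np.linalg.norm(np.asarray(positions) - np.asarray(point), axis=1)
    return np.sum(np.asarray(charges) / r)

# e.g., a TIP3P-like water molecule (charges in e, coordinates in bohr)
q = [-0.834, 0.417, 0.417]
xyz = [[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [-0.45, 1.75, 0.0]]
print(classical_mep([10.0, 0.0, 0.0], q, xyz))
```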
Rule-based spatial modeling with diffusing, geometrically constrained molecules.
Gruenert, Gerd; Ibrahim, Bashar; Lenser, Thorsten; Lohel, Maiko; Hinze, Thomas; Dittrich, Peter
2010-06-07
We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Our simulations show that the obtained dynamics can be fundamentally different from those of simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. We conclude that our approach provides a new general framework filling a gap between approaches with no or rigid spatial representation, like Partial Differential Equations, and specialized coarse-grained spatial simulation systems, like those for DNA or virus capsid self-assembly.
Parenthood and Worrying About Climate Change: The Limitations of Previous Approaches.
Ekholm, Sara; Olofsson, Anna
2017-02-01
The present study considers the correlation between parenthood and worry about the consequences of climate change. Two approaches to gauging people's perceptions of the risks of climate change are compared: the classic approach, which measures risk perception, and the emotion-based approach, which measures feelings toward a risk object. The empirical material is based on a questionnaire survey of 3,529 people in Sweden, of whom 1,376 answered, giving a response rate of 39%. The results show that the correlation of parenthood and climate risk is significant when the emotional aspect is raised, but not when respondents were asked to make cognitive estimates of risk. Parenthood proves significant in all three questions that measure feelings, demonstrating that it is a determinant that serves to increase worry about climate change. © 2016 Society for Risk Analysis.
Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.
Segovia, F; Górriz, J M; Ramírez, J; Phillips, C
2016-01-01
Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake in some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: (i) using a single classifier and a multiple kernel learning approach, and (ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods.
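A hedged sketch of the second combination strategy (ensemble plus majority voting) using scikit-learn stand-ins: PCA and NMF branches mirror two of the named feature extractors, a raw-feature SVM stands in for the Haralick branch, and nothing here reproduces the paper's actual pipeline.

```python
# Three feature-extraction branches combined by hard (majority) voting.
from sklearn.decomposition import PCA, NMF
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_cad_ensemble():
    pca_branch = make_pipeline(PCA(n_components=20), SVC())
    nmf_branch = make_pipeline(NMF(n_components=20, max_iter=500), SVC())
    raw_branch = SVC()   # placeholder for the Haralick-texture branch
    return VotingClassifier(
        [("pca", pca_branch), ("nmf", nmf_branch), ("raw", raw_branch)],
        voting="hard")

# usage: build_cad_ensemble().fit(X_train, y_train)
# (voxel intensities must be non-negative for the NMF branch)
```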
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
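A small numerical sketch (ours, not the paper's) of the lower bound described: the capacity of a binary channel with flip probability p and erasure probability e, evaluated at a uniform input, which is optimal by symmetry.

```python
# C = H(Y) - H(Y|X) = [H2(e) + (1 - e)] - H(1-e-p, p, e), in bits per use.
import numpy as np

def H(probs):
    probs = np.asarray([q for q in probs if q > 0])
    return -np.sum(probs * np.log2(probs))

def capacity_flip_erasure(p, e):
    return H([e, 1 - e]) + (1 - e) - H([1 - e - p, p, e])

print(capacity_flip_erasure(0.0, 0.1))   # pure erasure channel: 1 - e = 0.9
print(capacity_flip_erasure(0.1, 0.0))   # pure BSC: 1 - H2(0.1) ~= 0.531
```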
De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique
2016-10-01
The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), the International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles derived from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify the core knowledge of clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
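A minimal sketch of the selection rule behind H-classics as we read it (a paper counts as a classic if it is cited at least h times, where h is the h-index of the whole retrieved document set); the demo data are invented.

```python
# h-index of a document set and the H-classics subset it induces.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def h_classics(papers):
    """papers: list of (paper_id, citation_count) tuples."""
    h = h_index([c for _, c in papers])
    return [p for p in papers if p[1] >= h]

demo = [("a", 120), ("b", 45), ("c", 3), ("d", 60), ("e", 9)]
print(h_index([c for _, c in demo]))   # 4
print(h_classics(demo))                # papers cited at least 4 times
```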
Stability analysis of spacecraft power systems
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.; Sheble, G. B.; Nelms, R. M.
1990-01-01
The problems in applying standard electric utility models, analyses, and algorithms to the study of the stability of spacecraft power conditioning and distribution systems are discussed. Both single-phase and three-phase systems are considered. Of particular concern are the load and generator models that are used in terrestrial power system studies, as well as the standard assumptions of load and topological balance that lead to the use of the positive sequence network. The standard assumptions regarding relative speeds of subsystem dynamic responses that are made in the classical transient stability algorithm, which forms the backbone of utility-based studies, are examined. The applicability of these assumptions to a spacecraft power system stability study is discussed in detail. In addition to the classical indirect method, the applicability of Liapunov's direct methods to the stability determination of spacecraft power systems is discussed. It is pointed out that while the proposed method uses a solution process similar to the classical algorithm, the models used for the sources, loads, and networks are, in general, more accurate. Some preliminary results are given for a linear-graph, state-variable-based modeling approach to the study of the stability of space-based power distribution networks.
NASA Astrophysics Data System (ADS)
Collins, Robert J.; Donaldson, Ross J.; Dunjko, Vedran; Wallden, Petros; Clarke, Patrick J.; Andersson, Erika; Jeffers, John; Buller, Gerald S.
2014-10-01
Classical digital signatures are commonly used in e-mail, electronic financial transactions and other forms of electronic communications to ensure that messages have not been tampered with in transit, and that messages are transferrable. The security of commonly used classical digital signature schemes relies on the computational difficulty of inverting certain mathematical functions. However, at present there are no such one-way functions which have been proven to be hard to invert. With enough computational resources certain implementations of classical public-key cryptosystems can be, and have been, broken with current technology. It is nevertheless possible to construct information-theoretically secure signature schemes, including quantum digital signature schemes. Quantum signature schemes can be made information-theoretically secure based on the laws of quantum mechanics, while comparable classical protocols require additional resources such as secret communication and a trusted authority. Early demonstrations of quantum digital signatures required quantum memory, rendering them impractical at present. Our present implementation is based on a protocol that does not require quantum memory. It also uses the new technique of unambiguous quantum state elimination. Here we report experimental results for a test-bed system, recorded with a variety of different operating parameters, along with a discussion of aspects of the system's security.
Classical and quantum simulations of warm dense carbon
NASA Astrophysics Data System (ADS)
Whitley, Heather; Sanchez, David; Hamel, Sebastien; Correa, Alfredo; Benedict, Lorin
We have applied classical and DFT-based molecular dynamics (MD) simulations to study the equation of state of carbon in the warm dense matter regime (ρ = 3.7 g/cc, 0.86 eV
MIP sensors--the electrochemical approach.
Malitesta, Cosimino; Mazzotta, Elisabetta; Picca, Rosaria A; Poma, Alessandro; Chianella, Iva; Piletsky, Sergey A
2012-02-01
This review highlights the importance of coupling molecular imprinting technology with methodology based on electrochemical techniques for the development of advanced sensing devices. In recent years, growing interest in molecularly imprinted polymers (MIPs) in the preparation of recognition elements has led researchers to design novel formats for improvement of MIP sensors. Among possible approaches proposed in the literature on this topic, we will focus on the electrosynthesis of MIPs and on less common hybrid technology (e.g. based on electrochemistry and classical MIPs, or nanotechnology). Starting from the early work reported in this field, an overview of the most innovative and successful examples will be reviewed.
An EQT-cDFT approach to determine thermodynamic properties of confined fluids.
Mashayak, S Y; Motevaselian, M H; Aluru, N R
2015-06-28
We present a continuum-based approach to predict the structure and thermodynamic properties of confined fluids at multiple length-scales, ranging from a few angstroms to macroscopic scales. The continuum approach is based on the empirical-potential-based quasi-continuum theory (EQT) and classical density functional theory (cDFT). EQT is a simple and fast approach to predict inhomogeneous density and potential profiles of confined fluids. We use EQT potentials to construct a grand potential functional for cDFT. The EQT-cDFT-based grand potential can be used to predict various thermodynamic properties of confined fluids. In this work, we demonstrate the EQT-cDFT approach by simulating Lennard-Jones fluids, namely methane and argon, confined inside slit-like channels of graphene. We show that EQT-cDFT can accurately predict the structure and thermodynamic properties, such as density profiles, adsorption, local pressure tensor, surface tension, and solvation force, of confined fluids as compared to molecular dynamics simulation results.
NASA Astrophysics Data System (ADS)
Darbandi, Masoud; Abrar, Bagher
2018-01-01
The spectral-line weighted-sum-of-gray-gases (SLW) model is a modern global model for predicting thermal radiation heat transfer in combustion fields. Past SLW model users have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficients. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of the gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields where the gas temperature is very different from the reference temperature. Consequently, the results of the SLW model incorporated with the classical reference approach, called here the classical SLW method, are highly sensitive to the magnitude of the reference temperature in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is one particular choice among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide a more accurate total emissivity calculation than the classical reference approach when coupled with the SLW method. This is particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To demonstrate this, we use both the classical and modified SLW methods to calculate radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the results to the chosen reference temperature in treating highly non-isothermal combustion fields.
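For orientation, the weighted-sum-of-gray-gases total emissivity underlying SLW-type models has the form eps = sum_j a_j (1 - exp(-kappa_j * pL)); the sketch below evaluates it with made-up weights and absorption coefficients. In a real SLW calculation the weights depend on temperature, which is exactly where the choice of reference approach enters.

```python
# Toy weighted-sum-of-gray-gases total emissivity; all coefficients below
# are hypothetical placeholders, not fitted SLW values.
import numpy as np

def total_emissivity(a, kappa, pL):
    a, kappa = np.asarray(a), np.asarray(kappa)
    return np.sum(a * (1.0 - np.exp(-kappa * pL)))

a = [0.35, 0.25, 0.15]      # gray-gas weights (hypothetical)
kappa = [0.1, 2.0, 40.0]    # absorption coefficients, 1/(atm*m) (hypothetical)
print(total_emissivity(a, kappa, pL=0.5))   # pressure-path-length in atm*m
```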
Symmetrical Windowing for Quantum States in Quasi-Classical Trajectory Simulations
NASA Astrophysics Data System (ADS)
Cotton, Stephen Joshua
An approach has been developed for extracting approximate quantum state-to-state information from classical trajectory simulations which "quantizes" symmetrically both the initial and final classical actions associated with the degrees of freedom of interest using quantum number bins (or "window functions") which are significantly narrower than unit-width. This approach thus imposes a more stringent quantization condition on classical trajectory simulations than has been traditionally employed, while doing so in a manner that is time-symmetric and microscopically reversible. To demonstrate this "symmetric quasi-classical" (SQC) approach for a simple real system, collinear H + H2 reactive scattering calculations were performed [S.J. Cotton and W.H. Miller, J. Phys. Chem. A 117, 7190 (2013)] with SQC-quantization applied to the H 2 vibrational degree of freedom (DOF). It was seen that the use of window functions of approximately 1/2-unit width led to calculated reaction probabilities in very good agreement with quantum mechanical results over the threshold energy region, representing a significant improvement over what is obtained using the traditional quasi-classical procedure. The SQC approach was then applied [S.J. Cotton and W.H. Miller, J. Chem. Phys. 139, 234112 (2013)] to the much more interesting and challenging problem of incorporating non-adiabatic effects into what would otherwise be standard classical trajectory simulations. To do this, the classical Meyer-Miller (MM) Hamiltonian was used to model the electronic DOFs, with SQC-quantization applied to the classical "electronic" actions of the MM model---representing the occupations of the electronic states---in order to extract the electronic state population dynamics. It was demonstrated that if one ties the zero-point energy (ZPE) of the electronic DOFs to the SQC windowing function's width parameter this very simple SQC/MM approach is capable of quantitatively reproducing quantum mechanical results for a range of standard benchmark models of electronically non-adiabatic processes, including applications where "quantum" coherence effects are significant. Notably, among these benchmarks was the well-studied "spin-boson" model of condensed phase non-adiabatic dynamics, in both its symmetric and asymmetric forms---the latter of which many classical approaches fail to treat successfully. The SQC/MM approach to the treatment of non-adiabatic dynamics was next applied [S.J. Cotton, K. Igumenshchev, and W.H. Miller, J. Chem. Phys., 141, 084104 (2014)] to several recently proposed models of condensed phase electron transfer (ET) processes. For these problems, a flux-side correlation function framework modified for consistency with the SQC approach was developed for the calculation of thermal ET rate constants, and excellent accuracy was seen over wide ranges of non-adiabatic coupling strength and energetic bias/exothermicity. Significantly, the "inverted regime" in thermal rate constants (with increasing bias) known from Marcus Theory was reproduced quantitatively for these models---representing the successful treatment of another regime that classical approaches generally have difficulty in correctly describing. Relatedly, a model of photoinduced proton coupled electron transfer (PCET) was also addressed, and it was shown that the SQC/MM approach could reasonably model the explicit population dynamics of the photoexcited electron donor and acceptor states over the four parameter regimes considered. 
The potential utility of the SQC/MM technique lies in its stunning simplicity and the ease by which it may readily be incorporated into "ordinary" molecular dynamics (MD) simulations. In short, a typical MD simulation may be augmented to take non-adiabatic effects into account simply by introducing an auxiliary pair of classical "electronic" action-angle variables for each energetically viable Born-Oppenheimer surface, and time-evolving these auxiliary variables via Hamilton's equations (using the MM electronic Hamiltonian) in the same manner that the other classical variables---i.e., the coordinates of all the nuclei---are evolved forward in time. In a complex molecular system involving many hundreds or thousands of nuclear DOFs, the propagation of these extra "electronic" variables represents a modest increase in computational effort, and yet, the examples presented herein suggest that in many instances the SQC/MM approach will describe the true non-adiabatic quantum dynamics to a reasonable and useful degree of quantitative accuracy.
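To make the windowing step concrete, here is a minimal sketch of final-state SQC histogramming only (initial-condition windowing, the MM Hamiltonian, and ZPE handling are omitted); the half-unit window width follows the abstract, and everything else is a hypothetical stand-in.

```python
# Bin final classical actions into quantum states using narrow windows:
# a trajectory contributes to state k only if its final action lies
# within width/2 of k; populations are renormalized over accepted runs.
import numpy as np

def sqc_populations(final_actions, n_states, width=0.5):
    pops = np.zeros(n_states)
    for n in final_actions:
        k = int(round(n))
        if 0 <= k < n_states and abs(n - k) < width / 2.0:
            pops[k] += 1.0
    total = pops.sum()
    return pops / total if total > 0 else pops

rng = np.random.default_rng(1)
print(sqc_populations(rng.normal(1.0, 0.3, size=10000), n_states=3))
```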
Methodological issues underlying multiple decrement life table analysis.
Mode, C J; Avery, R C; Littman, G S; Potter, R G
1977-02-01
In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.
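For reference, the actuarial step under discussion, in which withdrawals (censored cases) within an interval contribute half an exposure, can be sketched as follows (the numbers are invented):

```python
# Classic actuarial multiple-decrement calculation for one interval.
def actuarial_decrements(n_start, events_by_cause, withdrawals):
    """events_by_cause: dict cause -> terminations in this interval."""
    exposure = n_start - withdrawals / 2.0   # half-exposure for censored cases
    return {cause: d / exposure for cause, d in events_by_cause.items()}

# e.g., 1000 IUD users entering an interval: 30 expulsions, 25 removals,
# and 45 women censored (lost to follow-up) during the interval
print(actuarial_decrements(1000, {"expulsion": 30, "removal": 25},
                           withdrawals=45))
```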
Franson Interference Generated by a Two-Level System
NASA Astrophysics Data System (ADS)
Peiris, M.; Konthasinghe, K.; Muller, A.
2017-01-01
We report a Franson interferometry experiment based on correlated photon pairs generated via frequency-filtered scattered light from a near-resonantly driven two-level semiconductor quantum dot. In contrast to spontaneous parametric down-conversion and four-wave mixing, this approach can produce single pairs of correlated photons. We have measured a Franson visibility as high as 66%, which goes beyond the classical limit of 50% and approaches the limit of violation of Bell's inequalities (70.7%).
Some practical approaches to a course on paraconsistent logic for engineers
NASA Astrophysics Data System (ADS)
Lambert-Torres, Germano; de Moraes, Carlos Henrique Valerio; Coutinho, Maurilio Pereira; Martins, Helga Gonzaga; Borges da Silva, Luiz Eduardo
2017-11-01
This paper describes a non-classical logic course intended primarily for graduate students in electrical engineering and energy engineering. The content of this course is based on the view that it is not enough for a student to accumulate knowledge indefinitely; it is necessary to seize every occasion to update, deepen, and enrich that knowledge, adapting it to a complex world. Therefore, this course is not tied to theoretical formalities and tries at each moment to provide a practical view of non-classical logic. In the real world, inconsistencies are important and cannot be ignored, because contradictory information brings relevant facts, sometimes changing the entire result of the analysis. As a consequence, non-classical logics, such as annotated paraconsistent logic (APL), efficiently frame complex situations of the real world. In APL, the concepts of unknown, partial, ambiguous, and inconsistent knowledge are handled without trivialising the system under analysis. The course presents theoretical and applied aspects of APL, which are successfully used in decision-making structures. The course is divided into modules: Basic, 2vAPL, 3vAPL, 4vAPL, and a Final Project.
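To make the APL machinery concrete, here is a sketch of the basic para-analyzer computation commonly taught with annotated paraconsistent logic (the course's own notation may differ): each proposition carries a favorable-evidence degree mu and a contrary-evidence degree lam, from which certainty and contradiction degrees are derived.

```python
# Para-analyzer step: map evidence annotations (mu, lam) in [0, 1]^2 to
# a certainty degree (+1 = true, -1 = false) and a contradiction degree
# (+1 = inconsistent, -1 = indeterminate).
def para_analyzer(mu, lam):
    d_certainty = mu - lam
    d_contradiction = mu + lam - 1
    return d_certainty, d_contradiction

print(para_analyzer(0.9, 0.1))   # strong evidence for:  roughly (0.8,  0.0)
print(para_analyzer(0.9, 0.8))   # conflicting evidence: roughly (0.1,  0.7)
print(para_analyzer(0.1, 0.1))   # little evidence:      roughly (0.0, -0.8)
```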
Wetting of heterogeneous substrates. A classical density-functional-theory approach
NASA Astrophysics Data System (ADS)
Yatsyshin, Peter; Parry, Andrew O.; Rascón, Carlos; Duran-Olivencia, Miguel A.; Kalliadasis, Serafim
2017-11-01
Wetting is the nucleation of a third phase (liquid) on the interface between two other phases (solid and gas). In many experimentally accessible cases of wetting, the interplay between the substrate structure and the fluid-fluid and fluid-substrate intermolecular interactions leads to the appearance of a whole ``zoo'' of exciting interface phase transitions, associated with the formation of nano-droplets/bubbles and thin films. Practical applications of wetting at small scales are numerous and include the design of lab-on-a-chip devices and superhydrophobic surfaces. In this talk, we will use a fully microscopic approach to explore the phase space of a planar wall decorated with patches of different hydrophobicity, and demonstrate the highly non-trivial behaviour of the liquid-gas interface near the substrate. We will present fluid density profiles, adsorption isotherms and wetting phase diagrams. Our analysis is based on a formulation of statistical mechanics commonly known as classical density-functional theory. It provides a computationally friendly and rigorous framework, suitable for probing the small-scale physics of classical fluids and other soft-matter systems. EPSRC Grants No. EP/L027186, EP/K503733; ERC Advanced Grant No. 247031.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Toropova, Alla P; Toropov, Andrey A
2013-11-01
The increasing use of nanomaterials incorporated into consumer products leads to the need to develop approaches for establishing "quantitative structure-activity relationships" (QSARs) for various nanomaterials. However, the molecular structure is, as a rule, not available for nanomaterials, at least in its classic meaning. A possible alternative to classic QSAR (based on the molecular structure) is the use of data on the physicochemical features of TiO(2) nanoparticles. The damage to cellular membranes (units L(-1)) caused by various TiO(2) nanoparticles is examined as the endpoint. Copyright © 2013 Elsevier Ltd. All rights reserved.
Some general remarks on hyperplasticity modelling and its extension to partially saturated soils
NASA Astrophysics Data System (ADS)
Lei, Xiaoqin; Wong, Henry; Fabbri, Antonin; Bui, Tuan Anh; Limam, Ali
2016-06-01
The essential ideas and equations of classic plasticity and hyperplasticity are successively recalled and compared, in order to highlight their differences and complementarities. The former is based on the mathematical framework proposed by Hill (The mathematical theory of plasticity. Oxford University Press, Oxford, 1950), whereas the latter is founded on the orthogonality hypothesis of Ziegler (An introduction to thermomechanics. Elsevier, North-Holland, 1983). The main drawback of classic plasticity is the possibility of violating the second principle of thermodynamics, while its main advantage is the relative ease of conjecturing the yield function so as to approach experimental results. By contrast, the a priori satisfaction of thermodynamic principles constitutes the chief advantage of hyperplasticity theory. Noteworthy is also the fact that this latter approach allows a finer energy partition; in particular, the existence of frozen energy emerges as a natural consequence of its theoretical formulation. On the other hand, the relative difficulty of conjecturing an efficient dissipation function that produces accurate predictions is its main drawback. The two theories are thus better viewed as two complementary approaches. Following this comparative study, a methodology is developed to extend the hyperplasticity approach, initially formulated for dry or saturated materials, to partially saturated materials, accounting for interface energies and suction effects. A particular example based on the yield function of the modified Cam-Clay model is then presented. It is shown that the approach developed leads to a model consistent with other existing works.
Blended E-Assessment: Migrating Classical Exams to the Digital World
ERIC Educational Resources Information Center
Llamas-Nistal, Martin; Fernandez-Iglesias, Manuel J.; Gonzalez-Tato, Juan; Mikic-Fonte, Fernando A.
2013-01-01
Existing e-assessment tools may not be a panacea to address all assessment situations, as students might find the traditional pen-and-paper approach to provide their answers more convenient than typing them online. For example, constructed response or essay-based assessments will require the active participation of students and lecturers to…
Controversy as a Mode of Invention: The Example of James and Freud.
ERIC Educational Resources Information Center
McClish, Glen
1991-01-01
Counteracts the overemphasis on introspection that potentially limits composition students' progress in argumentation by endorsing a renewal of classical rhetoric and invention. Explores texts by William James and Sigmund Freud, which are suitable works to provide the framework necessary for a confrontation-based classroom approach to invention.…
Chandrasekhar Limit: An Elementary Approach Based on Classical Physics and Quantum Theory
ERIC Educational Resources Information Center
Pinochet, Jorge; Van Sint Jan, Michael
2016-01-01
In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the "Chandrasekhar limit." This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons…
ERIC Educational Resources Information Center
Pratt, Cornelius B.
1994-01-01
Links ethical theories to the management of the product recall of the Perrier Group of America. Argues for a nonsituational theory-based eclectic approach to ethics in public relations to enable public relations practitioners, as strategic communication managers, to respond effectively to potentially unethical organizational actions. (SR)
From Norway to the USA: "Anitra's Dance."
ERIC Educational Resources Information Center
McDowell, Carol J.
2003-01-01
Describes an art lesson for middle school students that can be adapted for upper elementary or high school students. Explains that students compare two versions of the song "Anitra's Dance," a classical version by Edvard Grieg and a jazz version by Duke Ellington. States the lesson uses the Discipline-Based Music Education approach. (CMK)
Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research
ERIC Educational Resources Information Center
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne
2018-01-01
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
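A minimal sketch of the comparison advocated, on synthetic data standing in for institutional records (real covariates would be grades, course history, demographics, and so on):

```python
# Cross-validated AUC comparison: logistic regression vs. random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=300, random_state=0)):
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(type(model).__name__, scores.mean().round(3))
```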
Public/Private in Higher Education: A Synthesis of Economic and Political Approaches
ERIC Educational Resources Information Center
Marginson, Simon
2018-01-01
The public/private distinction is central to higher education but there is no consensus on "public." In neo-classical economic theory, Samuelson distinguishes non-market goods (public) that cannot be produced for profit, from market-based activity (private). This provides a basis for identifying the minimum necessary public expenditure,…
Social Justice Praxis in Education: Towards Sustainable Management Strategies
ERIC Educational Resources Information Center
van Deventer, Idilette; van der Westhuizen, Philip C.; Potgieter, Ferdinand J.
2015-01-01
Social justice, defined as an impetus towards a socially just educational world, is based on the assumption that all people, irrespective of belief or societal position, are entitled to be treated according to the values of human rights, human dignity and equality. Diverging from the classical positivist approach in social science research that…
Critical Review on Power in Organization: Empowerment in Human Resource Development
ERIC Educational Resources Information Center
Jo, Sung Jun; Park, Sunyoung
2016-01-01
Purpose: This paper aims to analyze current practices, discuss empowerment from the theoretical perspectives on power in organizations and suggest an empowerment model based on the type of organizational culture and the role of human resource development (HRD). Design/methodology/approach: By reviewing the classic viewpoint of power, Lukes'…
Students' Ideas about Prismatic Images: Teaching Experiments for an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Prismatic refraction is a classic topic in science education. To investigate how undergraduate students think about prismatic dispersion, and to see how they change their thinking when observing dispersed images, five teaching experiments were done and analysed according to the Model of Educational Reconstruction. For projection through a prism,…
Derived heuristics-based consistent optimization of material flow in a gold processing plant
NASA Astrophysics Data System (ADS)
Myburgh, Christie; Deb, Kalyanmoy
2018-01-01
Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled in a programming format, involve if-then-else statements. Therefore, the formulation of an optimization problem for such processes becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities posed by the problem is developed. Although the evolutionary approach produced results with comparatively less variance over multiple runs, its performance has been further enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented, and their importance for quick convergence of the overall algorithm is demonstrated.
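To illustrate why if-then-else structure favors evolutionary search, here is a toy objective with scenario branches and a bare-bones (mu + lambda) evolutionary loop; the flow model and all constants are invented and have nothing to do with the actual plant.

```python
# Evolutionary search on a branchy, non-differentiable objective that would
# break gradient-based solvers such as fmincon.
import numpy as np

def throughput(x):
    flow = x.sum()
    if flow > 10.0:                              # over-capacity branch
        return 10.0 - 0.5 * (flow - 10.0)        # penalized throughput
    elif x[0] < 1.0:                             # minimum-feed branch
        return 0.0
    return flow

rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 6.0, size=(40, 3))        # 40 candidate flow vectors
for _ in range(200):
    kids = np.clip(pop + rng.normal(0.0, 0.3, pop.shape), 0.0, 6.0)
    both = np.vstack([pop, kids])
    fitness = np.array([throughput(x) for x in both])
    pop = both[np.argsort(fitness)[-40:]]        # keep the best 40
print(max(throughput(x) for x in pop))           # approaches the optimum, 10.0
```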
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture of quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose) with quantum physical carriers of information. In our approach the brain creates a QL representation (QLR) of information in Hilbert space. It uses quantum information rules in decision making. The existence of such a QLR was (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach" such data can be represented by complex amplitudes. We previously presented the QL model of decision making [1, 2]. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine - the background fluctuations of the electromagnetic field, and rough - the cognitive image scale) induces a QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.
Utility of Classical α-Taxonomy for Biodiversity of Aquatic Nematodes
Decraemer, Wilfrida; Backeljau, Thierry
2015-01-01
“Classical α-taxonomy” has different interpretations. Therefore, within the framework of an integrated taxonomic approach it is not relevant to divide taxonomy into different components, each being allocated a different weight of importance. Preferably, taxonomy should be seen in a holistic way, including the act of delimiting and describing taxa based on different features and available methods, and taxonomy cannot be interpreted without looking at evolutionary relationships. The concept of diversity itself is quite diverse, as is the measure of diversity. Taxonomic descriptions of free-living aquatic nematodes are very valuable as they provide basic phenotypic information that is necessary for the functional ecological, behavioral, and evolutionary interpretation of data gathered from molecular analyses and of the organism as a whole. In general, molecular taxonomic analyses have the advantage of being much faster and of being able to deal with a larger number of specimens, as well as the important advantage of dealing with a huge number of features compared to the morphology-based approach. However, just as morphological studies, molecular analyses deal with only part of an organism. PMID:25861112
Signal Processing for Time-Series Functions on a Graph
2018-02-01
as filtering to functions supported on graphs. These methods can be applied to scalar functions with a domain that can be described by a fixed...classical signal processing such as filtering to account for the graph domain. This work essentially divides into 2 basic approaches: graph Laplacian-based filtering and weighted adjacency matrix-based filtering. In Shuman et al. [11], and elaborated in Bronstein et al. [13], filtering operators are
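A minimal sketch of the first approach (graph-Laplacian-based filtering): expand the signal in the Laplacian eigenbasis, which plays the role of a graph Fourier basis, and zero out the high-frequency coefficients.

```python
# Low-pass filtering of a signal defined on graph vertices.
import numpy as np

def laplacian_lowpass(W, signal, keep=0.5):
    """W: symmetric weighted adjacency matrix; keep: fraction of low modes."""
    L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian
    eigvals, U = np.linalg.eigh(L)       # eigenvalues ascend: graph frequencies
    coeffs = U.T @ signal                # graph Fourier transform
    cutoff = int(keep * len(coeffs))
    coeffs[cutoff:] = 0.0                # drop high-frequency modes
    return U @ coeffs                    # inverse transform

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(laplacian_lowpass(W, np.array([1.0, -2.0, 3.0, 0.5])))
```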
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodríguez-Cantano, Rocío; Pérez de Tudela, Ricardo; Bartolomei, Massimiliano
Coronene-doped helium clusters have been studied by means of classical and quantum mechanical (QM) methods using a recently developed He–C24H12 global potential based on the use of optimized atom-bond improved Lennard-Jones functions. Equilibrium energies and geometries at global and local minima for systems with up to 69 He atoms were calculated by means of an evolutive algorithm and a basin-hopping approach and compared with results from path integral Monte Carlo (PIMC) calculations at 2 K. A detailed analysis performed for the smallest sizes shows that the precise localization of the He atoms forming the first solvation layer over the molecular substrate is affected by differences between relative potential minima. The comparison of the PIMC results with the predictions from the classical approaches and with diffusion Monte Carlo results allows us to examine the importance of both the QM and thermal effects.
Real-time economic nonlinear model predictive control for wind turbine control
NASA Astrophysics Data System (ADS)
Gros, Sebastien; Schild, Axel
2017-12-01
Nonlinear model predictive control (NMPC) is a strong candidate to handle the control challenges emerging in the modern wind energy industry. Recent research suggested that wind turbine (WT) control based on economic NMPC (ENMPC) can improve the closed-loop performance and simplify the task of controller design when compared to a classical NMPC approach. This paper establishes a formal relationship between the ENMPC controller and the classic NMPC approach, and empirically compares their nominal closed-loop behaviour and performance. The robustness of the performance is assessed for an inaccurate modelling of the tower fore-aft main frequency. Additionally, though a perfect wind preview is assumed here, the effect of having a limited horizon of preview of the wind speed via the LIght Detection And Ranging (LIDAR) sensor is investigated. Finally, this paper provides new algorithmic solutions for deploying ENMPC for WT control, and reports improved computational times.
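To make the receding-horizon idea behind (E)NMPC concrete, here is a minimal sketch under invented assumptions (a one-state linear model `f` and a quadratic tracking cost, both placeholders unrelated to the wind-turbine model of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # toy one-state dynamics x+ = f(x, u); purely illustrative
    return 0.9 * x + 0.5 * u

def horizon_cost(u_seq, x0):
    # sum of stage costs over the prediction horizon
    x, cost = x0, 0.0
    for u in u_seq:
        x = f(x, u)
        cost += (x - 1.0) ** 2 + 0.1 * u ** 2  # track x = 1, penalize effort
    return cost

def nmpc_step(x0, N=10):
    res = minimize(horizon_cost, np.zeros(N), args=(x0,))
    return res.x[0]  # receding horizon: apply only the first input

x = 0.0
for k in range(5):
    u = nmpc_step(x)
    x = f(x, u)
    print(f"step {k}: u = {u:+.3f}, x = {x:+.3f}")
```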
Moment inference from tomograms
Day-Lewis, F. D.; Chen, Y.; Singha, K.
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
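As a concrete illustration of the moment calculations this abstract refers to (a generic sketch, not the authors' moment-resolution-matrix machinery; the grid spacing and toy plume are assumptions), the zeroth, first, and second spatial moments of a 2-D tomogram can be computed as:

```python
import numpy as np

def spatial_moments(tomogram, dx=1.0, dz=1.0):
    """Zeroth moment (mass), first moments (centroid), and second central
    moments (spread) of a 2-D plume image of shape (nz, nx)."""
    nz, nx = tomogram.shape
    X, Z = np.meshgrid(np.arange(nx) * dx, np.arange(nz) * dz)
    m0 = tomogram.sum() * dx * dz
    xc = (X * tomogram).sum() * dx * dz / m0
    zc = (Z * tomogram).sum() * dx * dz / m0
    sx2 = ((X - xc) ** 2 * tomogram).sum() * dx * dz / m0
    sz2 = ((Z - zc) ** 2 * tomogram).sum() * dx * dz / m0
    return m0, (xc, zc), (sx2, sz2)

# toy Gaussian plume on a 50 x 80 grid
zz, xx = np.mgrid[0:50, 0:80]
plume = np.exp(-((xx - 40) ** 2 / 50 + (zz - 20) ** 2 / 20))
print(spatial_moments(plume))
```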
NASA Astrophysics Data System (ADS)
Delle Site, Luigi
2018-01-01
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at a constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. The exchange of molecules between the quantum region and the classical environment, instead, occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with a corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
NASA Astrophysics Data System (ADS)
Hooper, James; Ismail, Arif; Giorgi, Javier B.; Woo, Tom K.
2010-06-01
A genetic algorithm (GA)-inspired method to effectively map out low-energy configurations of doped metal oxide materials is presented. Specialized mating and mutation operations that do not alter the identity of the parent metal oxide have been incorporated to efficiently sample the metal dopant and oxygen vacancy sites. The search algorithms have been tested on lanthanide-doped ceria (L=Sm,Gd,Lu) with various dopant concentrations. Using both classical and first-principles density-functional-theory (DFT) potentials, we have shown the methodology reproduces the results of recent systematic searches of doped ceria at low concentrations (3.2% L2O3 ) and identifies low-energy structures of concentrated samarium-doped ceria (3.8% and 6.6% L2O3 ) which relate to the experimental and theoretical findings published thus far. We introduce a tandem classical/DFT GA algorithm in which an inexpensive classical potential is first used to generate a fit gene pool of structures to enhance the overall efficiency of the computationally demanding DFT-based GA search.
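As an illustration of the identity-preserving mating and mutation operations described above (a toy sketch: the lattice, the pair-repulsion "energy", and all parameters are invented stand-ins for the classical/DFT potentials of the paper):

```python
import random

N_SITES, N_DOPANTS = 20, 4  # toy 1-D lattice with a fixed dopant count

def energy(config):
    # fake pair repulsion between occupied (dopant) sites
    occ = [i for i, s in enumerate(config) if s]
    return sum(1.0 / abs(i - j) for a, i in enumerate(occ) for j in occ[a + 1:])

def random_config():
    sites = random.sample(range(N_SITES), N_DOPANTS)
    return tuple(i in sites for i in range(N_SITES))

def mate(p1, p2):
    # crossover that preserves the parent stoichiometry (dopant count)
    pool = list({i for i, s in enumerate(p1) if s} |
                {i for i, s in enumerate(p2) if s})
    sites = random.sample(pool, N_DOPANTS)
    return tuple(i in sites for i in range(N_SITES))

def mutate(cfg):
    # hop a single dopant to an empty site, keeping composition fixed
    occ = [i for i, s in enumerate(cfg) if s]
    emp = [i for i, s in enumerate(cfg) if not s]
    occ[random.randrange(N_DOPANTS)] = random.choice(emp)
    return tuple(i in occ for i in range(N_SITES))

pop = [random_config() for _ in range(30)]
for gen in range(50):
    pop.sort(key=energy)
    elite = pop[:10]
    pop = elite + [mutate(mate(*random.sample(elite, 2))) for _ in range(20)]
pop.sort(key=energy)
print("best energy:", energy(pop[0]),
      "sites:", [i for i, s in enumerate(pop[0]) if s])
```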
Sakurai, Atsunori; Tanimura, Yoshitaka
2011-04-28
To investigate the role of quantum effects in vibrational spectroscopies, we have carried out numerically exact calculations of linear and nonlinear response functions for an anharmonic potential system nonlinearly coupled to a harmonic oscillator bath. Although one cannot carry out the quantum calculations of the response functions with full molecular dynamics (MD) simulations for a realistic system which consists of many molecules, it is possible to grasp the essence of the quantum effects on the vibrational spectra by employing a model Hamiltonian that describes an intra- or intermolecular vibrational motion in a condensed phase. The present model fully includes vibrational relaxation, while the stochastic model often used to simulate infrared spectra does not. We have employed the reduced quantum hierarchy equations of motion approach in the Wigner space representation to deal with nonperturbative, non-Markovian, and nonsecular system-bath interactions. Taking the classical limit of the hierarchy equations of motion, we have obtained the classical equations of motion that describe the classical dynamics under the same physical conditions as in the quantum case. By comparing the classical and quantum mechanically calculated linear and multidimensional spectra, we found that the profiles of spectra for a fast modulation case were similar, but different for a slow modulation case. In both the classical and quantum cases, we identified the resonant oscillation peak in the spectra, but the quantum peak shifted to the red compared with the classical one if the potential is anharmonic. The prominent quantum effect is the 1-2 transition peak, which appears only in the quantum mechanically calculated spectra as a result of anharmonicity in the potential or nonlinearity of the system-bath coupling. While the contribution of the 1-2 transition is negligible in the fast modulation case, it becomes important in the slow modulation case as long as the amplitude of the frequency fluctuation is small. Thus, we observed a distinct difference between the classical and quantum mechanically calculated multidimensional spectra in the slow modulation case where spectral diffusion plays a role. This fact indicates that one may not reproduce the experimentally obtained multidimensional spectrum for high-frequency vibrational modes based on classical molecular dynamics simulations if the modulation that arises from surrounding molecules is weak and slow. A practical way to overcome the difference between the classical and quantum simulations was discussed.
Re'class'ification of 'quant'ified classical simulated annealing
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2009-12-01
We discuss a classical reinterpretation of the quantum-mechanics-based analysis of classical Markov chains with detailed balance, which is based on the quantum-classical correspondence. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.
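For reference, a minimal Metropolis annealer with the inverse-logarithmic schedule T(t) = c / log(t + 2) mentioned above (the toy objective and the constant c are illustrative; the sufficient condition requires c to exceed a problem-dependent constant):

```python
import math, random

def anneal(energy, neighbor, x0, c=2.0, steps=20000):
    """Metropolis simulated annealing with inverse-logarithmic cooling."""
    x, e = x0, energy(x0)
    best, be = x, e
    for t in range(steps):
        T = c / math.log(t + 2)
        y = neighbor(x)
        ey = energy(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ey <= e or random.random() < math.exp(-(ey - e) / T):
            x, e = y, ey
            if e < be:
                best, be = x, e
    return best, be

# toy problem: minimize a rough function over the integers
f = lambda n: (n - 7) ** 2 + 3 * (n % 3)
step = lambda n: n + random.choice([-1, 1])
print(anneal(f, step, x0=50))
```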
Investigations in quantum games using EPR-type set-ups
NASA Astrophysics Data System (ADS)
Iqbal, Azhar
2006-04-01
Research in quantum games has flourished during recent years. However, it seems that opinion remains divided about their true quantum character and content. For example, one argument says that quantum games are nothing but 'disguised' classical games and that to quantize a game is equivalent to replacing the original game by a different classical game. The present thesis contributes towards the ongoing debate about the quantum nature of quantum games by developing two approaches addressing the related issues. Both approaches take Einstein-Podolsky-Rosen (EPR)-type experiments as the underlying physical set-ups to play two-player quantum games. In the first approach, the players' strategies are unit vectors in their respective planes, with the knowledge of coordinate axes being shared between them. Players perform measurements in an EPR-type setting and their payoffs are defined as functions of the correlations, i.e. without reference to classical or quantum mechanics. Classical bimatrix games are reproduced if the input states are classical and perfectly anti-correlated, as for a classical correlation game. However, for a quantum correlation game, with an entangled singlet state as input, qualitatively different solutions are obtained. The second approach uses the result that forcing the predictions of a Local Hidden Variable (LHV) model to violate the Bell inequalities leads some probability measures to assume negative values. With the requirement that classical games result when the predictions of a LHV model do not violate the Bell inequalities, our analysis looks at the impact which the emergence of negative probabilities has on the solutions of two-player games that are physically implemented using EPR-type experiments.
Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea
2013-01-01
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the study.
Fundamental theories of waves and particles formulated without classical mass
NASA Astrophysics Data System (ADS)
Fry, J. L.; Musielak, Z. E.
2010-12-01
Quantum and classical mechanics are two conceptually and mathematically different theories of physics, and yet they use the same concept of classical mass that was originally introduced by Newton in his formulation of the laws of dynamics. In this paper, physical consequences of using the classical mass in both theories are explored, and a novel approach that allows formulating fundamental (Galilean-invariant) theories of waves and particles without formally introducing the classical mass is presented. In this new formulation, the theories depend only on one common parameter called the 'wave mass', which is deduced from experiments for selected elementary particles and for the classical mass of one kilogram. It is shown that quantum theory with the wave mass is independent of the Planck constant and that higher accuracy in calculations can be attained by such a theory. Natural units in connection with the presented approach are also discussed, and justification beyond dimensional analysis is given for the particular choice of such units.
PT-symmetric interpretation of the electromagnetic self-force
NASA Astrophysics Data System (ADS)
Bender, Carl M.; Gianfreda, Mariagiovanna
2015-08-01
In 1980 Englert examined the classic problem of the electromagnetic self-force on an oscillating charged particle. His approach, which was based on an earlier idea of Bateman, was to introduce a time-reversed (charge-conjugate) particle and to show that the two-particle system is Hamiltonian. Unfortunately, Englert's model did not solve the problem of runaway modes, and the corresponding quantum theory had ghost states. It is shown here that Englert's Hamiltonian is PT symmetric, and that the problems with his model arise because the PT symmetry is broken at both the classical and the quantum level. However, by allowing the charged particles to interact and by adjusting the coupling parameters to put the model into an unbroken PT-symmetric region, one eliminates the classical nonrelativistic runaway modes and obtains a corresponding nonrelativistic quantum system that is in equilibrium and ghost free.
Bennett, Kochise; Mukamel, Shaul
2014-01-28
The semi-classical theory of radiation-matter coupling misses local-field effects that may alter the pulse time-ordering and cascading that leads to the generation of new signals. These are then introduced macroscopically by solving Maxwell's equations. This procedure is convenient and intuitive but ad hoc. We show that both effects emerge naturally by including coupling to quantum modes of the radiation field that are initially in the vacuum state to second order. This approach is systematic and suggests a more general class of corrections that only arise in a QED framework. In the semi-classical theory, which only includes classical field modes, the susceptibility of a collection of N non-interacting molecules is additive and scales as N. Second-order coupling to a vacuum mode generates an effective retarded interaction that leads to cascading and local field effects, both of which scale as N².
Towards the quantization of Eddington-inspired-Born-Infeld theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouhmadi-López, Mariam; Chen, Che-Yu
2016-11-01
The quantum effects close to the classical big rip singularity within the Eddington-inspired-Born-Infeld theory (EiBI) are investigated through quantum geometrodynamics. It is the first time that this approach is applied to a modified theory constructed upon the Palatini formalism. The Wheeler-DeWitt (WDW) equation is obtained and solved based on an alternative action proposed in ref. [1], under two different factor ordering choices. This action is dynamically equivalent to the original EiBI action while it is free of square roots of the spacetime curvature. We consider a homogeneous, isotropic and spatially flat universe, which is assumed to be dominated by a phantom perfect fluid whose equation of state is a constant. We obtain exact solutions of the WDW equation based on some specific conditions. In more general cases, we propose a qualitative argument with the help of a Wentzel-Kramers-Brillouin (WKB) approximation to get further solutions. Besides, we also construct an effective WDW equation by simply promoting the classical Friedmann equations. We find that for all the approaches considered, the DeWitt condition hinting at singularity avoidance is satisfied. Therefore the big rip singularity is expected to be avoided through the quantum approach within the EiBI theory.
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.
2016-01-01
Purpose: Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods: The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results: Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion: The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
Tikhonov, Denis S; Sharapa, Dmitry I; Schwabedissen, Jan; Rybkin, Vladimir V
2016-10-12
In this study, we investigate the ability of classical molecular dynamics (MD) and Monte Carlo (MC) simulations to model intramolecular vibrational motion. These simulations were used to compute thermally averaged geometrical structures and infrared vibrational intensities for a benchmark set previously studied by gas electron diffraction (GED): CS2, benzene, chloromethylthiocyanate, pyrazinamide, and 9,12-I2-1,2-closo-C2B10H10. The MD sampling of NVT ensembles was performed using chains of Nosé-Hoover (NH) thermostats as well as the generalized Langevin equation (GLE) thermostat. The performance of the theoretical models based on the classical MD and MC simulations was compared with the experimental data and also with alternative computational techniques: a conventional approach based on the Taylor expansion of the potential energy surface, path-integral MD, and MD with a quantum thermal bath (QTB) based on the GLE. A straightforward application of the classical simulations resulted, as expected, in poor accuracy of the calculated observables due to the complete neglect of quantum effects. However, the introduction of a posteriori quantum corrections significantly improved the situation. The application of these corrections to MD simulations of systems with large-amplitude motions was demonstrated for chloromethylthiocyanate. The comparison of the theoretical vibrational spectra revealed that the GLE thermostat used in this work is not applicable for this purpose. On the other hand, the NH chains yielded reasonably good results.
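For readers unfamiliar with thermostatted sampling, here is a minimal NVT sketch for a 1-D harmonic oscillator using a plain Langevin thermostat as a simpler stand-in for the Nosé-Hoover chains and GLE thermostats above (reduced units m = k = k_B = 1 and all parameters are assumptions):

```python
import numpy as np

def langevin_nvt(T=2.0, gamma=1.0, dt=0.01, steps=100000):
    """Sample the NVT ensemble of a 1-D harmonic oscillator with an
    Euler-Maruyama discretization of the Langevin equation."""
    rng = np.random.default_rng(0)
    x, v = 0.0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        f = -x  # harmonic force
        # friction + fluctuation enforce the target temperature T
        v += (f - gamma * v) * dt + np.sqrt(2 * gamma * T * dt) * rng.standard_normal()
        x += v * dt
        xs[i] = x
    return xs

xs = langevin_nvt()
print("<x^2> =", xs.var(), "(equipartition predicts T = 2.0)")
```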
The recurrence coefficients of semi-classical Laguerre polynomials and the fourth Painlevé equation
NASA Astrophysics Data System (ADS)
Filipuk, Galina; Van Assche, Walter; Zhang, Lun
2012-05-01
We show that the coefficients of the three-term recurrence relation for orthogonal polynomials with respect to a semi-classical extension of the Laguerre weight satisfy the fourth Painlevé equation when viewed as functions of one of the parameters in the weight. We compare different approaches to derive this result, namely, the ladder operators approach, the isomonodromy deformations approach and combining the Toda system for the recurrence coefficients with a discrete equation. We also discuss a relation between the recurrence coefficients for the Freud weight and the semi-classical Laguerre weight and show how it arises from the Bäcklund transformation of the fourth Painlevé equation.
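For reference, the fourth Painlevé equation in its standard form (the canonical form with parameters α and β, quoted here for context):

```latex
w'' \;=\; \frac{(w')^{2}}{2w} + \frac{3}{2}\,w^{3} + 4z\,w^{2} + 2\left(z^{2}-\alpha\right)w + \frac{\beta}{w}
```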
Review on the administration and effectiveness of team-based learning in medical education.
Hur, Yera; Cho, A Ra; Kim, Sun
2013-12-01
Team-based learning (TBL) is an active learning approach. In recent years, medical educators have been increasingly using TBL in their classes. We reviewed the concepts of TBL and discuss examples of international cases. Two types of TBL are administered: classic TBL and adapted TBL. Combining TBL and problem-based learning (PBL) might be a useful strategy for medical schools. TBL is an attainable and efficient educational approach for preparing large classes, compared with PBL. TBL improves student performance, team communication skills, leadership skills, problem-solving skills, and cognitive conceptual structures, and increases student engagement and satisfaction. This study suggests recommendations for administering TBL effectively in medical education.
Quantum and quasi-classical collisional dynamics of O2-Ar at high temperatures
NASA Astrophysics Data System (ADS)
Ulusoy, Inga S.; Andrienko, Daniil A.; Boyd, Iain D.; Hernandez, Rigoberto
2016-06-01
A hypersonic vehicle traveling at a high speed disrupts the distribution of internal states in the ambient flow and introduces a nonequilibrium distribution in the post-shock conditions. We investigate the vibrational relaxation in diatom-atom collisions in the range of temperatures between 1000 and 10 000 K by comparing results of extensive fully quantum-mechanical and quasi-classical simulations with available experimental data. The present paper simulates the interaction of molecular oxygen with argon as the first step in developing the aerothermodynamics models based on first principles. We devise a routine to standardize such calculations also for other scattering systems. Our results demonstrate very good agreement of vibrational relaxation time, derived from quantum-mechanical calculations with the experimental measurements conducted in shock tube facilities. At the same time, the quasi-classical simulations fail to accurately predict rates of vibrationally inelastic transitions at temperatures lower than 3000 K. This observation and the computational cost of adopted methods suggest that the next generation of high fidelity thermochemical models should be a combination of quantum and quasi-classical approaches.
The Effectiveness of Project Based Learning in Trigonometry
NASA Astrophysics Data System (ADS)
Gerhana, M. T. C.; Mardiyana, M.; Pramudya, I.
2017-09-01
This research aimed to explore the effectiveness of Project-Based Learning (PjBL) with a scientific approach, viewed from interpersonal intelligence, on students' learning achievement in mathematics. This research employed a quasi-experimental design. The subjects of this research were grade X MIPA students in Sleman, Yogyakarta. The results showed that the project-based learning model is more effective in generating students' mathematics learning achievement than the classical model with a scientific approach. This is because in the PjBL model students are more able to think actively and creatively. Students are faced with a pleasant atmosphere to solve a problem in everyday life. The use of the project-based learning model is expected to become a choice for teachers to improve mathematics education.
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random-effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
Evolutionary Effect on the Embodied Beauty of Landscape Architectures.
Zhang, Wei; Tang, Xiaoxiang; He, Xianyou; Chen, Guangyao
2018-01-01
According to the framework of evolutionary aesthetics, a sense of beauty is related to the environmental adaptation and plasticity of human beings, and it has adaptive value and biological foundations. Prior studies have demonstrated that organisms derive benefits from the landscape. In this study, we investigated whether the benefits of landscape might elicit a stronger sense of beauty and what the nature of this sense of beauty is. In two experiments, when viewing photographs of classical landscape and nonlandscape architectures, participants rated aesthetic scores (Experiment 1) or made a two-alternative forced-choice aesthetic judgment by pressing a reaction button located near to (15 cm) or far from (45 cm) the presented stimuli (Experiment 2). The results showed that aesthetic ratings for classical landscape architectures were made faster than those for classical nonlandscape architectures. Furthermore, only judgments of classical landscape architecture photographs as beautiful were significantly faster when the reaction button was near the presented photograph than when it was far away. This finding suggests a facilitation effect for the aesthetic perception of classical landscape architectures due to their components, including water and green plants, with strong evolutionary implications. Furthermore, this sense of beauty for classical landscape architectures might be an embodied approach to beauty, based on the viewpoints of evolutionary aesthetics and embodied cognition.
Quantum Speed Limits across the Quantum-to-Classical Transition
NASA Astrophysics Data System (ADS)
Shanahan, B.; Chenu, A.; Margolus, N.; del Campo, A.
2018-02-01
Quantum speed limits set an upper bound to the rate at which a quantum system can evolve. Adopting a phase-space approach, we explore quantum speed limits across the quantum-to-classical transition and identify equivalent bounds in the classical world. As a result, and contrary to common belief, we show that speed limits exist for both quantum and classical systems. As in the quantum domain, classical speed limits are set by a given norm of the generator of time evolution.
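For context (standard results, not formulas specific to this paper), the two canonical quantum speed limits bound the minimal time τ to evolve between orthogonal states by the energy spread (Mandelstam-Tamm) and by the mean energy above the ground state (Margolus-Levitin):

```latex
\tau \;\ge\; \max\left\{ \frac{\pi\hbar}{2\,\Delta E},\; \frac{\pi\hbar}{2\,\langle E\rangle} \right\}
```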
Fourier analysis and signal processing by use of the Moebius inversion formula
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.
1990-01-01
A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
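The number-theoretic engine of this technique is Möbius inversion of a divisor sum. A self-contained sketch follows, demonstrating the inversion identity itself rather than the authors' full Fourier algorithm (the arithmetic function f is an arbitrary example):

```python
def mobius(n):
    """Möbius function mu(n) via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor => mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Moebius inversion: if g(n) = sum_{d|n} f(d),
# then f(n) = sum_{d|n} mu(n/d) g(d).
f = {n: n * n for n in range(1, 13)}  # any arithmetic function
g = {n: sum(f[d] for d in divisors(n)) for n in f}
f_rec = {n: sum(mobius(n // d) * g[d] for d in divisors(n)) for n in f}
assert f_rec == f
print("inversion recovers f exactly:", f_rec)
```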
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists in the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
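A toy sketch of the general idea, detail injection from the panchromatic band using morphological half-gradients (this is only an illustration of the concept, not the authors' algorithm; structuring-element size, the injection rule, and the random images are assumptions):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, zoom

def morpho_pansharpen(ms, pan, size=(3, 3)):
    """Upsample each multispectral band to the PAN grid and inject detail
    extracted from PAN with the two morphological half-gradients:
    external rho+ = dilation - image, internal rho- = image - erosion."""
    detail = 0.5 * ((grey_dilation(pan, size=size) - pan)
                    - (pan - grey_erosion(pan, size=size)))
    scale = pan.shape[0] // ms.shape[1]
    out = []
    for band in ms:                      # ms has shape (bands, h, w)
        up = zoom(band, scale, order=1)  # bilinear upsampling to PAN grid
        out.append(up + detail)
    return np.stack(out)

pan = np.random.rand(64, 64)
ms = np.random.rand(4, 16, 16)
print(morpho_pansharpen(ms, pan).shape)  # (4, 64, 64)
```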
Shpynov, S; Pozdnichenko, N; Gumenuk, A
2015-01-01
Genome sequences of 36 Rickettsia and Orientia were analyzed using Formal Order Analysis (FOA). This approach takes into account the arrangement of nucleotides in each sequence. A numerical characteristic, the average distance (remoteness) "g", was used to compare genomes. Our results corroborated the previous separation of three groups within the genus Rickettsia, including the typhus group, the classic spotted fever group, and the ancestral group, and of Orientia as a separate genus. Rickettsia felis URRWXCal2 and R. akari Hartford were not in the same group based on FOA; therefore the designation of a so-called transitional Rickettsia group could not be confirmed with this approach. Copyright © 2015 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(−c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
Some Practical Approaches to a Course on Paraconsistent Logic for Engineers
ERIC Educational Resources Information Center
Lambert-Torres, Germano; de Moraes, Carlos Henrique Valerio; Coutinho, Maurilio Pereira; Martins, Helga Gonzaga; Borges da Silva, Luiz Eduardo
2017-01-01
This paper describes a non-classical logic course primarily indicated for graduate students in electrical engineering and energy engineering. The content of this course is based on the vision that it is not enough for a student to indefinitely accumulate knowledge; it is necessary to explore all the occasions to update, deepen, and enrich that…
Materials Processing Research and Development
2010-08-01
[Table-of-contents fragment; recoverable section titles: 2.2.1 Cavitation during Hot-Torsion Testing of Ti-6Al-4V; 2.2.2 The Effect of Strain… during Hot Torsion Testing of Ti-6Al-4V; 2.2.4 Effect of…; 2.5.3 Systematic Approach to Microstructure Design of Ni-Base Alloys Using Classical Nucleation and Growth Relations Coupled with Phase…]
ERIC Educational Resources Information Center
Schlindwein, Ana Flora
2013-01-01
Adopting the multiliteracy concept and embracing the challenge of developing meaningful and captivating classes for Portuguese as Foreign Language in Brazil, this paper proposes an approach which includes the use of different technologies to learn and teach Portuguese, the reading of graphic novel adaptations of Brazilian literature classics and…
Nucleic acid detection using BRET-beacons based on bioluminescent protein-DNA hybrids.
Engelen, Wouter; van de Wiel, Kayleigh M; Meijer, Lenny H H; Saha, Bedabrata; Merkx, Maarten
2017-03-02
Bioluminescent molecular beacons have been developed using a modular design approach that relies on BRET between the bright luciferase NanoLuc and a Cy3 acceptor. While classical molecular beacons are hampered by background fluorescence and scattering, these BRET-beacons allow detection of low pM concentrations of nucleic acids directly in complex media.
gPhysics--Using Smart Glasses for Head-Centered, Context-Aware Learning in Physics Experiments
ERIC Educational Resources Information Center
Kuhn, Jochen; Lukowicz, Paul; Hirth, Michael; Poxrucker, Andreas; Weppner, Jens; Younas, Junaid
2016-01-01
Smart Glasses such as Google Glass are mobile computers combining classical Head-Mounted Displays (HMD) with several sensors. Therefore, contact-free, sensor-based experiments can be linked with relating, near-eye presented multiple representations. We will present a first approach on how Smart Glasses can be used as an experimental tool for…
Fu, Gregory C
2017-07-26
Classical methods for achieving nucleophilic substitutions of alkyl electrophiles (SN1 and SN2) have limited scope and are not generally amenable to enantioselective variants that employ readily available racemic electrophiles. Radical-based pathways catalyzed by chiral transition-metal complexes provide an attractive approach to addressing these limitations.
USDA-ARS?s Scientific Manuscript database
Classical, one-dimensional, mobile bed, sediment-transport models simulate vertical channel adjustment, raising or lowering cross-section node elevations to simulate erosion or deposition. This approach does not account for bank erosion processes including toe scour and mass failure. In many systems...
Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques
NASA Astrophysics Data System (ADS)
Hassan, Jamal
2012-09-01
The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which deals with the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor into the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of a liquid in confined geometries. The results show that both approaches give broadly similar pore size distributions, and that the NMR technique can be considered an alternative direct method to obtain quantitative results, especially for mesoporous materials. The pore diameters estimated for the nano-material used in this study were about 35 and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
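For reference, the Gibbs-Thomson relation used in such NMR cryoporometry measurements links the melting-point depression of a confined liquid to the pore diameter d through a material-dependent constant (standard form; the symbols here are generic, not taken from the paper):

```latex
\Delta T_m(d) \;=\; T_m^{\text{bulk}} - T_m(d) \;=\; \frac{K_{GT}}{d}
```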
Espiritu, Michael J; Cabalteja, Chino C; Sugai, Christopher K; Bingham, Jon-Paul
2014-01-01
Bioactive peptides from Conus venom contain a natural abundance of post-translational modifications that affect their chemical diversity, structural stability, and neuroactive properties. These modifications have continually presented hurdles to their identification and characterization. Early endeavors in their analysis relied on classical biochemical techniques, which have led to the progressive development and use of novel proteomic-based approaches. The critical importance of these post-translationally modified amino acids and their specific assignment cannot be overstated, given their impact on folding, pharmacological selectivity, and potency. Such modifications at the amino acid level may also provide additional insight into the advancement of conopeptide drugs in the quest for precise pharmacological targeting. To achieve this end, a concerted effort between the classical and novel approaches is needed to completely elucidate the role of post-translational modifications in conopeptide structure and dynamics. This paper reflects on the advances in dealing with numerous and multiple post-translationally modified amino acids within conotoxins and conopeptides and provides a summary of the current techniques used in their identification.
Electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state
NASA Astrophysics Data System (ADS)
Petrov, Yu V.; Inogamov, N. A.; Mokshin, A. V.; Galimzyanov, B. N.
2018-01-01
The electrical resistivity and thermal conductivity of liquid aluminum in the two-temperature state are calculated using the relaxation time approach and the structure factor of the ions obtained from molecular dynamics simulation. The resistivity within the Ziman-Evans approach is also considered and found to be higher than that obtained via the relaxation time. Calculations based on constructing the ion structure factor through classical molecular dynamics and a kinetic equation for the electrons are more economical in terms of computing resources and give results close to those of Kubo-Greenwood calculations with quantum molecular dynamics.
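For orientation, the Ziman-type resistivity integral has the schematic form below (prefactors omitted; u(q) is the electron-ion pseudopotential form factor, S(q) the ion structure factor, and k_F the Fermi wavevector):

```latex
\rho \;\propto\; \int_0^{2k_F} \left|u(q)\right|^{2} S(q)\, q^{3}\, dq
```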
Jagannadh, Veerendra Kalyan; Gopakumar, G; Subrahmanyam, Gorthi R K Sai; Gorthi, Sai Siva
2017-05-01
Each year, about 7-8 million deaths occur due to cancer around the world. More than half of the cancer-related deaths occur in the less-developed parts of the world. The cancer mortality rate can be reduced with early detection and subsequent treatment of the disease. In this paper, we introduce a microfluidic microscopy-based, cost-effective, and label-free approach for the identification of cancerous cells. We outline a diagnostic framework for it and detail an instrumentation layout. We have employed classical computer vision techniques: 2D principal component analysis-based cell type representation followed by support vector machine-based classification. Analogous to criminal face recognition systems implemented with the help of surveillance cameras, a signature-based approach for cancerous cell identification using microfluidic microscopy surveillance is demonstrated. Such a platform would facilitate affordable mass screening camps in developing countries and therefore help decrease the cancer mortality rate.
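A minimal sketch of the 2D-PCA-plus-SVM pipeline described above (the random "cell images", the class structure, and all parameters are invented for illustration; this is not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import SVC

def twod_pca_features(images, k=5):
    """2D-PCA: project each image onto the top-k eigenvectors of the image
    covariance matrix computed over rows, without flattening the images."""
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    _, vecs = np.linalg.eigh(G)
    proj = vecs[:, -k:]  # top-k eigenvectors (eigh sorts ascending)
    return np.array([A @ proj for A in images]).reshape(len(images), -1)

# toy two-class "cell image" data differing in intensity statistics
rng = np.random.default_rng(0)
images = np.concatenate([rng.random((50, 32, 32)),
                         rng.random((50, 32, 32)) ** 2])
labels = np.array([0] * 50 + [1] * 50)

feats = twod_pca_features(images)
clf = SVC(kernel="rbf").fit(feats[::2], labels[::2])  # train on half
print("holdout accuracy:", clf.score(feats[1::2], labels[1::2]))
```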
Use of edge-based finite elements for solving three dimensional scattering problems
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Jin, J. M.; Volakis, John L.
1991-01-01
Edge-based finite elements are free from drawbacks associated with node-based vectorial finite elements and are, therefore, ideal for solving 3-D scattering problems. The finite element discretization using edge elements is checked by solving for the resonant frequencies of a closed, inhomogeneously filled metallic cavity. Great improvements in accuracy are observed when compared with the classical node-based approach, with no penalty in terms of computational time and with the expected absence of spurious modes. A performance comparison between edge-based tetrahedral and rectangular brick elements is carried out, and tetrahedral elements are found to be more accurate than rectangular bricks for a given storage intensity. A detailed formulation for the scattering problem with various approaches for terminating the finite element mesh is also presented.
Classical and quantum communication without a shared reference frame.
Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W
2003-07-11
We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.
Struchen, R; Vial, F; Andersson, M G
2017-04-26
Delayed reporting of health data may hamper the early detection of infectious diseases in surveillance systems. Furthermore, combining multiple data streams, e.g. aiming at improving a system's sensitivity, can be challenging. In this study, we used a Bayesian framework where the result is presented as the value of evidence, i.e. the likelihood ratio for the evidence under outbreak versus baseline conditions. Based on a historical data set of routinely collected cattle mortality events, we evaluated outbreak detection performance (sensitivity, time to detection, in-control run length) under the Bayesian approach in three scenarios: delayed data reporting present but not accounted for; delayed data reporting present and accounted for; and delayed data reporting absent (i.e. an ideal system). Performance on larger and smaller outbreaks was compared with a classical approach, considering syndromes separately or combined. We found that the Bayesian approach performed better than the classical approach, especially for the smaller outbreaks. Furthermore, the Bayesian approach performed similarly well in the scenario where delayed reporting was accounted for and in the scenario where it was absent. We argue that the value of evidence framework may be suitable for surveillance systems with multiple syndromes and delayed reporting of data.
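To make the value-of-evidence idea concrete, here is a minimal sketch for a single stream of daily mortality counts (the Poisson rates and counts are invented; evidence from several syndromes could be combined by summing the log-ratios):

```python
import numpy as np
from scipy.stats import poisson

def value_of_evidence(counts, baseline_rate, outbreak_rate):
    """Likelihood ratio V = P(counts | outbreak) / P(counts | baseline)
    for independent Poisson daily counts."""
    log_v = (poisson.logpmf(counts, outbreak_rate).sum()
             - poisson.logpmf(counts, baseline_rate).sum())
    return np.exp(log_v)

counts = np.array([4, 6, 5, 9, 11])  # the last days look elevated
print(value_of_evidence(counts, baseline_rate=5.0, outbreak_rate=8.0))
```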
A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression
Nguyen, Nha; Vo, An; Choi, Inchan
2015-01-01
Studying epigenetic landscapes is important to understand the conditions for gene regulation. Clustering is a useful approach to studying epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches, which often use a representative value of the signals in a fixed-size window, do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information of the epigenetic signals are necessary for a better understanding of gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet transform of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of the epigenetic signals. Dewer separates genes better than a window-based approach in an assessment using gene expression, and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes of the epigenetic signals are useful to study gene regulation. PMID:25383910
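A sketch of the stationary-wavelet-entropy feature behind such clustering (an illustration only, not the Dewer implementation; the wavelet, level, and test signal are assumptions):

```python
import numpy as np
import pywt

def swt_entropy(signal, wavelet="db2", level=3):
    """Shannon entropy of the normalized energies of stationary-wavelet
    detail coefficients across levels."""
    n = 2 ** level * (len(signal) // 2 ** level)  # SWT needs len % 2^level == 0
    coeffs = pywt.swt(signal[:n], wavelet, level=level)
    energies = np.array([np.sum(cD ** 2) for _, cD in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

sig = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
print("stationary wavelet entropy:", swt_entropy(sig))
```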
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
Fuzzy-Rough Nearest Neighbour Classification
NASA Astrophysics Data System (ADS)
Jensen, Richard; Cornelis, Chris
A new fuzzy-rough nearest neighbour (FRNN) classification algorithm is presented in this paper, as an alternative to Sarkar's fuzzy-rough ownership function (FRNN-O) approach. In contrast to the latter, our method uses the nearest neighbours to construct lower and upper approximations of decision classes, and classifies test instances based on their membership of these approximations. In the experimental analysis, we evaluate our approach both with classical fuzzy-rough approximations (based on an implicator and a t-norm) and with the recently introduced vaguely quantified rough sets. Preliminary results are very good, and in general FRNN outperforms FRNN-O, as well as the traditional fuzzy nearest neighbour (FNN) algorithm.
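A compact sketch of the lower/upper-approximation classification rule described above, assuming the Łukasiewicz implicator/t-norm pair and a simple distance-based similarity (illustrative choices, not the authors' code):

```python
import numpy as np

def frnn_classify(X_train, y_train, x, k=5):
    """Score each class by the mean of its lower- and upper-approximation
    memberships, built from the k nearest neighbours of x."""
    d = np.linalg.norm(X_train - x, axis=1)
    sim = 1.0 - d / (d.max() + 1e-12)  # fuzzy similarity in [0, 1]
    nn = np.argsort(d)[:k]
    best, best_score = None, -1.0
    for c in np.unique(y_train):
        member = (y_train[nn] == c).astype(float)
        # Lukasiewicz implicator I(a,b) = min(1, 1-a+b) for the lower approx.
        lower = np.min(np.minimum(1.0, 1.0 - sim[nn] + member))
        # Lukasiewicz t-norm T(a,b) = max(0, a+b-1) for the upper approx.
        upper = np.max(np.maximum(0.0, sim[nn] + member - 1.0))
        score = 0.5 * (lower + upper)
        if score > best_score:
            best, best_score = c, score
    return best

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(frnn_classify(X, y, np.array([2.5, 2.5])))  # expected: class 1
```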
ERIC Educational Resources Information Center
MacMillan, Peter D.
2000-01-01
Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…
Impact of Chinese Culture on Pre-service Science Teachers' Views of the Nature of Science
NASA Astrophysics Data System (ADS)
Wan, Dongsheng; Zhang, Hongshia; Wei, Bing
2018-04-01
This study examines Chinese pre-service teachers' (N = 30) views on the nature of science (NOS) and how Chinese culture influences their views. Participants were from two teachers' universities in eastern China. Since this was an exploratory and interpretive study, a scenario-based interview approach was adopted. The results indicated that the participants held distinctive views about the five key aspects of NOS. Many participants held alternative and contemporary views of NOS, but few held classical views. In fact, teachers adopted features of the Confucian Doctrine of the Mean, either consciously or unconsciously, to account for their views of NOS. This research suggests that the Doctrine of the Mean affected Chinese teachers' views of NOS, leaving them rather deficient in their understanding of classical NOS. Based on empirical data, it is argued that science teacher training in China should focus on the content and objectives of classical NOS, rather than just teaching contemporary views of NOS. Taking Chinese culture into consideration, science teacher education in China cannot simply import strategies for teaching the classical views of NOS from the developed world, but should develop, design and contextualize local strategies suitable for the training of Chinese science teachers. Some issues for further investigation of learners' views of NOS in non-Western contexts are suggested as implications of this study.
Atomic-Scale Lightning Rod Effect in Plasmonic Picocavities: A Classical View to a Quantum Effect.
Urbieta, Mattin; Barbry, Marc; Zhang, Yao; Koval, Peter; Sánchez-Portal, Daniel; Zabala, Nerea; Aizpurua, Javier
2018-01-23
Plasmonic gaps are known to produce nanoscale localization and enhancement of optical fields, providing small effective mode volumes of about a few hundred nm 3 . Atomistic quantum calculations based on time-dependent density functional theory reveal the effect of subnanometric localization of electromagnetic fields due to the presence of atomic-scale features at the interfaces of plasmonic gaps. Using a classical model, we explain this as a nonresonant lightning rod effect at the atomic scale that produces an extra enhancement over that of the plasmonic background. The near-field distribution of atomic-scale hot spots around atomic features is robust against dynamical screening and spill-out effects and follows the potential landscape determined by the electron density around the atomic sites. A detailed comparison of the field distribution around atomic hot spots from full quantum atomistic calculations and from the local classical approach considering the geometrical profile of the atoms' electronic density validates the use of a classical framework to determine the effective mode volume in these extreme subnanometric optical cavities. This finding is of practical importance for the community of surface-enhanced molecular spectroscopy and quantum nanophotonics, as it provides an adequate description of the local electromagnetic fields around atomic-scale features with use of simplified classical methods.
Classical plasma dynamics of Mie-oscillations in atomic clusters
NASA Astrophysics Data System (ADS)
Kull, H.-J.; El-Khawaldeh, A.
2018-04-01
Mie plasmons are of basic importance for the absorption of laser light by atomic clusters. In this work we first review the classical Rayleigh theory of a dielectric sphere in an external electric field and Thomson's plum-pudding model applied to atomic clusters. Both approaches allow for elementary discussions of Mie oscillations; however, they also indicate deficiencies in describing the damping mechanisms by electrons crossing the cluster surface. Nonlinear oscillator models have been widely studied to gain an understanding of damping and absorption by outer ionization of the cluster. In the present work, we attempt to address the issue of plasmon relaxation in atomic clusters in more detail based on classical particle simulations. In particular, we wish to study the role of thermal motion in plasmon relaxation, thereby extending nonlinear models of collective single-electron motion. Our simulations are particularly adapted to the regime of classical kinetics in weakly coupled plasmas and to cluster sizes exceeding the Debye screening length. It will be illustrated how surface scattering leads to the relaxation of Mie oscillations in the presence of thermal motion and of electron spill-out at the cluster surface. This work is intended to give, from a classical perspective, further insight into recent work on plasmon relaxation in quantum plasmas [1].
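For reference, the Mie resonance of a neutral metallic sphere sits at the bulk plasma frequency reduced by a factor of √3 (standard result, Gaussian units; n is the electron density):

```latex
\omega_{\mathrm{Mie}} \;=\; \frac{\omega_p}{\sqrt{3}}, \qquad \omega_p^{2} \;=\; \frac{4\pi n e^{2}}{m_e}
```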
One-loop quantum gravity repulsion in the early Universe.
Broda, Bogusław
2011-03-11
Perturbative quantum gravity formalism is applied to compute the lowest-order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for radiation). The presented approach is analogous to the approach applied to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to the approach applied to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), have (UV-cutoff-free) repulsive properties, opposite to the classical behavior, which are not negligible in the very early Universe. The repulsive "quantum forces" resemble those known from loop quantum cosmology.
Tanaka, Kuniya; Murakami, Takashi; Matsuo, Kenichi; Hiroshima, Yukihiko; Endo, Itaru; Ichikawa, Yasushi; Taguri, Masataka; Koda, Keiji
2015-01-01
Although a 'liver-first' approach has recently been advocated in treating synchronous colorectal metastases, little is known about how its results compare with those of the classical approach among patients with similar grades of liver metastases. Propensity-score matching was used to select study subjects. Oncologic outcomes were compared between 10 consecutive patients with unresectable, advanced, and aggressive synchronous colorectal liver metastases treated with the reverse strategy and 30 comparable classically treated patients. Numbers of recurrence sites and of recurrent tumors irrespective of recurrence site were greater in the reverse group than in the classical group (p = 0.003 and p = 0.015, respectively). Rates of freedom from recurrence in the remaining liver and of freedom from disease were also poorer in the reverse group than in the classical group (p = 0.009 and p = 0.043, respectively). Among patients treated with two-stage hepatectomy, the frequency of microvascular invasion surrounding macroscopic metastases at the second resection was higher in the reverse group than in the classical group (p = 0.011). Reverse approaches may be feasible in treating synchronous liver metastases, but the strategy should be limited to patients with a lower liver tumor burden. © 2015 S. Karger AG, Basel.
Tunable quantum interference in a 3D integrated circuit.
Chaboyer, Zachary; Meany, Thomas; Helt, L G; Withford, Michael J; Steel, M J
2015-04-27
Integrated photonics promises solutions to questions of stability, complexity, and size in quantum optics. Advances in tunable and non-planar integrated platforms, such as laser-inscribed photonics, continue to bring the realisation of quantum advantages in computation and metrology ever closer, perhaps most easily seen in multi-path interferometry. Here we demonstrate control of two-photon interference in a chip-scale 3D multi-path interferometer, showing a reduced periodicity and enhanced visibility compared to single photon measurements. Observed non-classical visibilities are widely tunable, and explained well by theoretical predictions based on classical measurements. With these predictions we extract Fisher information approaching a theoretical maximum. Our results open a path to quantum enhanced phase measurements.
Schwaibold, M; Schöchlin, J; Bolz, A
2002-01-01
For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
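The contrast the study draws can be made concrete on a toy calibration problem: a regression-based confidence interval costs one fit plus a Jacobian (tens to hundreds of model runs), while a Bayesian credible interval needs a long sampling chain. The sketch below is illustrative only, assuming a made-up exponential-decay model and Gaussian noise; it is not the Lu et al. code.

```python
# Toy comparison: linearized (regression) CI vs Metropolis credible interval.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
y_obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)

def model(t, a, k):
    return a * np.exp(-k * t)

# Regression-based "confidence interval": one fit, few model evaluations.
popt, pcov = curve_fit(model, t, y_obs, p0=[1.0, 0.1])
se = np.sqrt(np.diag(pcov))
print("95% CI (linearized):", popt - 1.96 * se, popt + 1.96 * se)

# Bayesian "credible interval": random-walk Metropolis, many model runs.
def log_post(theta):
    a, k = theta
    if a <= 0 or k <= 0:                      # flat positivity prior (assumption)
        return -np.inf
    resid = y_obs - model(t, a, k)
    return -0.5 * np.sum(resid**2) / 0.05**2

theta, lp, chain = np.array([1.0, 0.1]), log_post([1.0, 0.1]), []
for _ in range(20000):                        # 2e4 model runs vs ~1e2 above
    prop = theta + rng.normal(0, 0.01, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print("95% credible interval:",
      np.percentile(np.array(chain)[5000:], [2.5, 97.5], axis=0))
```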
An entropy method for induced drag minimization
NASA Technical Reports Server (NTRS)
Greene, George C.
1989-01-01
A fundamentally new approach to the aircraft minimum induced drag problem is presented. The method, a 'viscous lifting line', is based on the minimum entropy production principle and does not require the planar wake assumption. An approximate, closed-form solution is obtained for several wing configurations, including a comparison of wing extension, winglets, and in-plane wing sweep, with and without a constraint on wing-root bending moment. Like the classical lifting-line theory, this theory predicts that induced drag is proportional to the square of the lift coefficient and inversely proportional to the wing aspect ratio. Unlike the classical theory, it predicts that induced drag is Reynolds number dependent and that the optimum spanwise circulation distribution is non-elliptic.
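For reference, the classical inviscid lifting-line baseline that the abstract departs from can be stated in two lines; the paper's viscous, Reynolds-dependent correction is not reproduced here. The span-efficiency factor e = 1 (elliptic loading) is an assumption of the sketch.

```python
# Classical lifting-line scaling: induced drag ~ CL^2 / (pi * AR * e).
import math

def induced_drag_coeff(CL: float, aspect_ratio: float, e: float = 1.0) -> float:
    """Classical induced-drag coefficient; e = 1 for elliptic loading."""
    return CL**2 / (math.pi * aspect_ratio * e)

print(induced_drag_coeff(CL=0.8, aspect_ratio=8.0))  # ~0.0255
```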
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sang-Bong
1993-09-01
Quantum manifestation of classical chaos has been one of the extensively studied subjects for more than a decade. Yet a clear understanding of its nature remains an open question, partly due to the lack of a canonical definition of quantum chaos. The classical definition seems unsuitable in quantum mechanics, partly because of the Heisenberg uncertainty principle. In this regard, the term quantum chaos is somewhat misleading and needs to be clarified at the very fundamental level of physics. Since quantum mechanics is more fundamental than classical mechanics, the quantum description of classically chaotic nature should be attainable in the limit of large quantum numbers. The focus of my research, therefore, lies on the correspondence principle for classically chaotic systems. The chaotic damped driven pendulum is mainly studied numerically using the split operator method that solves the time-dependent Schroedinger equation. For classically dissipative chaotic systems in which (multi)fractal strange attractors often emerge, several quantum dissipative mechanisms are also considered. For instance, Hoover's and Kubo-Fox-Keizer's approaches are studied with some computational analyses. But the notion of complex energy with non-Hermiticity is extensively applied. Moreover, the Wigner and Husimi distribution functions are examined with an equivalent classical distribution in phase space, and dynamical properties of the wave packet in configuration and momentum spaces are also explored. The results indicate that quantum dynamics embraces classical dynamics, although the classical-quantum correspondence fails to be observed in the classically chaotic regime. Even in the semi-classical limit, classically chaotic phenomena would eventually be suppressed by the quantum uncertainty.
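The split operator method mentioned above is standard enough to sketch: a minimal FFT-based Strang-splitting step for the 1D time-dependent Schroedinger equation in atomic units, with a placeholder harmonic potential standing in for the pendulum problem actually studied in the thesis.

```python
# Split-operator (Strang) propagation: half potential step, full kinetic
# step in k-space, half potential step. hbar = m = 1.
import numpy as np

N, L, dt = 512, 40.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                        # placeholder potential (assumption)

psi = np.exp(-(x - 1.0) ** 2)         # initial Gaussian wave packet
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))

expV = np.exp(-0.5j * dt * V)         # half-step in the potential
expT = np.exp(-0.5j * dt * k**2)      # full kinetic step, T = k^2/2

for _ in range(1000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi
```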
NASA Astrophysics Data System (ADS)
Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea
2018-02-01
The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on circuit functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once, stored in a BB (foundry dependent) library, and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.
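The gist of the polynomial-chaos machinery can be illustrated on a scalar toy problem: project a response onto probabilists' Hermite polynomials by stochastic collocation and read the mean and variance directly off the coefficients, with no Monte Carlo loop. The response function below is a placeholder, not a photonic macro-model.

```python
# Hermite polynomial-chaos expansion of a toy response to a N(0,1) parameter.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

response = lambda xi: np.exp(0.3 * xi)      # placeholder response (assumption)

order, nq = 6, 20
nodes, weights = He.hermegauss(nq)          # Gauss quadrature, weight exp(-x^2/2)
norm = sqrt(2 * pi)

coeffs = []
for n in range(order + 1):
    Hn = He.hermeval(nodes, [0] * n + [1])  # He_n at the collocation nodes
    coeffs.append(np.sum(weights * response(nodes) * Hn) / (norm * factorial(n)))

mean = coeffs[0]
var = sum(factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
print(mean, var)  # compare exact: mean exp(0.045), var exp(0.09)*(exp(0.09)-1)
```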
USDA-ARS?s Scientific Manuscript database
Controlling classical swine fever (CSF) involves vaccination in endemic regions and preemptive slaughter of infected swine herds during epidemics. Generally, live attenuated vaccines induce solid immunity. Using diverse approaches, reverse genetics has been useful in developing classical swine fever...
Could a Mobile-Assisted Learning System Support Flipped Classrooms for Classical Chinese Learning?
ERIC Educational Resources Information Center
Wang, Y.-H.
2016-01-01
In this study, the researcher aimed to develop a mobile-assisted learning system and to investigate whether it could promote teenage learners' classical Chinese learning through the flipped classroom approach. The researcher first proposed the structure of the Cross-device Mobile-Assisted Classical Chinese (CMACC) system according to the pilot…
Rediscovering the Classics: The Project Approach.
ERIC Educational Resources Information Center
Townsend, Ruth; Lubell, Marcia
Focusing on seven classics of literature that are most challenging for teachers and students, but which are also a part of the high school literary canon, this book shares ways to create a learner-centered classroom for the study of literature. For each of the seven classics, the book "walks teachers through" the teaching-learning…
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for CO2-Ar molecular motion. The calculations utilize the most accurate vibrationally independent ab initio potential energy surface (PES) of Hutson et al., expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on the rotational quantum number J up to J = 100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To make the picture complete, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, has demonstrated poor accuracy at almost all temperatures. On the contrary, classical broadening coefficients are in excellent agreement both with measurements and with quantum results at all temperatures. The classical impact theory in its present variant is capable of producing, quickly and accurately, the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high Js) using a full-dimensional ab initio based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
Fluctuating local field method probed for a description of small classical correlated lattices
NASA Astrophysics Data System (ADS)
Rubtsov, Alexey N.
2018-05-01
Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. The standard local field approach, as well as the classical limit of the bosonic DMFT method, does not provide a satisfactory description of small Ising and Heisenberg lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach, in which the local field appears as a fluctuating quantity related to the low-energy degree(s) of freedom.
Zhao, Weixiang; Davis, Cristina E.
2011-01-01
Objective This paper introduces a modified artificial immune system (AIS)-based pattern recognition method to enhance the recognition ability of the existing conventional AIS-based classification approach, and demonstrates the superiority of the proposed new AIS-based method via two case studies of breast cancer diagnosis. Methods and materials Conventionally, the AIS approach is often coupled with the k nearest neighbor (k-NN) algorithm to form a classification method called AIS-kNN. In this paper we discuss the basic principle and possible problems of this conventional approach, and propose a new approach where AIS is integrated with radial basis function-partial least squares regression (AIS-RBFPLS). Additionally, both AIS-based approaches are compared with two classical and powerful machine learning methods: the back-propagation neural network (BPNN) and the orthogonal radial basis function network (Ortho-RBF network). Results The diagnosis results show that: (1) both AIS-kNN and AIS-RBFPLS proved to be good machine learning methods for clinical diagnosis, but the proposed AIS-RBFPLS generated an even lower misclassification ratio, especially in the cases where the conventional AIS-kNN approach generated poor classification results because of possibly improper AIS parameters. For example, based upon the AIS memory cells of "replacement threshold = 0.3", the average misclassification ratios of the two approaches for study 1 are 3.36% (AIS-RBFPLS) and 9.07% (AIS-kNN), and the misclassification ratios for study 2 are 19.18% (AIS-RBFPLS) and 28.36% (AIS-kNN); (2) the proposed AIS-RBFPLS demonstrated robustness with respect to the AIS-created memory cells, showing a smaller standard deviation of the results from the multiple trials than AIS-kNN. For example, using the result from the first set of AIS memory cells as an example, the standard deviations of the misclassification ratios for study 1 are 0.45% (AIS-RBFPLS) and 8.71% (AIS-kNN), and those for study 2 are 0.49% (AIS-RBFPLS) and 6.61% (AIS-kNN); and (3) the proposed AIS-RBFPLS classification approach also yielded better diagnosis results than the two classical neural network approaches of BPNN and the Ortho-RBF network. Conclusion In summary, this paper proposed a new machine learning method for complex systems by integrating the AIS system with RBFPLS. This new method demonstrates a satisfactory effect on classification accuracy for clinical diagnosis, and also indicates wide potential applications to other diagnosis and detection problems. PMID:21515033
Follicular patterned lesions of the thyroid gland: a practical algorithmic approach.
Chetty, Runjan
2011-09-01
Follicular patterned lesions of the thyroid are problematic and interpretation is often subjective. While thyroid experts are comfortable with their own criteria and thresholds, those encountering these lesions sporadically have a degree of uncertainty with a proportion of cases. The purpose of this review is to highlight the importance of proper diligent sampling of an encapsulated thyroid lesion (in totality in many cases), examination for capsular and vascular invasion, and finally the assessment of nuclear changes that are pathognomonic of papillary thyroid carcinoma (PTC). Based on these established criteria, an algorithmic approach is suggested using known, accepted terminology. The importance of unequivocal, clear-cut nuclear features of PTC as opposed to inconclusive features is stressed. If the nuclear features in an encapsulated, non-invasive follicular patterned lesion fall short of those encountered in classical PTC, but nonetheless are still worrying or concerning, the term 'uncertain malignant potential or behaviour, most likely benign' is suggested. Indubitable, classical PTC nuclei (whether diffuse or restricted to a single high-power field) are diagnostic of a PTC be it classical, non-invasive or invasive follicular variant PTC. Capsular and vascular invasion remain the only reliable predictors of outcome, as non-invasive, encapsulated follicular variant PTC, even with diffuse PTC nuclear change, behaves in an indolent fashion.
On some methods for assessing earthquake predictions
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.; Peresan, A.
2017-09-01
A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based version of PG leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.
Validity of the toposequence approach along a rainfall gradient at a desert fringe
NASA Astrophysics Data System (ADS)
Yair, Aaron
2017-04-01
According to the "classic" toposequence approach, a soil's properties are closely related to its position along a slope. The positional differences in soil properties are usually attributed to spatial differences in runoff, erosion, and deposition processes. These processes reflect long-term effects of the spatial redistribution of water, solids, and soluble materials, which are of great importance for nutrient cycling at the landscape scale and for the structuring of natural ecosystems. The "classic" toposequence approach has been seriously questioned by Sommer and Schlichting (1997). They were followed by many scientists of various disciplines (hydrology, ecology, paleopedology, paleoclimate, etc.). The present study covers three toposequences, located in southern Israel, along an average annual rainfall gradient of 90-300 mm. The classic toposequence approach does not apply to any of them, and the controlling factors vary from one site to another.
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets
Savitski, Mikhail M.; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-01-01
Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further adds to the challenge, and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel, target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the higher score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications, and is readily implemented in proteomics analysis software. PMID:25987413
ERIC Educational Resources Information Center
Sheikh, Irfan; Bagley, Carl
2018-01-01
The article uncovers the complex process of educational policy enactment and the impact this process has on teachers as policy actors as they undertake the task of introducing a new mathematics curriculum in a Canadian secondary school. The three year study based on in-depth qualitative interviews adopts a classic grounded theory approach of…
Thomas, W. Kelley; Vida, J. T.; Frisse, Linda M.; Mundo, Manuel; Baldwin, James G.
1997-01-01
To effectively integrate DNA sequence analysis and classical nematode taxonomy, we must be able to obtain DNA sequences from formalin-fixed specimens. Microdissected sections of nematodes were removed from specimens fixed in formalin, using standard protocols and without destroying morphological features. The fixed sections provided sufficient template for multiple polymerase chain reaction-based DNA sequence analyses. PMID:19274156
Dense matter theory: A simple classical approach
NASA Astrophysics Data System (ADS)
Savić, P.; Čelebonović, V.
1994-07-01
In the sixties, the first author, together with R. Kašanin, started developing a mean-field theory of dense matter. It is based on the Coulomb interaction, supplemented by a microscopic selection rule and a set of experimentally founded postulates. Applications of the theory range from the calculation of models of planetary internal structure to DAC experiments.
Tree Biomass Estimation of Chinese fir (Cunninghamia lanceolata) Based on Bayesian Method
Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring of Chinese forest carbon stocks. In this study, an allometric equation was used to analyze tree biomass of Chinese fir. Common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that its parameters are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two kinds of priors were introduced: non-informative and informative. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from the published literature were regarded as prior distributions in the Bayesian model. The Bayesian method with informative priors performed better than both the non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass. PMID:24278198
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring of Chinese forest carbon stocks. In this study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. Common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that its parameters are better represented by probability distributions than by the fixed values of the classical method. To deal with this problem, a Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two kinds of priors were introduced: non-informative and informative. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from the published literature were regarded as prior distributions in the Bayesian model. The Bayesian method with informative priors performed better than both the non-informative priors and the classical method, providing a reasonable approach for estimating Chinese fir biomass.
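A minimal sketch of the Bayesian fit described in the two records above, assuming log-normal errors so the allometric model W = a(D²H)^b becomes linear in ln(D²H). The data, prior means, and proposal widths are all placeholders, not the papers' values.

```python
# Metropolis sampler for ln W = ln a + b ln(D^2 H) with informative normal
# priors on (ln a, b), standing in for the 32 published equations.
import numpy as np

rng = np.random.default_rng(1)
d2h = rng.uniform(50, 5000, 40)                   # synthetic D^2*H predictor
lnW = -3.1 + 0.92 * np.log(d2h) + rng.normal(0, 0.2, 40)

mu0, sd0 = np.array([-3.0, 0.9]), np.array([0.5, 0.1])   # informative priors

def log_post(theta, sigma=0.2):
    prior = -0.5 * np.sum(((theta - mu0) / sd0) ** 2)
    resid = lnW - (theta[0] + theta[1] * np.log(d2h))
    return prior - 0.5 * np.sum(resid**2) / sigma**2

theta, lp, samples = mu0.copy(), log_post(mu0), []
for _ in range(30000):
    prop = theta + rng.normal(0, [0.05, 0.01])
    lp_p = log_post(prop)
    if np.log(rng.uniform()) < lp_p - lp:
        theta, lp = prop, lp_p
    samples.append(theta.copy())
post = np.array(samples)[10000:]                  # discard burn-in
print("posterior means of (ln a, b):", post.mean(axis=0))
```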
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically feature remote data points and other types of deviations from normality. This study also discusses results of simulation studies of the power of these tests against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
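For reference, the moment-based Jarque-Bera statistic criticized above is JB = (n/6)(S² + (K − 3)²/4), built from the sample skewness S and kurtosis K. The sketch below shows its zero breakdown value: a single wild observation dominates the statistic.

```python
# Jarque-Bera from sample moments; JB ~ chi^2(2) under normality,
# so values above ~5.99 reject at the 5% level.
import numpy as np

def jarque_bera(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s = np.sqrt(np.mean(z**2))
    skew = np.mean(z**3) / s**3
    kurt = np.mean(z**4) / s**4
    return n / 6.0 * (skew**2 + 0.25 * (kurt - 3.0) ** 2)

rng = np.random.default_rng(2)
clean = rng.normal(size=500)
print(jarque_bera(clean))                   # small: normality not rejected
print(jarque_bera(np.append(clean, 25.0)))  # one outlier blows the test up
```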
Lorentz-invariant three-vectors and alternative formulation of relativistic dynamics
NASA Astrophysics Data System (ADS)
Rȩbilas, Krzysztof
2010-03-01
Besides the well-known scalar invariants, there also exist vectorial invariants in special relativity. It is shown that the three-vector (dp⃗/dt)∥+γv(dp⃗/dt)⊥ is invariant under the Lorentz transformation. The subscripts ∥ and ⊥ denote the respective components with respect to the direction of the velocity of the body v⃗, and p⃗ is the relativistic momentum. We show that this vector is equal to a force F⃗R, which satisfies the classical Newtonian law F⃗R=ma⃗R in the instantaneous inertial rest frame of an accelerating body. Therefore, the relation F⃗R=(dp⃗/dt)∥+γv(dp⃗/dt)⊥, based on the Lorentz-invariant vectors, may be used as an invariant (not merely a covariant) relativistic equation of motion in any inertial system of reference. An alternative approach to classical electrodynamics based on the invariant three-vectors is proposed.
Efficient occupancy model-fitting for extensive citizen-science data.
Dennis, Emily B; Morgan, Byron J T; Freeman, Stephen N; Ridout, Martin S; Brereton, Tom M; Fox, Richard; Powney, Gary D; Roy, David B
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species' range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists.
Efficient occupancy model-fitting for extensive citizen-science data
Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen scientists. PMID:28328937
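A hedged sketch of the classical, random-effect-free strategy advocated in the preceding abstract: occupancy is a logistic regression on a site covariate, detection probability is constant, and the occupancy likelihood is maximized directly. The simulated data, single covariate, and constant detection probability are assumptions for illustration; this is not the authors' code.

```python
# Maximum-likelihood occupancy model: psi_i = logistic(b0 + b1 * covariate),
# constant detection p, J repeat visits per site.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
S, J = 400, 5                                # sites, visits per site
cov = rng.normal(size=S)                     # environmental covariate
z = rng.uniform(size=S) < expit(-0.5 + 1.2 * cov)   # latent occupancy
y = rng.binomial(J, 0.4 * z)                 # detections (p = 0.4 if occupied)

def nll(par):
    b0, b1, logit_p = par
    psi = expit(b0 + b1 * cov)
    p = expit(logit_p)
    # binomial coefficient omitted: constant in the parameters
    like_det = psi * p**y * (1 - p) ** (J - y)       # detected at least once
    like_zero = psi * (1 - p) ** J + (1 - psi)       # never detected
    like = np.where(y > 0, like_det, like_zero)
    return -np.sum(np.log(like))

fit = minimize(nll, x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)   # estimates of (b0, b1, logit p)
```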
Mojdehi, Ahmad R; Holmes, Douglas P; Dillard, David A
2017-10-25
A generalized scaling law, based on the classical fracture mechanics approach, is developed to predict the bond strength of adhesive systems. The proposed scaling relationship depends on the rate of change of debond area with compliance, rather than on the ratio of area to compliance. This distinction can have a profound impact on the expected bond strength of systems, particularly when the failure mechanism changes or the compliance of the load train increases. Based on the classical fracture mechanics approach for rate-independent materials, the load train compliance should not affect the force capacity of the adhesive system, whereas when the area-to-compliance ratio is used as the scaling parameter, it directly influences the bond strength, making it necessary to distinguish compliance contributions. To verify the scaling relationship, single lap shear tests were performed on pressure sensitive adhesive (PSA) tape specimens with different bond areas, numbers of backing layers, and load train compliances. The shear lag model was used to derive closed-form relationships for the system compliance and its derivative with respect to the debond area. Digital image correlation (DIC) was implemented to verify the non-uniform shear stress distribution obtained from the shear lag model in a lap shear geometry. The results obtained from this approach could lead to a better understanding of the relationship between bond strength and the geometry and mechanical properties of adhesive systems.
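The classical fracture mechanics relation behind such a scaling law is G = (F²/2) dC/dA, so a measured compliance-versus-debond-area curve predicts the debond force as F_c = √(2G_c / (dC/dA)). A numerical sketch under stated assumptions, with synthetic compliance data rather than the paper's shear-lag expressions:

```python
# Energy-release-rate prediction of bond strength from compliance data.
import numpy as np

A = np.linspace(2e-4, 8e-4, 13)      # debond area, m^2
C = 1e-6 + 0.5 * A**2                # toy system compliance C(A), m/N
Gc = 120.0                           # critical energy release rate, J/m^2

dCdA = np.gradient(C, A)             # numerical dC/dA
Fc = np.sqrt(2.0 * Gc / dCdA)        # predicted debond force at each area
print(Fc)
```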
Metagenetic tools for the census of marine meiofaunal biodiversity: An overview.
Carugati, Laura; Corinaldesi, Cinzia; Dell'Anno, Antonio; Danovaro, Roberto
2015-12-01
Marine organisms belonging to the meiofauna (size range: 20-500 μm) are amongst the most abundant and highly diversified metazoans on Earth, including 22 of the 35 known animal phyla and accounting for more than 2/3 of the abundance of metazoan organisms. In any marine system, meiofauna play a key role in the functioning of food webs and sustain important ecological processes. Estimates of meiofaunal biodiversity have so far been almost exclusively based on morphological analyses, but the very small size of these organisms and, in some cases, the paucity of distinctive morphological features considerably limit the census of the biodiversity of this component. Molecular approaches, recently applied also to small invertebrates (including meiofauna), can offer new momentum for the census of meiofaunal biodiversity. Here, we provide an overview of the application of metagenetic approaches based on next generation sequencing platforms to the study of meiofaunal biodiversity, with a special focus on marine nematodes. Our overview shows that, although such approaches can represent a useful tool for the census of meiofaunal biodiversity, there are still several shortcomings and pitfalls that prevent their extensive use without the support of classical taxonomic identification. Future investigations are needed to address these problems and to reconcile the contrasting findings emerging from classical taxonomic and molecular/bioinformatic tools. Copyright © 2015. Published by Elsevier B.V.
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P
2017-03-01
Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Alegre-Cortés, J; Soto-Sánchez, C; Pizá, Á G; Albarracín, A L; Farfán, F D; Felice, C J; Fernández, E
2016-07-15
Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuronal responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural populations under sensory stimulation have usually been analyzed with linear approaches. In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven, template-free algorithm, plus the Hilbert transform as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution. The proposed approach was able to extract oscillatory information from neurophysiological data of deep vibrissal nerve and visual cortex multiunit recordings that was not evident using linear approaches with fixed bases such as Fourier analysis. Texture discrimination performance increased when NA-MEMD plus the Hilbert transform was implemented, compared to linear techniques, and cortical oscillatory population activity was analyzed with increased time-frequency resolution. Noise-Assisted Multivariate Empirical Mode Decomposition plus the Hilbert transform is thus an improved method for analyzing neuronal population oscillatory dynamics, overcoming the linearity and stationarity assumptions of classical methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Cima, Igor; Wen Yee, Chay; Iliescu, Florina S; Phyo, Wai Min; Lim, Kiat Hon; Iliescu, Ciprian; Tan, Min Han
2013-01-01
This review will cover the recent advances in label-free approaches to isolate and manipulate circulating tumor cells (CTCs). In essence, label-free approaches do not rely on antibodies or biological markers for labeling the cells of interest, but enrich them using the differential physical properties intrinsic to cancer and blood cells. We will discuss technologies that isolate cells based on their biomechanical and electrical properties. Label-free approaches to analyze CTCs have been recently invoked as a valid alternative to "marker-based" techniques, because classical epithelial and tumor markers are lost on some CTC populations and there is no comprehensive phenotypic definition for CTCs. We will highlight the advantages and drawbacks of these technologies and the status on their implementation in the clinics.
Causse, Elsa; Félonneau, Marie-Line
2014-01-01
Research on uniqueness has widely focused on cross-cultural comparisons and tends to postulate a certain form of within-culture homogeneity. Taking the opposite course to this classic posture, we aimed to test an integrative approach enabling the study of within-culture variations in uniqueness. This approach considered different sources of variation: social status, gender, life contexts, and interpersonal comparison. Four hundred seventy-nine participants completed a measure based on descriptions of "self" and "other." Results showed important variations in uniqueness. An interaction between social status and life contexts revealed the expression of uniqueness in the low-status group. This study highlights the complexity of uniqueness, which appears to be related to both cultural ideology and social hierarchy.
Cumulants, free cumulants and half-shuffles
Ebrahimi-Fard, Kurusch; Patras, Frédéric
2015-01-01
Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences, when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described and often introduced in terms of non-crossing set partitions. The formal series approach to classical and free cumulants also largely differs. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-)unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants. PMID:27547078
A quantum probability framework for human probabilistic inference.
Trueblood, Jennifer S; Yearsley, James M; Pothos, Emmanuel M
2017-09-01
There is considerable variety in human inference (e.g., a doctor inferring the presence of a disease, a juror inferring the guilt of a defendant, or someone inferring future weight loss based on diet and exercise). As such, people display a wide range of behaviors when making inference judgments. Sometimes, people's judgments appear Bayesian (i.e., normative), but in other cases, judgments deviate from the normative prescription of classical probability theory. How can we combine both Bayesian and non-Bayesian influences in a principled way? We propose a unified explanation of human inference using quantum probability theory. In our approach, we postulate a hierarchy of mental representations, from 'fully' quantum to 'fully' classical, which could be adopted in different situations. In our hierarchy of models, moving from the lowest level to the highest involves changing assumptions about compatibility (i.e., how joint events are represented). Using results from 3 experiments, we show that our modeling approach explains 5 key phenomena in human inference including order effects, reciprocity (i.e., the inverse fallacy), memorylessness, violations of the Markov condition, and antidiscounting. As far as we are aware, no existing theory or model can explain all 5 phenomena. We also explore transitions in our hierarchy, examining how representations change from more quantum to more classical. We show that classical representations provide a better account of data as individuals gain familiarity with a task. We also show that representations vary between individuals, in a way that relates to a simple measure of cognitive style, the Cognitive Reflection Test. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A signed particle formulation of non-relativistic quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-09-15
A formulation of non-relativistic quantum mechanics in terms of Newtonian particles is presented in the shape of a set of three postulates. In this new theory, quantum systems are described by ensembles of signed particles which behave as field-less classical objects which carry a negative or positive sign and interact with an external potential by means of creation and annihilation events only. This approach is shown to be a generalization of the signed particle Wigner Monte Carlo method which reconstructs the time-dependent Wigner quasi-distribution function of a system and, therefore, the corresponding Schrödinger time-dependent wave-function. Its classical limit is discussed and a physical interpretation, based on experimental evidences coming from quantum tomography, is suggested. Moreover, in order to show the advantages brought by this novel formulation, a straightforward extension to relativistic effects is discussed. To conclude, quantum tunnelling numerical experiments are performed to show the validity of the suggested approach.
NASA Astrophysics Data System (ADS)
Wan, S.; He, W.
2016-12-01
The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as the prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality used to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in effect, it combines statistics and dynamics to a certain extent.
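A sketch of the twin-experiment setup described above, under stated assumptions: standard Lorenz (1963) parameters, a periodic perturbation of the ρ term standing in for the "periodic evolutionary function", and RK4 integration. The residual between the two runs is the model-error signal that an EM procedure would be trained on; the perturbation form and all constants are placeholders.

```python
# Twin experiment: perturbed Lorenz-63 as "reality", unperturbed as model.
import numpy as np

def lorenz(state, t, eps=0.0):
    x, y, z = state
    dx = 10.0 * (y - x)
    dy = x * (28.0 + eps * np.sin(0.1 * t)) - y - x * z  # periodic model error
    dz = x * y - (8.0 / 3.0) * z
    return np.array([dx, dy, dz])

def rk4(f, state, t, dt, **kw):
    k1 = f(state, t, **kw)
    k2 = f(state + 0.5 * dt * k1, t + 0.5 * dt, **kw)
    k3 = f(state + 0.5 * dt * k2, t + 0.5 * dt, **kw)
    k4 = f(state + dt * k3, t + dt, **kw)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n = 0.01, 5000
truth = np.array([1.0, 1.0, 1.0])
model = truth.copy()
obs, pred = [], []
for i in range(n):
    truth = rk4(lorenz, truth, i * dt, dt, eps=2.0)   # "reality"
    model = rk4(lorenz, model, i * dt, dt, eps=0.0)   # imperfect model
    obs.append(truth)
    pred.append(model)
err = np.array(obs) - np.array(pred)   # model-error signal for EM
```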
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta; ...
2016-08-29
In this contribution, the advantages and limitations of two computational techniques that can be used for the investigation of nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure-Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure-Activity Relationships, such as Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) employed for nanomaterials). Both approaches were compared according to selected criteria, including: efficiency, type of experimental data, class of nanomaterials, time required for calculations and computational cost, and difficulties in interpretation. Taking into account the advantages and limitations of each method, we provide recommendations for nano-QSAR modellers and QSAR model users to be able to determine a proper and efficient methodology to investigate the biological activity of nanoparticles, in order to describe the underlying interactions in the most reliable and useful manner.
A Novel DEM Approach to Simulate Block Propagation on Forested Slopes
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Dorren, Luuk; Berger, Frédéric
2018-03-01
In order to model rockfall on forested slopes, we developed a trajectory rockfall model based on the discrete element method (DEM). This model is able to take the complex mechanical processes at work during an impact into account (large deformations, complex contact conditions) and can explicitly simulate block/soil, block/tree contacts as well as contacts between neighbouring trees. In this paper, we describe the DEM model developed and we use it to assess the protective effect of different types of forest. In addition, we compared it with a more classical rockfall simulation model. The results highlight that forests can significantly reduce rockfall hazard and that the spatial structure of coppice forests has to be taken into account in rockfall simulations in order to avoid overestimating the protective role of these forest structures against rockfall hazard. In addition, the protective role of the forests is mainly influenced by the basal area. Finally, the advantages and limitations of the DEM model were compared with classical rockfall modelling approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta
In this contribution, the advantages and limitations of two computational techniques that can be used for the investigation of nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure-Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure-Activity Relationships, such as Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA) employed for nanomaterials). Both approaches were compared according to selected criteria, including: efficiency, type of experimental data, class of nanomaterials, time required for calculations and computational cost, and difficulties in interpretation. Taking into account the advantages and limitations of each method, we provide recommendations for nano-QSAR modellers and QSAR model users to be able to determine a proper and efficient methodology to investigate the biological activity of nanoparticles, in order to describe the underlying interactions in the most reliable and useful manner.
Tinti, Michele; Paoluzi, Serena; Santonico, Elena; Masch, Antonia; Schutkowski, Mike
2017-01-01
Reversible tyrosine phosphorylation is a widespread post-translational modification mechanism underlying cell physiology. Thus, understanding the mechanisms responsible for substrate selection by kinases and phosphatases is central to our ability to model signal transduction at a system level. Classical protein-tyrosine phosphatases can exhibit substrate specificity in vivo by combining intrinsic enzymatic specificity with the network of protein-protein interactions, which positions the enzymes in close proximity to their substrates. Here we use a high throughput approach, based on high density phosphopeptide chips, to determine the in vitro substrate preference of 16 members of the protein-tyrosine phosphatase family. This approach helped identify one residue in the substrate binding pocket of the phosphatase domain that confers specificity for phosphopeptides in a specific sequence context. We also present a Bayesian model that combines intrinsic enzymatic specificity and interaction information in the context of the human protein interaction network to infer new phosphatase substrates at the proteome level. PMID:28159843
Optimal control of underactuated mechanical systems: A geometric approach
NASA Astrophysics Data System (ADS)
Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela
2010-08-01
In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.
Bernabeu-Mestre, Josep; Santos, Ana Paula Cid; Pellicer, Josep Xavier Esplugues; Galiana-Sánchez, María Eugenia
2008-01-01
Chlorosis and Neurasthenia are two classical examples of pathological dissociations and the difficulties involved in approaching their diagnosis using scientific-naturalistic criteria. In the realm of those difficulties, the study examines the androcentric viewpoint and the ideological perspective of Contemporary Spanish Medicine when addressing the feminine nature and women's pathologies. Moreover, based on the similarities with present-day pain and fatigue syndromes, the study underlines the need to review the clinical approach to these illnesses by attempting to overcome the existing biomedical limitations.
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
Secure Multiparty Quantum Computation for Summation and Multiplication
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics. PMID:26792197
Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.
Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G
2014-07-01
It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
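The extension described above can be written as a linear system: classical radiosity B = E + ρFB gains a subsurface scattering matrix S, giving B = E + SFB, which an ordinary iterative radiosity solver handles unchanged. A toy three-patch sketch follows; all matrices are placeholders, not a real scene or a measured scattering model.

```python
# Jacobi-style iterative radiosity with a subsurface-scattering matrix S.
import numpy as np

n = 3
E = np.array([1.0, 0.0, 0.0])             # patch emissions
F = np.array([[0.0, 0.3, 0.2],            # form factors (row sums <= 1)
              [0.3, 0.0, 0.25],
              [0.2, 0.25, 0.0]])
S = 0.6 * np.eye(n) + 0.1 * (np.ones((n, n)) - np.eye(n))  # toy scattering

B = E.copy()
for _ in range(100):                      # converges since ||S F|| < 1 here
    B = E + S @ F @ B
print(B)
```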
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states, that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
NASA Astrophysics Data System (ADS)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-01
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
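The time-reversible propagation of initial guesses can be sketched with the Verlet-like auxiliary update common to extended-Lagrangian schemes, where the auxiliary variable is driven harmonically toward the latest converged solution. The stiffness parameter and the scalar "SCF" stand-in below are illustrative, not the integrators benchmarked in the paper.

```python
import numpy as np

def xl_propagate(p_prev, p_curr, q_scf, kappa=1.6):
    """Time-reversible, Verlet-like update of the auxiliary guess p,
    driven harmonically toward the latest converged solution q_scf.
    kappa plays the role of omega^2 * dt^2 (illustrative value)."""
    return 2.0 * p_curr - p_prev + kappa * (q_scf - p_curr)

# toy loop: the 'converged' quantity follows a slowly moving target and the
# auxiliary variable supplies the initial guess for each solve
p_prev, p_curr = 0.0, 0.0
for n in range(1, 50):
    q_scf = np.sin(0.1 * n)         # stand-in for the converged SCF result
    guess = p_curr                  # propagated guess handed to the solver
    p_prev, p_curr = p_curr, xl_propagate(p_prev, p_curr, q_scf)
    # unlike re-using q_scf directly as the next guess, this update has no
    # preferred time direction, so it does not inject a systematic drift
```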
Banerjee, T; Banerjee, S; Sett, S; Ghosh, S; Rakshit, T; Mukhopadhyay, R
2016-01-01
DNA threading intercalators are a unique class of intercalating agents, although little biophysical information is available on their intercalative actions. Herein, the intercalative effects of nogalamycin, which is a naturally-occurring DNA threading intercalator, have been investigated by high-resolution atomic force microscopy (AFM) and spectroscopy (AFS). The results have been compared with those of the well-known chemotherapeutic drug daunomycin, which is a non-threading classical intercalator bearing structural similarity to nogalamycin. A comparative AFM assessment revealed a greater increase in DNA contour length over the entire incubation period of 48 h for nogalamycin treatment, whereas the contour length increase manifested faster in the case of daunomycin. The elastic response of single DNA molecules to an externally applied force was investigated by the single molecule AFS approach. Characteristic mechanical fingerprints in the overstretching behaviour clearly distinguished the nogalamycin/daunomycin-treated dsDNA from untreated dsDNA, the former appearing less elastic than the latter, and the nogalamycin-treated DNA from the daunomycin-treated DNA, the classically intercalated dsDNA appearing the least elastic. A single molecule AFS-based discrimination of threading intercalation from the classical type is being reported for the first time.
What is quantum in quantum randomness?
Grangier, P; Auffèves, A
2018-07-13
It is often said that quantum and classical randomness are of different nature, the former being ontological and the latter epistemological. However, so far the question of 'What is quantum in quantum randomness?', i.e., what is the impact of quantization and discreteness on the nature of randomness, remains to be answered. In a first part, we make explicit the differences between quantum and classical randomness within a recently proposed ontology for quantum mechanics based on contextual objectivity. In this view, quantum randomness is the result of contextuality and quantization. We show that this approach strongly impacts the purposes of quantum theory as well as its areas of application. In particular, it challenges current programmes inspired by classical reductionism, aiming at the emergence of the classical world from a large number of quantum systems. In a second part, we analyse quantum physics and thermodynamics as theories of randomness, unveiling their mutual influences. We finally consider new technological applications of quantum randomness that have opened up in the emerging field of quantum thermodynamics. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'.
Functional integral for non-Lagrangian systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
2010-02-01
A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call “stringy quantization,” is based solely on classical equations of motion and is free of any ambiguity arising from a Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general A-power friction force −κq̇^A. Results for A = 1 are compared with those obtained in the approaches by Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.
A New Approach for Solving the Generalized Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Pop, P. C.; Matei, O.; Sabo, C.
The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be an NP-hard problem and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with existing results. The obtained results point out that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.
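The local-global decomposition admits a compact sketch: for a fixed order of clusters, choosing the best node in each cluster reduces to a shortest-path computation over a layered graph, while the outer search explores cluster orders (the paper uses a genetic algorithm; plain random search stands in for it below). Function names and the toy instance are illustrative.

```python
import math
import random

def best_tour_for_cluster_order(order, clusters, dist):
    """'Local' phase: for a fixed cluster visiting order, pick one node per
    cluster by dynamic programming over the induced layered graph."""
    best = math.inf
    for start in clusters[order[0]]:
        cost = {start: 0.0}                       # cheapest path to each node
        for ci in order[1:]:
            cost = {v: min(cost[u] + dist(u, v) for u in cost)
                    for v in clusters[ci]}
        best = min(best, min(cost[u] + dist(u, start) for u in cost))
    return best

def gtsp_search(clusters, dist, iters=500):
    """'Global' phase stand-in: random search over cluster orders (the
    paper evolves these orders with a genetic algorithm instead)."""
    m = len(clusters)
    return min(best_tour_for_cluster_order(random.sample(range(m), m),
                                           clusters, dist)
               for _ in range(iters))

random.seed(0)  # toy instance: 4 clusters of 3 points in the plane
clusters = [[(random.random() + 2 * i, random.random()) for _ in range(3)]
            for i in range(4)]
dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
print(gtsp_search(clusters, dist))
```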
Factorization approach to superintegrable systems: Formalism and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballesteros, Á., E-mail: angelb@ubu.es; Herranz, F. J., E-mail: fjherranz@ubu.es; Kuru, Ş., E-mail: kuru@science.ankara.edu.tr
2017-03-15
The factorization technique for superintegrable Hamiltonian systems is revisited and applied in order to obtain additional (higher-order) constants of the motion. In particular, the factorization approach to the classical anisotropic oscillator on the Euclidean plane is reviewed, and new classical (super) integrable anisotropic oscillators on the sphere are constructed. The Tremblay–Turbiner–Winternitz system on the Euclidean plane is also studied from this viewpoint.
Illés, Tamás
2011-03-01
The EOS system is a new medical imaging device based on low-dose X-rays, gaseous detectors and dedicated software for 3D reconstruction. It was developed by Nobel prizewinner Georges Charpak. A new concept, the vertebral vector, is used to facilitate the interpretation of EOS data, especially in the horizontal plane. We studied 95 cases of idiopathic scoliosis before and after surgery by means of classical methods and using vertebral vectors, in order to compare the accuracy of the two approaches. The vertebral vector permits simultaneous analysis of the scoliotic curvature in the frontal, sagittal and horizontal planes, as precisely as classical methods. The use of the vertebral vector simplifies and facilitates the interpretation of the mass of information provided by EOS. After analyzing the horizontal data, the first goal of corrective intervention would be to reduce the lateral vertebral deviation. The reduction in vertebral rotation seems less important. This is a new element in the therapeutic management of spinal deformations.
Ab Initio Classical Dynamics Simulations of CO_2 Line-Mixing Effects in Infrared Bands
NASA Astrophysics Data System (ADS)
Lamouroux, Julien; Hartmann, Jean-Michel; Tran, Ha; Snels, Marcel; Stefani, Stefania; Piccioni, Giuseppe
2013-06-01
Ab initio calculations of line-mixing effects in CO_2 infrared bands are presented and compared with experiments. The predictions were carried out using requantized Classical Dynamics Molecular Simulations (rCDMS) based on an approach previously developed and successfully tested for CO_2 isolated line shapes. Using classical dynamics equations, the force and torque applied to each molecule by the surrounding molecules (described by an ab initio intermolecular potential) are computed at each time step. This enables, using a requantization procedure, the prediction of dipole and isotropic polarizability auto-correlation functions whose Fourier-Laplace transforms yield the spectra. The quality of the rCDMS calculations is demonstrated by comparisons with measured spectra in the spectral regions of the 3ν_3 and 2ν_1+2ν_2+ν_3 infrared bands. J.-M. Hartmann, H. Tran, N. H. Ngo, et al., Phys. Rev. A 87 (2013), 013403. H. Tran, C. Boulet, M. Snels, S. Stefani, J. Quant. Spectrosc. Radiat. Transfer 112 (2011), 925-936.
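The last computational step, going from an autocorrelation function to a spectrum, can be sketched generically: estimate the autocorrelation via the Wiener-Khinchin identity and take its one-sided Fourier transform. This is a schematic of that step only, not of the rCDMS machinery; the sampling rate and test signal are invented.

```python
import numpy as np

def autocorrelation(x):
    """Unbiased autocorrelation <x(0)x(t)> via zero-padded FFT
    (Wiener-Khinchin)."""
    n = len(x)
    f = np.fft.fft(x, 2 * n)
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    return acf / np.arange(n, 0, -1)

def spectrum_from_dipole(mu_t, dt):
    """One-sided Fourier transform of the dipole autocorrelation, an
    (unnormalized) stand-in for the Fourier-Laplace step in the text."""
    acf = autocorrelation(mu_t)
    return np.fft.rfftfreq(len(acf), d=dt), np.fft.rfft(acf).real

# toy trajectory: one oscillating dipole component plus noise
t = np.arange(4096) * 0.5
mu = np.cos(2 * np.pi * 0.05 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
freqs, spec = spectrum_from_dipole(mu, dt=0.5)
print(freqs[np.argmax(spec)])   # ~0.05, recovering the underlying line
```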
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to the non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
Sarink, M J; Koelewijn, R; Slingerland, B C G C; Tielens, A G M; van Genderen, P J J; van Hellemond, J J
2018-06-28
Diagnosis of cystic echinococcosis (CE) is at present mainly based on imaging techniques. Serology has a complementary role, partly due to the small number of standardized and commercially available assays. Therefore we examined the clinical performance of the SERION ELISA classic Echinococcus IgG test. Using 10 U/ml as a cut-off point, and serum samples from 50 CE patients and 105 healthy controls, the sensitivity and specificity were 98.0% and 96.2%, respectively. If patients with other infectious diseases were used as negative controls, the specificity decreased to 76.9%, which causes poor positive predictive values. However, if results between 10 and 15 U/ml are classified as indecisive, the specificity of positive results (≥15 U/ml) increased to 92.5% without greatly affecting the sensitivity (92.0%). Using this approach in combination with imaging studies, the SERION ELISA classic Echinococcus IgG test can be a useful aid in the diagnosis of CE.
A random walk approach to quantum algorithms.
Kendon, Vivien M
2006-12-15
The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e., when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed-up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.
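The ballistic spreading mentioned above is easy to reproduce for the standard discrete-time Hadamard walk on a line, a textbook construction rather than anything specific to this review: the standard deviation of the walker's position grows linearly with the number of steps instead of as its square root.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin.
    State is psi[position, coin]; returns the position distribution."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps, 0] = 1.0                       # start at the origin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                       # coin toss
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]          # coin 'up' steps right
        shifted[:-1, 1] = psi[1:, 1]          # coin 'down' steps left
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)

p = hadamard_walk(100)
x = np.arange(-100, 101)
print(np.sqrt((x**2 * p).sum() - (x * p).sum()**2))
# standard deviation ~ 0.5 * steps, versus sqrt(steps) = 10 classically
```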
Adhesive Dimerization of Human P-Cadherin Catalyzed by a Chaperone-like Mechanism.
Kudo, Shota; Caaveiro, Jose M M; Tsumoto, Kouhei
2016-09-06
Orderly assembly of classical cadherins governs cell adhesion and tissue maintenance. A key event is the strand-swap dimerization of the extracellular ectodomains of two cadherin molecules from apposing cells. Here we have determined crystal structures of P-cadherin in six different conformational states to elaborate a motion picture of its adhesive dimerization at the atomic level. The snapshots revealed that cell-adhesive dimerization is facilitated by several intermediate states collectively termed X-dimer in analogy to other classical cadherins. Based on previous studies and on the combined structural, kinetic, thermodynamic, biochemical, and cellular data reported herein, we propose that the adhesive dimerization of human P-cadherin is achieved by a stepwise mechanism analogous to that of assembly chaperones. This mechanism, applicable to type I classical cadherins, confers high specificity and fast association rates. We expect these findings to guide innovative therapeutic approaches targeting P-cadherin in cancer.
NASA Astrophysics Data System (ADS)
Li, Jing; D'Avino, Gabriele; Duchemin, Ivan; Beljonne, David; Blase, Xavier
2018-01-01
We present a novel hybrid quantum/classical approach to the calculation of charged excitations in molecular solids based on the many-body Green's function GW formalism. Molecules described at the GW level are embedded into the crystalline environment modeled with an accurate classical polarizable scheme. This allows the calculation of electron addition and removal energies in the bulk and at crystal surfaces where charged excitations are probed in photoelectron experiments. By considering the paradigmatic case of pentacene and perfluoropentacene crystals, we discuss the different contributions from intermolecular interactions to electronic energy levels, distinguishing between polarization, which is accounted for by combining quantum and classical polarizabilities, and crystal field effects, which can impact energy levels by up to ±0.6 eV. After introducing band dispersion, we achieve quantitative agreement (within 0.2 eV) on the ionization potential and electron affinity measured at pentacene and perfluoropentacene crystal surfaces characterized by standing molecules.
EPRL/FK asymptotics and the flatness problem
NASA Astrophysics Data System (ADS)
Oliveira, José Ricardo
2018-05-01
Spin foam models are an approach to quantum gravity based on the concept of sum over states, which aims to describe quantum spacetime dynamics in a way that its parent framework, loop quantum gravity, has not as of yet succeeded. Since these models’ relation to classical Einstein gravity is not explicit, an important test of their viability is the study of asymptotics: the classical theory should be obtained in a limit where quantum effects are negligible, taken to be the limit of large triangle areas in a triangulated manifold with boundary. In this paper we briefly introduce the EPRL/FK spin foam model and known results about its asymptotics, proceeding then to describe a practical computation of spin foam and semiclassical geometric data for a simple triangulation with only one interior triangle. The results are used to comment on the ‘flatness problem’, a hypothesis raised by Bonzom (2009 Phys. Rev. D 80 064028) suggesting that EPRL/FK’s classical limit only describes flat geometries in vacuum.
Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, R.; Passerini, S.; Vilim, R. B.
In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels while respecting the system physical constraints is presented. In order to achieve higher operational flexibility while ensuring that the implemented control loops do not influence the system inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.
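The Reference Governor idea, filtering a requested set-point so that the closest admissible value is forwarded to the PID layer, can be sketched in scalar form. The constraint predictor below is a hypothetical stand-in for a plant-model check; the paper's scheme operates on the full reactor model.

```python
def reference_governor(v_prev, r, constraints_ok, grid=21):
    """Scalar Reference Governor sketch: move the forwarded set-point as far
    as possible from v_prev toward the request r while a (hypothetical)
    plant-model predictor reports the constraints as satisfied."""
    for i in range(grid - 1, -1, -1):
        v = v_prev + (i / (grid - 1)) * (r - v_prev)
        if constraints_ok(v):
            return v
    return v_prev                      # no admissible move: hold the set-point

# usage: admit only set-points whose predicted steady state stays below a limit
ok = lambda v: 0.8 * v <= 100.0        # hypothetical steady-state constraint
v = 50.0
for request in [200.0, 90.0]:
    v = reference_governor(v, request, ok)
    print(v)                           # 125.0 (saturated), then 90.0
```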
Modified trans-oral approach with an inferiorly based flap.
Al-Holou, Wajd N; Park, Paul; Wang, Anthony C; Than, Khoi D; Marentette, Lawrence J
2010-04-01
The trans-oral approach allows direct access to pathologies of the anterior craniocervical junction. However, the classic midline incision of the posterior pharyngeal wall can be surgically burdensome and limits lateral exposure. We reviewed the medical records of nine patients undergoing the trans-oral approach. The sites of the pathology ranged from the clivus to C2, and surgical exposure ranged from the clivus to C3. Each operation utilized an inferiorly based flap. None of the patients experienced vascular or neurologic complications, and no patient had a cerebrospinal fluid fistula, pseudomeningocele, or meningitis postoperatively. The trans-oral approach with an inferiorly based flap can therefore be safely and effectively performed with minimal oropharyngeal and neurologic morbidity. Not only does a U-shaped flap allow adequate exposure from the lower half of the clivus to C3, a flap improves lateral exposure, provides a clear operating field, and allows superficial mucosal closure not directly overlying the operative field.
Martínez-Fernández, L; Pepino, A J; Segarra-Martí, J; Banyasz, A; Garavelli, M; Improta, R
2016-09-13
The optical spectra of 5-methylcytidine in three different solvents (tetrahydrofuran, acetonitrile, and water) are measured, showing that both the absorption and the emission maxima in water are significantly blue-shifted (0.08 eV). The absorption spectra are simulated based on CAM-B3LYP/TD-DFT calculations, including solvent effects with three different approaches: (i) a hybrid implicit/explicit full quantum mechanical approach, (ii) a mixed QM/MM static approach, and (iii) a QM/MM method exploiting the structures issuing from classical molecular dynamics simulations. Ab initio molecular dynamics simulations based on the CAM-B3LYP functional have also been performed. The adopted approaches all reproduce the main features of the experimental spectra, giving insight into the chemical-physical effects responsible for the solvent shifts in the spectra of 5-methylcytidine and providing the basis for discussing the advantages and limitations of the adopted solvation models.
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
Real time UNIX in embedded control-a case study within the context of LynxOS
NASA Astrophysics Data System (ADS)
Kleines, H.; Zwoll, K.
1996-02-01
Intelligent communication controllers for a layered protocol profile are a typical example of an embedded control application, where the classical approach to software development is based on a proprietary real-time operating system kernel under which the individual layers are implemented as tasks. Based on the exemplary implementation of a derivative of MAP 3.0, an unusual and innovative approach is presented in which the protocol software is implemented under the UNIX-compatible real-time operating system LynxOS. The overall design of the embedded control application is presented from a more general viewpoint, and economic implications as well as aspects of the development environment and performance are discussed.
Approximation of Nash equilibria and the network community structure detection problem
2017-01-01
Game theory-based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization-based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better than or just as good as other state-of-the-art methods. PMID:28467496
Linear and nonlinear spectroscopy from quantum master equations.
Fetherolf, Jonathan H; Berkelbach, Timothy C
2017-12-28
We investigate the accuracy of the second-order time-convolutionless (TCL2) quantum master equation for the calculation of linear and nonlinear spectroscopies of multichromophore systems. We show that even for systems with non-adiabatic coupling, the TCL2 master equation predicts linear absorption spectra that are accurate over an extremely broad range of parameters and well beyond what would be expected based on the perturbative nature of the approach; non-equilibrium population dynamics calculated with TCL2 for identical parameters are significantly less accurate. For third-order (two-dimensional) spectroscopy, the importance of population dynamics and the violation of the so-called quantum regression theorem degrade the accuracy of TCL2 dynamics. To correct these failures, we combine the TCL2 approach with a classical ensemble sampling of slow microscopic bath degrees of freedom, leading to an efficient hybrid quantum-classical scheme that displays excellent accuracy over a wide range of parameters. In the spectroscopic setting, the success of such a hybrid scheme can be understood through its separate treatment of homogeneous and inhomogeneous broadening. Importantly, the presented approach has the computational scaling of TCL2, with the modest addition of an embarrassingly parallel prefactor associated with ensemble sampling. The presented approach can be understood as a generalized inhomogeneous cumulant expansion technique, capable of treating multilevel systems with non-adiabatic dynamics.
The quantum realm of the ''Little Sibling'' of the Big Rip singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albarran, Imanol; Bouhmadi-López, Mariam; Cabral, Francisco
We analyse the quantum behaviour of the ''Little Sibling'' of the Big Rip singularity (LSBR) [1]. The quantisation is carried out within the geometrodynamical approach given by the Wheeler-DeWitt (WDW) equation. The classical model is based on a Friedmann-Lemaître-Robertson-Walker Universe filled by a perfect fluid that can be mapped to a scalar field with phantom character. We analyse the WDW equation in two setups. In the first step, we consider the scale factor as the single degree of freedom, which from a classical perspective parametrises both the geometry and the matter content given by the perfect fluid. We then solve the WDW equation within a WKB approximation, for two factor ordering choices. In the second approach, we consider the WDW equation with two degrees of freedom: the scale factor and a scalar field. We solve the WDW equation, with the Laplace-Beltrami factor-ordering, using a Born-Oppenheimer approximation. In both approaches, we impose the DeWitt (DW) condition as a potential criterion for singularity avoidance. We conclude that in all the cases analysed the DW condition can be verified, which might be an indication that the LSBR can be avoided or smoothed in the quantum approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao Xiaoqiang; Wang Hongfu; Zhang Shou
We present an approach for implementation of a 1->3 orbital state quantum cloning machine based on the quantum Zeno dynamics via manipulating three rf superconducting quantum interference device (SQUID) qubits to resonantly interact with a superconducting cavity assisted by classical fields. Through appropriate modulation of the coupling constants between rf SQUIDs and classical fields, the quantum cloning machine can be realized within one step. We also discuss the effects of decoherence such as spontaneous emission and the loss of cavity by virtue of the master equation. The numerical simulation result reveals that the quantum cloning machine is especially robust against the cavity decay, since all qubits evolve in the decoherence-free subspace with respect to cavity decay due to the quantum Zeno dynamics.
A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.
Gallardo, José E
2012-01-01
The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to efficiently solve this problem using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use, at different levels, of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but required execution times may increase considerably.
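A probabilistic beam search for this problem can be sketched as follows: each partial supersequence tracks how far it has consumed every input string, candidate letters are scored by how many strings they advance (the classic majority-merge heuristic), and survivors are sampled rather than selected greedily. This is a simplified single-level sketch, not the multilevel algorithm of the paper.

```python
import random

def is_supersequence(s, t):
    it = iter(s)
    return all(c in it for c in t)     # t is a subsequence of s

def scs_probabilistic_beam(strings, beam_width=10):
    """Single-level probabilistic Beam Search sketch for the shortest common
    supersequence: states track how far each input string is consumed,
    letters are scored by how many strings they advance (majority merge),
    and survivors are sampled in proportion to that score."""
    alphabet = sorted(set(''.join(strings)))
    beam = [((0,) * len(strings), '')]
    while True:
        done = [s for pos, s in beam
                if all(p == len(t) for p, t in zip(pos, strings))]
        if done:
            return min(done, key=len)
        candidates = []
        for pos, built in beam:
            for c in alphabet:
                new_pos = tuple(p + (p < len(t) and t[p] == c)
                                for p, t in zip(pos, strings))
                advanced = sum(n > p for n, p in zip(new_pos, pos))
                if advanced:               # never emit a useless letter
                    candidates.append((advanced, new_pos, built + c))
        picks = random.choices(candidates,
                               weights=[a for a, _, _ in candidates],
                               k=min(beam_width, len(candidates)))
        beam = [(pos, built) for _, pos, built in picks]

random.seed(1)
result = scs_probabilistic_beam(['abcbdab', 'bdcaba'])
assert all(is_supersequence(result, s) for s in ['abcbdab', 'bdcaba'])
print(result, len(result))   # a valid supersequence; the optimum here is 9
```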
Quantum correction to classical gravitational interaction between two polarizable objects
NASA Astrophysics Data System (ADS)
Wu, Puxun; Hu, Jiawei; Yu, Hongwei
2016-12-01
When gravity is quantized, there inevitably exist quantum gravitational vacuum fluctuations which induce quadrupole moments in gravitationally polarizable objects and produce a quantum correction to the classical Newtonian interaction between them. Here, based upon linearized quantum gravity and leading-order perturbation theory, we study, from a quantum field-theoretic perspective, this quantum correction between a pair of gravitationally polarizable objects treated as two-level harmonic oscillators. We find that the interaction potential behaves like r^{-11} in the retarded regime and r^{-10} in the near regime. Our result agrees with results recently obtained through different approaches. Our study seems to indicate that linearized quantum gravity is robust in dealing with quantum gravitational effects at low energies.
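In the notation of the abstract, the reported scaling of the quantum-corrected interaction can be written as follows, with C_ret and C_near unspecified positive constants depending on the objects' gravitational polarizabilities, and the attractive sign assumed here by analogy with the electromagnetic Casimir-Polder interaction:

```latex
V_{\mathrm{corr}}(r) \;\propto\;
\begin{cases}
  -\,C_{\mathrm{ret}}\,  r^{-11}, & \text{retarded regime},\\
  -\,C_{\mathrm{near}}\, r^{-10}, & \text{near regime}.
\end{cases}
```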
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Paul R., E-mail: prhorn@berkeley.edu; Mao, Yuezhi; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu
In energy decomposition analysis of Kohn-Sham density functional theory calculations, the so-called frozen (or pre-polarization) interaction energy contains contributions from permanent electrostatics, dispersion, and Pauli repulsion. The standard classical approach to separating them suffers from several well-known limitations. We introduce an alternative scheme that employs valid antisymmetric electronic wavefunctions throughout and is based on the identification of individual fragment contributions to the initial supersystem wavefunction as determined by an energetic optimality criterion. The density deformations identified with individual fragments upon formation of the initial supersystem wavefunction are analyzed along with the distance dependence of the new and classical terms for test cases that include the neon dimer, ammonia borane, water-Na+, water-Cl−, and the naphthalene dimer.
Depeursinge, Adrien; Chin, Anne S.; Leung, Ann N.; Terrone, Donato; Bristow, Michael; Rosen, Glenn; Rubin, Daniel L.
2014-01-01
Objectives: We propose a novel computational approach for the automated classification of classic versus atypical usual interstitial pneumonia (UIP). Materials and Methods: 33 patients with UIP were enrolled in this study. They were classified as classic versus atypical UIP by a consensus of two thoracic radiologists with more than 15 years of experience using the American Thoracic Society evidence-based guidelines for CT diagnosis of UIP. Two cardiothoracic fellows with one year of subspecialty training provided independent readings. The system is based on regional characterization of the morphological tissue properties of lung using volumetric texture analysis of multiple detector CT images. A simple digital atlas with 36 lung subregions is used to locate texture properties, from which the responses of multi-directional Riesz wavelets are obtained. Machine learning is used to aggregate and to map the regional texture attributes to a simple score that can be used to stratify patients with UIP into classic and atypical subtypes. Results: We compared the predictions based on regional volumetric texture analysis with the ground truth established by expert consensus. The area under the receiver operating characteristic curve of the proposed score was estimated to be 0.81 using a leave-one-patient-out cross-validation, with high specificity for classic UIP. The performance of our automated method was found to be similar to that of the two fellows and to the agreement between experienced chest radiologists reported in the literature. However, the errors of our method and the fellows occurred on different cases, which suggests that combining human and computerized evaluations may be synergistic. Conclusions: Our results are encouraging and suggest that an automated system may be useful in routine clinical practice as a diagnostic aid for identifying patients with complex lung disease such as classic UIP, obviating the need for invasive surgical lung biopsy and its associated risks. PMID:25551822
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
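Regression calibration, the most common correction named above, is simple to sketch: a calibration substudy with a reference instrument is used to learn E[X|Z], and the outcome is then regressed on the imputed exposure. The instrument roles and the simulated sizes below are illustrative.

```python
import numpy as np

def regression_calibration(z_main, y_main, z_cal, x_cal):
    """Two-step regression calibration sketch.
    z: error-prone instrument (e.g., a food-frequency questionnaire),
    x: reference measurement available only in the calibration substudy,
    y: health outcome."""
    b, a = np.polyfit(z_cal, x_cal, 1)      # step 1: learn E[X|Z] = a + b*z
    x_hat = a + b * z_main                  # impute exposure in the main study
    beta, _ = np.polyfit(x_hat, y_main, 1)  # step 2: outcome model
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                        # true intake
z = x + rng.normal(size=5000)                    # classical measurement error
y = 0.5 * x + rng.normal(scale=0.2, size=5000)   # outcome with true slope 0.5
naive, _ = np.polyfit(z, y, 1)                   # attenuated, ~0.25 here
corrected = regression_calibration(z, y, z[:500], x[:500])
print(naive, corrected)                          # corrected is close to 0.5
```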
Cotton, Stephen J.; Miller, William H.
2016-10-14
Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.
Relational Contract: Applicable to Department of Defense Contracts
1989-12-01
examine the evolution of contract law and, in particular, the role of contractual incompleteness in exchange relationships. 2.1.1. The Classical Approach... Classical contract law facilitates exchange by separately detailing all aspects of the contracting process at the outset by prespecification of all... modifications after contractual performance has begun. According to Williamson (1979), classical contract law implements prespecification through legal...
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs, if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show, that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error.
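For the purely classical part of the error model, the method-of-moments correction reduces to dividing the naive slope by the reliability ratio; pure Berkson error, by contrast, leaves the slope approximately unattenuated, which is why the mixture and autocorrelated cases treated in the paper need extended formulas. A minimal sketch, assuming the error variance is known:

```python
import numpy as np

def moment_corrected_slope(w, y, sigma_u2):
    """Method-of-moments deattenuation for classical error W = X + U with
    known error variance sigma_u2: divide the naive slope by the
    reliability ratio lambda = (Var(W) - sigma_u2) / Var(W)."""
    naive, _ = np.polyfit(w, y, 1)
    lam = (np.var(w, ddof=1) - sigma_u2) / np.var(w, ddof=1)
    return naive / lam

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                       # true exposure
w = x + rng.normal(scale=0.7, size=4000)        # classical error, variance 0.49
y = 1.0 * x + rng.normal(scale=0.3, size=4000)  # true slope 1.0
print(moment_corrected_slope(w, y, sigma_u2=0.49))   # close to 1.0
```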
Making Classical Conditioning Understandable through a Demonstration Technique.
ERIC Educational Resources Information Center
Gibb, Gerald D.
1983-01-01
One lemon, an assortment of other fruits and vegetables, a tennis ball, and a Galvanic Skin Response meter are needed to implement this approach to teaching about classical conditioning in introductory psychology courses. (RM)
2013-01-01
The Diagnostic and Statistical Manual of Mental Disorders (DSM) is universally acknowledged as the prominent reference textbook for the diagnosis and assessment of psychiatric diseases. However, since the publication of its first version in 1952, controversies have been raised concerning its reliability and validity, and the need for other novel clinical tools has emerged. Currently the DSM is in its fourth edition, and a new fifth edition is expected for release in 2013, amid an intense intellectual debate and a call for new proposals. Since 1952, psychiatry has undergone many changes and is emerging as a unique field in the medical area in which a novel approach is being demanded for properly treating patients: not the classical “one-size-fits-all” approach, but a more targeted and tailored diagnosis and therapeutics, taking into account the complex interactions among genes and their products, environment, culture and the psychological apparatus of the subject. OMICS sciences, being based on high-throughput technologies, are systems biology-related fields (like genomics, proteomics, transcriptomics and so on). In the frame of P5 medicine (personalized, participatory, predictive, preventive, psycho-cognitive), they could establish links between psychiatric diseases, which are disorders with a final common symptomatology but vastly heterogeneous biological, environmental and sociological underpinnings; by understanding psychiatric diseases beyond their classic symptomatic or syndromal definitions using OMICS research, one can obtain a broader picture, with unprecedented links and a reclassification of psychiatric nosology. Importantly, by understanding the basis of heterogeneity in diseases through OMICS research, one could also personalize the treatment of psychiatric illnesses. In this manuscript, we discuss a gap in current psychiatric research, namely the missing logical link among OMICS, personalized medicine and reclassification of diseases. Moreover, we explore the importance of incorporating OMICS-based quantitative dimensional criteria, besides the classical qualitative and categorical approach. PMID:23849623
Optical rectenna operation: where Maxwell meets Einstein
NASA Astrophysics Data System (ADS)
Joshi, Saumil; Moddel, Garret
2016-07-01
Optical rectennas are antenna-coupled diode rectifiers that receive and convert optical-frequency electromagnetic radiation into DC output. The analysis of rectennas is carried out either classically using Maxwell’s wave-like approach, or quantum-mechanically using Einstein’s particle-like approach for electromagnetic radiation. One of the characteristics of classical operation is that multiple photons transfer their energy to individual electrons, whereas in quantum operation each photon transfers its energy to each electron. We analyze the correspondence between the two approaches by comparing rectenna response first to monochromatic illumination obtained using photon-assisted tunnelling theory and classical theory. Applied to broadband rectenna operation, this correspondence provides clues to designing a rectenna solar cell that has the potential to exceed the 44% quantum-limited conversion efficiency. The comparison of operating regimes shows how optical rectenna operation differs from microwave rectenna operation.
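The quantum (photon-by-photon) side of this correspondence is commonly described by Tien-Gordon-style photon-assisted tunnelling, in which the illuminated DC current is a Bessel-weighted sum of the dark I-V curve sampled at photon-energy offsets; in the many-photon limit the sum approaches the classical time-averaged rectification. The diode curve and drive parameters below are toy values, not the paper's device model.

```python
import numpy as np
from scipy.special import jv

def photon_assisted_iv(i_dark, v_dc, v_omega, hw_over_e, n_max=20):
    """Tien-Gordon-style photon-assisted tunnelling: the illuminated DC
    current is a Bessel-weighted sum of the dark I-V sampled at offsets of
    one photon energy; hw_over_e is hbar*omega/e expressed in volts."""
    alpha = v_omega / hw_over_e            # e*V_omega / (hbar*omega)
    return sum(jv(n, alpha) ** 2 * i_dark(v_dc + n * hw_over_e)
               for n in range(-n_max, n_max + 1))

# toy diode (0.25 V scale chosen to keep this toy sum well-behaved)
i_dark = lambda v: 1e-6 * (np.exp(v / 0.25) - 1.0)
# rectified DC current at zero bias under illumination
print(photon_assisted_iv(i_dark, v_dc=0.0, v_omega=0.05, hw_over_e=0.5))
# as hw_over_e -> 0, the sum goes over to the classical time-averaged response
```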
Valeriani, Federica; Agodi, Antonella; Casini, Beatrice; Cristina, Maria Luisa; D'Errico, Marcello Mario; Gianfranceschi, Gianluca; Liguori, Giorgio; Liguori, Renato; Mucci, Nicolina; Mura, Ida; Pasquarella, Cesira; Piana, Andrea; Sotgiu, Giovanni; Privitera, Gaetano; Protano, Carmela; Quattrocchi, Annalisa; Ripabelli, Giancarlo; Rossini, Angelo; Spagnolo, Anna Maria; Tamburro, Manuela; Tardivo, Stefano; Veronesi, Licia; Vitali, Matteo; Romano Spica, Vincenzo
2018-02-01
Reprocessing of endoscopes is key to preventing cross-infection after colonoscopy. Culture-based methods are recommended for monitoring, but alternative and rapid approaches are needed to improve surveillance and reduce turnover times. A molecular strategy based on detection of residual traces from gut microbiota was developed and tested using a multicenter survey. A simplified sampling and DNA extraction protocol using nylon-tipped flocked swabs was optimized. A multiplex real-time polymerase chain reaction (PCR) test was developed that targeted 6 bacterial genes that were amplified in 3 mixes. The method was validated by interlaboratory tests involving 5 reference laboratories. Colonoscopy devices (n = 111) were sampled in 10 Italian hospitals. Culture-based microbiology and metagenomic tests were performed to verify PCR data. The sampling method was easily applied in all 10 endoscopy units and the optimized DNA extraction and amplification protocol was successfully performed by all of the involved laboratories. This PCR-based method allowed identification of both contaminated (n = 59) and fully reprocessed endoscopes (n = 52) with high sensitivity (98%) and specificity (98%), within 3-4 hours, in contrast to the 24-72 hours needed for a classic microbiology test. Results were confirmed by next-generation sequencing and classic microbiology. A novel approach for monitoring reprocessing of colonoscopy devices was developed and successfully applied in a multicenter survey. The general principle of tracing biological fluids through microflora DNA amplification was successfully applied and may represent a promising approach for hospital hygiene.
Issues of Dynamic Coalition Formation Among Rational Agents
2002-04-01
approaches to forming stable coalitions among rational agents. Issues and problems of dynamic coalition environments are discussed in section 3, while... 2.1.2 Coalition Algorithm, Coalition Formation Environment and Model: rational agents which are involved in a co-operative game (A,v) are... a publicly available simulation environment for coalition formation among rational information agents based on selected classic coalition theories is, for...
Baryogenesis via leptonic CP-violating phase transition
NASA Astrophysics Data System (ADS)
Pascoli, Silvia; Turner, Jessica; Zhou, Ye-Ling
2018-05-01
We propose a new mechanism to generate a lepton asymmetry based on the vacuum CP-violating phase transition (CPPT). This approach differs from classical thermal leptogenesis in that a specific seesaw model, and its UV completion, need not be specified. The lepton asymmetry is generated via the dynamically realised coupling of the Weinberg operator during the phase transition. This mechanism provides a connection with low-energy neutrino observables.
Rosetta Phase II: Measuring and Interpreting Cultural Differences in Cognition
2008-07-31
approaches are used to capture culture. First, anthropology and psychiatry adopt research methods that focus on specific groups or individuals... Classical anthropology provides information about behaviors, customs, social roles, and social rules based on extended and intense observation of single... This training goes beyond rules and procedures so that military personnel can see events through the eyes of adversaries or host nationals. They must...
ATAC Autocuer Modeling Analysis.
1981-01-01
the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum... continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical... the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of...
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Multi-Temporal Land Cover Classification with Long Short-Term Memory Neural Networks
NASA Astrophysics Data System (ADS)
Rußwurm, M.; Körner, M.
2017-05-01
Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observations at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and thus cannot be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. Building on the extensive research on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach that utilizes these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
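As an illustration of the kind of model the abstract describes, the following is a minimal PyTorch sketch of an LSTM classifier over per-pixel time series of spectral bands; the band count, class count, and sequence length are hypothetical stand-ins, not values taken from the study.

```python
import torch
import torch.nn as nn

class CropLSTM(nn.Module):
    """Minimal LSTM classifier over per-pixel time series of spectral bands."""
    def __init__(self, n_bands=13, hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time steps, bands)
        _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the season
        return self.head(h_n[-1])      # class logits

# Toy usage: 8 pixels, 26 acquisition dates, 13 Sentinel-2-like bands.
model = CropLSTM()
logits = model(torch.randn(8, 26, 13))
print(logits.shape)                    # torch.Size([8, 10])
```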
Two-layer symbolic representation for stochastic models with phase-type distributed events
NASA Astrophysics Data System (ADS)
Longo, Francesco; Scarpa, Marco
2015-07-01
Among the techniques that have been proposed for the analysis of non-Markovian models, the state space expansion approach showed great flexibility in terms of modelling capacities. The principal drawback is the explosion of the state space. This paper proposes a two-layer symbolic method for efficiently storing the expanded reachability graph of a non-Markovian model in the case in which continuous phase-type distributions are associated with the firing times of system events, and different memory policies are considered. At the lower layer, the reachability graph is symbolically represented in the form of a set of Kronecker matrices, while, at the higher layer, all the information needed to correctly manage event memory is stored in a multi-terminal multi-valued decision diagram. This information is collected by applying a symbolic algorithm based on two theorems. The efficiency of the proposed approach, in terms of memory occupation and execution time, is shown by applying it to a set of non-Markovian stochastic Petri nets and comparing it with a classical explicit expansion algorithm. Moreover, a comparison with a classical symbolic approach is performed whenever possible.
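To illustrate the memory argument behind the Kronecker lower layer, here is a toy sketch (not the authors' algorithm) showing that the joint generator of two independent components can be stored as small Kronecker factors and applied factor by factor, without ever forming the flat matrix:

```python
import numpy as np

# Local transition-rate factors of two hypothetical independent components.
A = np.array([[-1.0, 1.0], [2.0, -2.0]])
B = np.array([[-0.5, 0.5], [0.3, -0.3]])

# Flat (explicit) representation of the joint process: kron(A, I) + kron(I, B).
flat = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)

# Factored storage never forms the 4x4 matrix: a matrix-vector product is
# applied factor by factor, which is what keeps symbolic methods compact.
def kron_matvec(A, B, v):
    V = v.reshape(A.shape[0], B.shape[0])
    return (A @ V + V @ B.T).ravel()

v = np.arange(4.0)
print(np.allclose(flat @ v, kron_matvec(A, B, v)))   # True
```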
Fining of Red Wine Monitored by Multiple Light Scattering.
Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo
2017-07-12
This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) was fitted by a multivariate kinetic model built on a three-step mechanism comprising (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. Based on the rate constant of the first step, the results showed that gelatin was the most efficient fining agent with respect to the main objective, the clarification of the wine and the consequent increase in its limpidity. This trend was also discussed in relation to the results achieved by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Also, higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) sped up the process. Finally, the advantage of using the whole spectral signal vs classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, the multiple light scattering technique showed great potential for studying fining processes compared to classical univariate approaches.
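A minimal sketch of how such a three-step kinetic chain can be fitted to a backscattering decay with SciPy; the state wiring, initial conditions, and all numbers below are hypothetical stand-ins for the paper's multivariate model.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Toy chain: colloid-fining complexes --k1--> aggregates --k2--> settling
# flocs --k3--> sediment; backscattering tracks total suspended matter.
def suspended(t, k1, k2, k3):
    def rhs(y, _t):
        c, a, f = y
        return [-k1 * c, k1 * c - k2 * a, k2 * a - k3 * f]
    sol = odeint(rhs, [1.0, 0.0, 0.0], t)
    return sol.sum(axis=1)

t = np.linspace(0.0, 48.0, 40)                    # hours (hypothetical)
obs = suspended(t, 0.30, 0.12, 0.05)
obs = obs + np.random.default_rng(0).normal(0.0, 0.01, t.size)

(k1, k2, k3), _ = curve_fit(suspended, t, obs, p0=[0.1, 0.1, 0.1])
print(round(k1, 3), round(k2, 3), round(k3, 3))   # recovered rate constants
```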
Engel, Hamutal; Doron, Dvir; Kohen, Amnon; Major, Dan Thomas
2012-04-10
The inclusion of nuclear quantum effects such as zero-point energy and tunneling is of great importance in studying condensed phase chemical reactions involving the transfer of protons, hydrogen atoms, and hydride ions. In the current work, we derive an efficient quantum simulation approach for the computation of the momentum distribution in condensed phase chemical reactions. The method is based on a quantum-classical approach wherein quantum and classical simulations are performed separately. The classical simulations use standard sampling techniques, whereas the quantum simulations employ an open polymer chain path integral formulation which is computed using an efficient Monte Carlo staging algorithm. The approach is validated by applying it to a one-dimensional harmonic oscillator and symmetric double-well potential. Subsequently, the method is applied to the dihydrofolate reductase (DHFR) catalyzed reduction of 7,8-dihydrofolate by nicotinamide adenine dinucleotide phosphate hydride (NADPH) to yield S-5,6,7,8-tetrahydrofolate and NADP(+). The key chemical step in the catalytic cycle of DHFR involves a stereospecific hydride transfer. In order to estimate the amount of quantum delocalization, we compute the position and momentum distributions for the transferring hydride ion in the reactant state (RS) and transition state (TS) using a recently developed hybrid semiempirical quantum mechanics-molecular mechanics potential energy surface. Additionally, we examine the effect of compression of the donor-acceptor distance (DAD) in the TS on the momentum distribution. The present results suggest differential quantum delocalization in the RS and TS, as well as reduced tunneling upon DAD compression.
A multiple biomarker risk score for guiding clinical decisions using a decision curve approach.
Hughes, Maria F; Saarela, Olli; Blankenberg, Stefan; Zeller, Tanja; Havulinna, Aki S; Kuulasmaa, Kari; Yarnell, John; Schnabel, Renate B; Tiret, Laurence; Salomaa, Veikko; Evans, Alun; Kee, Frank
2012-08-01
We assessed whether a cardiovascular risk model based on classic risk factors (e.g. cholesterol, blood pressure) could refine disease prediction if it included novel biomarkers (C-reactive protein, N-terminal pro-B-type natriuretic peptide, troponin I) using a decision curve approach, which can incorporate clinical consequences. We evaluated whether a model including biomarkers and classic risk factors could improve prediction of 10 year risk of cardiovascular disease (CVD; chronic heart disease and ischaemic stroke) against a classic risk factor model using a decision curve approach in two prospective MORGAM cohorts. This included 7739 men and women with 457 CVD cases from the FINRISK97 cohort, and 2524 men with 259 CVD cases from PRIME Belfast. The biomarker model improved disease prediction in FINRISK across the high-risk group (20-40%) but not in the intermediate-risk group; at the 23% risk threshold, the net benefit was 0.0033 (95% CI 0.0013-0.0052). However, in PRIME Belfast the net benefit of decisions guided by the decision curve was improved across intermediate risk thresholds (10-20%). At p(t) = 10% in PRIME, the net benefit was 0.0059 (95% CI 0.0007-0.0112), with a net increase of 6 true positive cases per 1000 people screened and a net decrease of 53 false positive cases per 1000, potentially leading to 5% fewer treatments in patients not destined for an event. The biomarker model improves 10-year CVD prediction at intermediate and high-risk thresholds and, in particular, could be clinically useful in advising middle-aged European males of their CVD risk.
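The net benefit underlying a decision curve is the standard quantity NB(pt) = TP/n - (FP/n) * pt/(1-pt), evaluated at a chosen risk threshold pt. A self-contained sketch on simulated risks follows; the cohort and all numbers are hypothetical, not the MORGAM data.

```python
import numpy as np

def net_benefit(risk, event, pt):
    """Net benefit of treating everyone whose predicted risk exceeds pt."""
    n = len(event)
    treat = risk >= pt
    tp = np.sum(treat & (event == 1))
    fp = np.sum(treat & (event == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Hypothetical cohort: 1000 subjects with ~14% average event probability.
rng = np.random.default_rng(1)
risk = rng.beta(2, 12, 1000)            # model-predicted 10-year CVD risks
event = rng.binomial(1, risk)           # outcomes drawn consistent with the risks
for pt in (0.10, 0.20, 0.23):
    print(pt, round(net_benefit(risk, event, pt), 4))
```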
A novel approach to the theory of homogeneous and heterogeneous nucleation.
Ruckenstein, Eli; Berim, Gersh O; Narsimhan, Ganesan
2015-01-01
A new approach to the theory of nucleation, formulated relatively recently by Ruckenstein, Narsimhan, and Nowakowski (see Refs. [7-16]) and developed further by Ruckenstein and other colleagues, is presented. In contrast to the classical nucleation theory, which is based on calculating the free energy of formation of a cluster of the new phase as a function of its size on the basis of macroscopic thermodynamics, the proposed theory uses the kinetic theory of fluids to calculate the condensation (W(+)) and dissociation (W(-)) rates on and from the surface of the cluster, respectively. The dissociation rate of a monomer from a cluster is evaluated from the average time spent by a surface monomer in the potential well, as obtained from the solution of the Fokker-Planck equation in the phase space of position and momentum for the liquid-to-solid transition and the phase space of energy for the vapor-to-liquid transition. The condensation rates are calculated using traditional expressions. The knowledge of those two rates allows one to calculate the size of the critical cluster from the equality W(+)=W(-), as well as the rate of nucleation. The developed microscopic approach avoids the controversial application of classical thermodynamics to the description of nuclei which contain only a few molecules. The new theory was applied to a number of cases, such as the liquid-to-solid and vapor-to-liquid phase transitions, binary nucleation, heterogeneous nucleation, nucleation on soluble particles and protein folding. The theory predicts higher nucleation rates at high saturation ratios (small critical clusters) than the classical nucleation theory for both liquid-to-solid and vapor-to-liquid transitions. As expected, at low saturation ratios, for which the size of the critical cluster is large, the results of the new theory are consistent with those of the classical one. The present approach was combined with the density functional theory to account for the density profile in the cluster. This approach was also applied to protein folding, viewed as the evolution of a cluster of native residues of spherical shape within a protein molecule, which could explain protein folding/unfolding and their dependence on temperature.
ERIC Educational Resources Information Center
Masciantonio, Rudolph; And Others
A humanistic approach to the study of classical Greek and Greek culture at the secondary school level is detailed in this guide. References to the student programmed text and other multisensory instructional materials used in the system focus on instructional objectives geared to students who are not necessarily college-bound. The standard Attic…
Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy
NASA Astrophysics Data System (ADS)
Provazza, Justin; Coker, David F.
2018-05-01
The symmetrical quasi-classical approach for propagation of a many-degree-of-freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two-state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short-time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.
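The final step of such a calculation, turning a dipole autocorrelation function into a linear spectrum via a Fourier transform, can be sketched generically as follows; the damped two-line correlation function is a hypothetical stand-in for what a dynamics method would produce, not the paper's model.

```python
import numpy as np

# Hypothetical dipole autocorrelation: two damped lines at omega = 1.0 and 1.6
# (a stand-in for the correlation function a dynamics simulation would yield).
dt, n = 0.05, 4096
t = np.arange(n) * dt
C = np.exp(-t / 8.0) * (np.exp(1j * 1.0 * t) + 0.5 * np.exp(1j * 1.6 * t))

# Linear spectrum ~ real part of the Fourier transform of C(t).
spec = (np.fft.fft(C) * dt).real
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
print(omega[np.argmax(spec)])   # peak near omega = 1.0, the stronger line
```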
New approaches to addiction treatment based on learning and memory.
Kiefer, Falk; Dinter, Christina
2013-01-01
Preclinical studies suggest that physiological learning processes are similar to changes observed in addicts at the molecular, neuronal, and structural levels. Based on the importance of classical and instrumental conditioning in the development and maintenance of addictive disorders, many have suggested cue-exposure-based extinction training of conditioned, drug-related responses as a potential new treatment of addiction. It may also be possible to facilitate this extinction training with pharmacological compounds that strengthen memory consolidation during cue exposure. Another potential therapeutic intervention would be based on the so-called reconsolidation theory. According to this hypothesis, already-consolidated memories return to a labile state when reactivated, allowing them to undergo another phase of consolidation-reconsolidation, which can be pharmacologically manipulated. These approaches suggest that the extinction of drug-related memories may represent a viable treatment strategy in the future treatment of addiction.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
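A minimal sketch of the rank-based pipeline described here, assuming the standard elliptical-copula bridge R_ij = sin(pi/2 * tau_ij) from Kendall's tau followed by a graphical-lasso sparsification (scikit-learn); the data and penalty are hypothetical, and this is an illustration rather than the authors' estimator.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 6))      # hypothetical daily return series
X[:, 1] += 0.8 * X[:, 0]               # make two "stocks" dependent

# Rank-based latent correlation for elliptical copulas: R_ij = sin(pi/2 * tau_ij).
p = X.shape[1]
R = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        tau, _ = kendalltau(X[:, i], X[:, j])
        R[i, j] = R[j, i] = np.sin(np.pi * tau / 2)

# Regularized sparse precision matrix = conditional-independence graph;
# near-independent stocks show (near-)zero off-diagonal entries.
_, precision = graphical_lasso(R, alpha=0.2)
print(np.round(precision, 2))
```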
Menon, Shakti N; Hall, Cameron L; McCue, Scott W; McElwain, D L Sean
2017-10-01
The mechanical behaviour of solid biological tissues has long been described using models based on classical continuum mechanics. However, the classical continuum theories of elasticity and viscoelasticity cannot easily capture the continual remodelling and associated structural changes in biological tissues. Furthermore, models drawn from plasticity theory are difficult to apply and interpret in this context, where there is no equivalent of a yield stress or flow rule. In this work, we describe a novel one-dimensional mathematical model of tissue remodelling based on the multiplicative decomposition of the deformation gradient. We express the mechanical effects of remodelling as an evolution equation for the effective strain, a measure of the difference between the current state and a hypothetical mechanically relaxed state of the tissue. This morphoelastic model combines the simplicity and interpretability of classical viscoelastic models with the versatility of plasticity theory. A novel feature of our model is that while most models describe growth as a continuous quantity, here we begin with discrete cells and develop a continuum representation of lattice remodelling based on an appropriate limit of the behaviour of discrete cells. To demonstrate the utility of our approach, we use this framework to capture qualitative aspects of the continual remodelling observed in fibroblast-populated collagen lattices, in particular its contraction and its subsequent sudden re-expansion when remodelling is interrupted.
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory except that the definition of a probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented on the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517
The Tensile Strength of Liquid Nitrogen
NASA Astrophysics Data System (ADS)
Huang, Jian
1992-01-01
The tensile strength of liquids has been a puzzling subject. On the one hand, the classical nucleation theory has met great success in predicting the nucleation rates of superheated liquids. On the other hand, most of the reported experimental values of the tensile strength for different liquids are far below the prediction of the classical nucleation theory. In this study, homogeneous nucleation in liquid nitrogen and its tensile strength have been investigated. Different approaches for determining the pressure amplitude were studied carefully. It is shown that Raman-Nath theory, as modified by the introduction of an effective interaction length, can be used to determine the pressure amplitude in the focal plane of a focusing ultrasonic transducer. The results obtained from different diffraction orders are consistent and in good agreement with other approaches, including Debye's theory and solving the KZK equation. The measurement of the tensile strength was carried out in a high-pressure stainless steel dewar. A high-intensity ultrasonic wave was focused into a small volume of liquid nitrogen in a short time period. A probe laser beam passes through the focal region of a concave spherical transducer with a small aperture angle, and the transmitted light is detected with a photodiode. The pressure amplitude at the focus is calculated based on the acoustic power radiated into the liquid. In the experiment, the electrical signal on the transducer is gated at its resonance frequency with gate widths of 20 μs to 0.2 ms and temperatures ranging from 77 K to near 100 K. The calculated pressure amplitude is in agreement with the prediction of classical nucleation theory for nucleation rates from 10^6 to 10^11 bubbles/(cm^3 s). This work provides experimental evidence that the validity of the classical nucleation theory can be extended to negative pressures as large as -90 atm. This is only the second cryogenic liquid to reach the tensile strength predicted by the classical nucleation theory.
Classical electromagnetic fields from quantum sources in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Holliday, Robert; McCarty, Ryan; Peroutka, Balthazar; Tuchin, Kirill
2017-01-01
Electromagnetic fields are generated in high energy nuclear collisions by spectator valence protons. These fields are traditionally computed by integrating the Maxwell equations with point sources. One might expect that such an approach is valid at distances much larger than the proton size and thus such a classical approach should work well for almost the entire interaction region in the case of heavy nuclei. We argue that, in fact, the contrary is true: due to the quantum diffusion of the proton wave function, the classical approximation breaks down at distances of the order of the system size. We compute the electromagnetic field created by a charged particle described initially as a Gaussian wave packet of width 1 fm and evolving in vacuum according to the Klein-Gordon equation. We completely neglect the medium effects. We show that the dynamics, magnitude and even sign of the electromagnetic field created by classical and quantum sources are different.
Navigating the grounded theory terrain. Part 1.
Hunter, Andrew; Murphy, Kathy; Grealish, Annmarie; Casey, Dympna; Keady, John
2011-01-01
The decision to use grounded theory is not an easy one and this article aims to illustrate and explore the methodological complexity and decision-making process. It explores the decision making of one researcher in the first two years of a grounded theory PhD study looking at the psychosocial training needs of nurses and healthcare assistants working with people with dementia in residential care. It aims to map out three different approaches to grounded theory: classic, Straussian and constructivist. In nursing research, grounded theory is often referred to but it is not always well understood. This confusion is due in part to the history of grounded theory methodology, which is one of development and divergent approaches. Common elements across grounded theory approaches are briefly outlined, along with the key differences of the divergent approaches. Methodological literature pertaining to the three chosen grounded theory approaches is considered and presented to illustrate the options and support the choice made. The process of deciding on classical grounded theory as the version best suited to this research is presented. The methodological and personal factors that directed the decision are outlined. The relative strengths of Straussian and constructivist grounded theories are reviewed. All three grounded theory approaches considered offer the researcher a structured, rigorous methodology, but researchers need to understand their choices and make those choices based on a range of methodological and personal factors. In the second article, the final methodological decision will be outlined and its research application described.
Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers
Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.
2010-01-01
A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450
Microbiome and Culture Based Analysis of Chronic Rhinosinusitis Compared to Healthy Sinus Mucosa
Koeller, Kerstin; Herlemann, Daniel P. R.; Schuldt, Tobias; Ovari, Attila; Guder, Ellen; Podbielski, Andreas; Kreikemeyer, Bernd; Olzowy, Bernhard
2018-01-01
The role of bacteria in chronic rhinosinusitis (CRS) is still not well understood. Whole microbiome analysis adds new aspects to our current understanding that is mainly based on isolated bacteria. It is still unclear how the results of microbiome analysis and the classical culture based approaches interrelate. To address this, middle meatus swabs and tissue samples were obtained during sinus surgery in 5 patients with CRS with nasal polyps (CRSwNP), 5 patients with diffuse CRS without nasal polyps (CRSsNP), 5 patients with unilateral purulent maxillary CRS (upm CRS) and 3 patients with healthy sinus mucosa. Swabs were cultured, and associated bacteria were identified. Additionally, parts of each tissue sample also underwent culture approaches, and in parallel DNA was extracted for 16S rRNA gene amplicon-based microbiome analysis. From tissue samples 4.2 ± 1.2 distinct species per patient were cultured, from swabs 5.4 ± 1.6. The most frequently cultured species from the swabs were Propionibacterium acnes, Staphylococcus epidermidis, Corynebacterium spp. and Staphylococcus aureus. The 16S-RNA gene analysis revealed no clear differentiation of the bacterial community of healthy compared to CRS samples of unilateral purulent maxillary CRS and CRSwNP. However, the bacterial community of CRSsNP differed significantly from the healthy controls. In the CRSsNP samples Flavobacterium, Pseudomonas, Pedobacter, Porphyromonas, Stenotrophomonas, and Brevundimonas were significantly enriched compared to the healthy controls. Species isolated from culture did not generally correspond with the most abundant genera in microbiome analysis. Only Fusobacteria, Parvimonas, and Prevotella found in 2 unilateral purulent maxillary CRS samples by the cultivation dependent approach were also found in the cultivation independent approach in high abundance, suggesting a classic infectious pathogenesis of odontogenic origin in these two specific cases. Alterations of the bacterial community might be a more crucial factor for the development of CRSsNP compared to CRSwNP. Further studies are needed to investigate the relation between bacterial community characteristics and the development of CRSsNP. PMID:29755418
Classical conformal blocks and accessory parameters from isomonodromic deformations
NASA Astrophysics Data System (ADS)
Lencsés, Máté; Novaes, Fábio
2018-04-01
Classical conformal blocks appear in the large central charge limit of 2D Virasoro conformal blocks. In the AdS3/CFT2 correspondence, they are related to classical bulk actions and used to calculate entanglement entropy and geodesic lengths. In this work, we discuss the identification of classical conformal blocks and the Painlevé VI action, showing how isomonodromic deformations naturally appear in this context. We recover the accessory parameter expansion of Heun's equation from the isomonodromic τ-function. We also discuss how the c = 1 expansion of the τ-function leads to a novel approach to calculate the 4-point classical conformal block.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, Douglas A.
2007-10-08
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
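A toy sketch of the histogram-comparison idea; the curve-classification scheme, the 8-class coding, and the chi-square distance below are hypothetical choices for illustration, not the report's actual classifier.

```python
import numpy as np

def curve_class(curve):
    """Coarse label for one mouse curve: quantized net direction (4 bins)
    crossed with curvature sign (2 bins) -> 8 classes. Hypothetical scheme."""
    d = curve[-1] - curve[0]
    direction = int(((np.arctan2(d[1], d[0]) + np.pi) // (np.pi / 2)) % 4)
    a, b = curve[1:-1] - curve[0], curve[2:] - curve[0]
    signed_area = (a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]).sum()
    return direction * 2 + int(signed_area > 0)

def histogram(curves, n_classes=8):
    h = np.bincount([curve_class(c) for c in curves], minlength=n_classes)
    return h / h.sum()

def chi2_distance(h1, h2):
    m = (h1 + h2) / 2
    return float(np.sum(np.where(m > 0, (h1 - m) ** 2 / np.where(m > 0, m, 1), 0)))

# Two sessions of random-walk "mouse curves" from the same generator:
# the histograms should be close, so the distance should be small.
rng = np.random.default_rng(3)
session = lambda: [np.cumsum(rng.standard_normal((20, 2)), axis=0) for _ in range(200)]
print(chi2_distance(histogram(session()), histogram(session())))
```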
Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.
Akkus, Zeynettin; Galimzianova, Alfiia; Hoogi, Assaf; Rubin, Daniel L; Erickson, Bradley J
2017-08-01
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Model-Based Design of Tree WSNs for Decentralized Detection.
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-08-20
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.
Parallel approach for bioinspired algorithms
NASA Astrophysics Data System (ADS)
Zaporozhets, Dmitry; Zaruba, Daria; Kulieva, Nina
2018-05-01
In the paper, a probabilistic parallel approach based on a population heuristic, the genetic algorithm, is suggested. The authors propose using a multithreading approach at the micro level, at which new alternative solutions are generated. On each iteration, several threads can be started that independently use the same population to generate new solutions. After all threads complete, a selection operator combines the obtained results into the new population. To confirm the effectiveness of the suggested approach, the authors have developed software on the basis of which experimental computations can be carried out. The authors have considered a classic optimization problem – finding a Hamiltonian cycle in a graph. Experiments show that, due to the parallel approach at the micro level, a speed-up can be obtained on graphs with 250 and more vertices.
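A minimal sketch of this micro-level threading scheme on a toy fitness function; note that CPython's GIL limits true parallel speed-up for pure-Python work, so the sketch illustrates the structure (threads breeding from a shared population, then a merging selection) rather than the reported timings.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy fitness: maximize the number of 1s in a bitstring (a stand-in for the
# Hamiltonian-cycle objective used in the paper).
def fitness(ind):
    return sum(ind)

def make_offspring(population, k):
    """One thread's micro-level work: breed k children from the shared population."""
    children = []
    for _ in range(k):
        a, b = random.sample(population, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]            # one-point crossover
        i = random.randrange(len(child))
        child[i] ^= 1                        # bit-flip mutation
        children.append(child)
    return children

population = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]
for _ in range(50):
    with ThreadPoolExecutor(max_workers=4) as pool:
        batches = list(pool.map(make_offspring, [population] * 4, [10] * 4))
    merged = population + [c for batch in batches for c in batch]
    population = sorted(merged, key=fitness, reverse=True)[:30]   # selection
print(fitness(population[0]))   # approaches 40
```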
Hybrid Visible Light and Ultrasound-Based Sensor for Distance Estimation
Rabadan, Jose; Guerra, Victor; Rodríguez, Rafael; Rufo, Julio; Luna-Rivera, Martin; Perez-Jimenez, Rafael
2017-01-01
Distance estimation plays an important role in location-based services, which have become very popular in recent years. In this paper, a new short-range cricket sensor-based approach is proposed for indoor location applications. This solution uses the Time Difference of Arrival (TDoA) between an optical and an ultrasound signal, transmitted simultaneously, to estimate the distance from the base station to the mobile receiver. The TDoA measured at the mobile receiver endpoint is proportional to the distance. The use of optical and ultrasound signals instead of the conventional radio wave signal makes the proposed approach suitable for environments with high levels of electromagnetic interference or where the propagation of radio frequencies is entirely restricted. Furthermore, unlike classical cricket systems, a double-way measurement procedure is introduced, allowing both the base station and the mobile node to perform distance estimation simultaneously. PMID:28208584
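Because the optical pulse arrives essentially instantaneously compared with the ultrasound pulse, the TDoA maps to distance almost linearly. A small sketch with an assumed speed of sound (the exact form keeps the tiny optical correction):

```python
# Distance from the optical/ultrasound time difference of arrival.
# The light pulse marks t = 0; the ultrasound pulse arrives dt later.
C_LIGHT = 299_792_458.0      # m/s
V_SOUND = 343.0              # m/s in air at ~20 C (assumed)

def distance_from_tdoa(dt_seconds):
    # Exact form: d / v_s - d / c = dt  =>  d = dt / (1/v_s - 1/c)
    return dt_seconds / (1.0 / V_SOUND - 1.0 / C_LIGHT)

print(distance_from_tdoa(10e-3))   # ~3.43 m for a 10 ms difference
```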
Bernstein, Leslie R; Trahiotis, Constantine
2017-02-01
Interaural cross-correlation-based models of binaural processing have accounted successfully for a wide variety of binaural phenomena, including binaural detection, binaural discrimination, and measures of extents of laterality based on interaural temporal disparities, interaural intensitive disparities, and their combination. This report focuses on quantitative accounts of data obtained from binaural detection experiments published over five decades. Particular emphasis is placed on stimulus contexts for which commonly used correlation-based approaches fail to provide adequate explanations of the data. One such context concerns binaural detection of signals masked by certain noises that are narrow-band and/or interaurally partially correlated. It is shown that a cross-correlation-based model that includes stages of peripheral auditory processing can, when coupled with an appropriate decision variable, account well for a wide variety of classic and recently published binaural detection data including those that have, heretofore, proven to be problematic.
NASA Astrophysics Data System (ADS)
Aydogan, D.
2007-04-01
An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies, such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template; CNN can also be considered a nonlinear convolution of these matrices. The template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in the optimization of the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation (CC) method to the same anomaly map, as the latter approach can detect some features that are difficult to identify on Bouguer anomaly maps. The approach was then applied to the Bouguer anomaly map of Biga and its surrounding area in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
Grid occupancy estimation for environment perception based on belief functions and PCR6
NASA Astrophysics Data System (ADS)
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far, based on belief function modeling and the classical Dempster's rule of combination. Grid maps offer a useful representation of the perceived world for mobile robotics navigation. They will play a major role in the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment in which the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework through an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination with the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
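For two sources, PCR6 coincides with PCR5: each conflicting product m1(X)m2(Y) with X∩Y = ∅ is redistributed back to X and Y proportionally to the masses that generated it. A toy sketch on a two-element occupancy frame follows; the masses are hypothetical and the frame is a deliberately minimal stand-in for a grid cell.

```python
from itertools import product

E, O = frozenset("E"), frozenset("O")
EO = E | O   # ignorance: empty-or-occupied

def pcr6_combine(m1, m2):
    """Two-source PCR6 (= PCR5 for two sources): conjunctive combination,
    with each conflicting product given back to its two focal sets
    proportionally to the masses that produced it."""
    out = {k: 0.0 for k in set(m1) | set(m2)}
    for (X, a), (Y, b) in product(m1.items(), m2.items()):
        Z = X & Y
        if Z:
            out[Z] = out.get(Z, 0.0) + a * b
        else:  # conflict: redistribute to X and Y, proportionally to a and b
            out[X] += a * a * b / (a + b)
            out[Y] += b * b * a / (a + b)
    return out

# Cell seen "occupied" by the sensor but "empty" by the prior (toy masses).
m_sensor = {O: 0.7, EO: 0.3}
m_prior  = {E: 0.6, EO: 0.4}
print(pcr6_combine(m_sensor, m_prior))   # masses still sum to 1
```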
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
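As context for the priority-rule component, the classic baseline heuristic for the BPP is first-fit decreasing, sketched below; AugNN's neural weight-update layer, not shown here, would perturb the item priorities across iterations.

```python
def first_fit_decreasing(items, capacity):
    """Classic priority-rule heuristic for bin packing: sort items by
    decreasing size, place each into the first bin where it fits."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [4, 8, 1, 4, 2, 1, 6, 5, 3, 7]
print(len(first_fit_decreasing(items, capacity=10)))  # 5 bins for this toy instance
```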
ERIC Educational Resources Information Center
Masciantonio, Rudolph
This is a students' programmed text for Level Alpha of a humanistic approach to the instruction of Classical Greek and Greek culture in secondary schools. The goals of the program are to help students become aware of: (1) the impact of Hellenic civilization on contemporary society, including the impact of the Greek language on English; (2) the…
ERIC Educational Resources Information Center
Masciantonio, Rudolph
This is a teacher's guide for Level Beta of a humanistic approach to instruction of Classical Greek and Greek culture in secondary schools. The goals of the program are to help students become aware of: (1) the impact of Hellenic civilization on contemporary society, including the impact of the Greek language on English; (2) the similarities and…
ERIC Educational Resources Information Center
Masciantonio, Rudolph
This is a student's programmed text for Level Beta of a humanistic approach to instruction of Classical Greek and Greek culture in secondary schools. The goals of the program are to help students become aware of: (1) the impact of Hellenic civilization on contemporary society, including the impact of the Greek language on English; (2) the…
Quantum theory for 1D X-ray free electron laser
NASA Astrophysics Data System (ADS)
Anisimov, Petr M.
2018-06-01
Classical 1D X-ray Free Electron Laser (X-ray FEL) theory has stood the test of time by guiding FEL design and development prior to any full-scale analysis. Future X-ray FELs and inverse-Compton sources, where photon recoil approaches an electron energy spread value, push the classical theory to its limits of applicability. After substantial efforts by the community to find what those limits are, there is no universally agreed upon quantum approach to design and development of future X-ray sources. We offer a new approach to formulate the quantum theory for 1D X-ray FELs that has an obvious connection to the classical theory, which allows for immediate transfer of knowledge between the two regimes. We exploit this connection in order to draw quantum mechanical conclusions about the quantum nature of electrons and generated radiation in terms of FEL variables.
Dose Equivalents for Second-Generation Antipsychotic Drugs: The Classical Mean Dose Method
Leucht, Stefan; Samara, Myrto; Heres, Stephan; Patel, Maxine X.; Furukawa, Toshi; Cipriani, Andrea; Geddes, John; Davis, John M.
2015-01-01
Background: The concept of dose equivalence is important for many purposes. The classical approach published by Davis in 1974 subsequently dominated textbooks for several decades. It was based on the assumption that the mean doses found in flexible-dose trials reflect the average optimum dose, which can be used for the calculation of dose equivalence. We are the first to apply the method to second-generation antipsychotics. Methods: We searched for randomized, double-blind, flexible-dose trials in acutely ill patients with schizophrenia that examined 13 oral second-generation antipsychotics, haloperidol, and chlorpromazine (last search June 2014). We calculated the mean doses of each drug weighted by sample size and divided them by the weighted mean olanzapine dose to obtain olanzapine equivalents. Results: We included 75 studies with 16 555 participants. The doses equivalent to 1 mg/d olanzapine were: amisulpride 38.3 mg/d, aripiprazole 1.4 mg/d, asenapine 0.9 mg/d, chlorpromazine 38.9 mg/d, clozapine 30.6 mg/d, haloperidol 0.7 mg/d, quetiapine 32.3 mg/d, risperidone 0.4 mg/d, sertindole 1.1 mg/d, ziprasidone 7.9 mg/d, zotepine 13.2 mg/d. For iloperidone, lurasidone, and paliperidone no data were available. Conclusions: The classical mean dose method is not reliant on the limited availability of fixed-dose data at the lower end of the effective dose range, which is the major limitation of “minimum effective dose methods” and “dose-response curve methods.” In contrast, the mean doses found by the current approach may have in part depended on the dose ranges chosen for the original trials. Ultimate conclusions on the dose equivalence of antipsychotics will need to be based on a review of various methods. PMID:25841041
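The arithmetic of the classical mean dose method is simply a sample-size-weighted mean dose per drug divided by the weighted mean olanzapine dose; a sketch with made-up trial records (the numbers below are illustrative, not the study's data):

```python
# Classical mean dose method: sample-size-weighted mean dose of each drug,
# divided by the weighted mean olanzapine dose (all records are made up).
trials = {                       # drug -> list of (mean dose in mg/d, N)
    "olanzapine":  [(15.2, 120), (14.1, 200), (16.0, 80)],
    "risperidone": [(5.4, 150), (6.1, 90)],
}

def weighted_mean_dose(records):
    total_n = sum(n for _, n in records)
    return sum(dose * n for dose, n in records) / total_n

olz = weighted_mean_dose(trials["olanzapine"])
for drug, recs in trials.items():
    print(drug, round(weighted_mean_dose(recs) / olz, 2), "olanzapine equivalents")
```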
The ?new? economics of education: Towards a ?unified? macro/micro-educational planning policy
NASA Astrophysics Data System (ADS)
Kraft, Richard H.; Nakib, Yasser
1991-09-01
What has become the "classical" theory of the economics of education, first systematically laid down in the 1960s, is based on the analysis of measurable variables. The concept of investment in human capital supposes that higher funding for education will increase productivity and income. Estimates of cost-effectiveness and of returns to investment, based on notional income foregone, have been features of this approach. Now, with a growing realization of the failure of the classical theory to deal with "realities" in the education market and to offer effective policy recommendations, other ideologies have again become more visible. The lack of attention given to the labor demand side of the education-earnings equation, and the inability of theoretical models to capture all complex variables in all sectors of the labor market, have been criticized. Also, it has again been recognized that education has a socialization role. The phenomena of under-education and over-education have been investigated, and attention has been given to the implications of social conflicts and structural changes in the labor market. If human competences are to be developed, it is necessary to look beyond the classical model of the economics of education to microeconomic analysis and to the economic and social conditions which will act as incentives.
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan
2016-11-01
Classical hydrodynamic stability theory for laminar shear flows, whether concerned with long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex or the asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.
Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach
NASA Technical Reports Server (NTRS)
Pei, Jing; Newsome, Jerry R.
2015-01-01
Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variations would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H(sub infinity) and mu are applicable to MIMO systems but have not been adopted as standard practices within the launch vehicle controls community. This paper took advantage of a simple singular-value-based MIMO stability margin evaluation method based on work done by Mukhopadhyay and Newsom and applied it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that could be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.
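A sketch of the singular-value computation at the heart of such an analysis: find alpha = min over omega of the smallest singular value of I + L(j*omega), then read off guaranteed simultaneous multiloop margins from the standard bounds GM in [1/(1+alpha), 1/(1-alpha)] and PM = ±2 arcsin(alpha/2). The 2x2 loop transfer matrix below is a hypothetical example, not the SLS model.

```python
import numpy as np

# Hypothetical 2x2 loop transfer matrix (NOT the SLS model): two
# integrator-plus-lag channels with weak cross-coupling.
def L(s):
    return np.array([[1.6 / (s * (s + 1.0)), 0.3 / (s + 2.0)],
                     [0.2 / (s + 2.0),       1.2 / (s * (s + 1.0))]])

omega = np.logspace(-2, 2, 4000)
alpha = min(np.linalg.svd(np.eye(2) + L(1j * w), compute_uv=False)[-1]
            for w in omega)

# Guaranteed simultaneous multiloop margins from sigma_min(I + L) >= alpha
# (standard singular-value bounds; valid for alpha < 1).
gm_lo, gm_hi = 1.0 / (1.0 + alpha), 1.0 / (1.0 - alpha)
pm = 2.0 * np.degrees(np.arcsin(alpha / 2.0))
print(f"alpha = {alpha:.2f}: gain in [{gm_lo:.2f}, {gm_hi:.2f}], phase +/- {pm:.1f} deg")
```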
Plasmonic refractive index sensing using strongly coupled metal nanoantennas: nonlocal limitations.
Wang, Hancong
2018-06-25
Localized surface plasmon resonance based on coupled metallic nanoparticles has been extensively studied for refractive index sensing and the detection of molecules. The amount of resonance peak-shift depends on the refractive index of the surrounding medium and the geometry/symmetry of plasmonic oligomers. It has recently been found that as the feature size or the gap distance of plasmonic nanostructures approaches several nanometers, quantum effects can change the plasmon coupling in nanoparticles. However, most of the research on plasmonic sensing has been done based on classical local calculations, even for interparticle gaps below ~3 nm, in which nonlocal screening plays an important role. Here, we theoretically investigate the nonlocal effect on the evolution of various plasmon resonance modes in strongly coupled nanoparticle dimer and trimer antennas with the gap down to 1 nm. Then, the refractive index sensing in these nonlocal systems is evaluated and compared with the results of classical calculations. We find that in the nonlocal regime, both the refractive index sensitivity factor and the figure of merit are actually smaller than their classical counterparts, mainly due to the saturation of plasmon shifts. These results should be beneficial for the understanding of the interaction between light and nonlocal plasmonic nanostructures and for the development of plasmonic devices such as nanosensors and nanoantennas.
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels. The out-of-plane shear characteristics of such panels depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed using a unit cell model, and three analytical approaches were evaluated. The analytical calculations showed that two of the approaches provided reasonable predictions of the transverse shear modulus as compared with the experimental results. However, the approach based upon the classical lamination theory showed large deviations from the experimental data.
Blind detection of giant pulses: GPU implementation
NASA Astrophysics Data System (ADS)
Ait-Allal, Dalal; Weber, Rodolphe; Dumez-Viou, Cédric; Cognard, Ismael; Theureau, Gilles
2012-01-01
Radio astronomical pulsar observations require specific instrumentation and dedicated signal processing to cope with the dispersion caused by the interstellar medium. Moreover, the quality of observations can be limited by radio frequency interference (RFI) generated by telecommunications activity. This article presents the innovative pulsar instrumentation based on graphics processing units (GPUs) which has been designed at the Nançay Radio Astronomical Observatory. In addition, for giant pulse searches, we propose a new approach which combines a hardware-efficient search method with some RFI mitigation capabilities. Although this approach is less sensitive than the classical approach, its advantage is that no a priori information on the pulsar parameters is required. The validation of a GPU implementation is under way.
Diagrammar in classical scalar field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaruzza, E., E-mail: Enrico.Cattaruzza@gmail.com; Gozzi, E., E-mail: gozzi@ts.infn.it; INFN, Sezione di Trieste
2011-09-15
In this paper we analyze perturbatively a gφ^4 classical field theory with and without temperature. In order to do that, we make use of a path-integral approach developed some time ago for classical theories. It turns out that the diagrams appearing at the classical level are many more than at the quantum level due to the presence of extra auxiliary fields in the classical formalism. We shall show that a universal supersymmetry present in the classical path-integral mentioned above is responsible for the cancelation of various diagrams. The same supersymmetry allows the introduction of super-fields and super-diagrams which considerably simplify the calculations and make the classical perturbative calculations almost 'identical' formally to the quantum ones. Using the super-diagrams technique, we develop the classical perturbation theory up to third order. We conclude the paper with a perturbative check of the fluctuation-dissipation theorem. Highlights: We provide the Feynman diagrams of perturbation theory for a classical field theory. We give a super-formalism which links the quantum diagrams to the classical ones. We check perturbatively the fluctuation-dissipation theorem.
Online Performance-Improvement Algorithms
1994-08-01
fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ...Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. 94...Deferred Data Structuring Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order
Spread-Spectrum Carrier Estimation With Unknown Doppler Shift
NASA Technical Reports Server (NTRS)
DeLeon, Phillip L.; Scaife, Bradley J.
1998-01-01
We present a method for the frequency estimation of a BPSK modulated, spread-spectrum carrier with unknown Doppler shift. The approach relies on a classic periodogram in conjunction with a spectral matched filter. Simulation results indicate accurate carrier estimation with processing gains near 40. A DSP-based prototype has been implemented for real-time carrier estimation for use in New Mexico State University's proposal for NASA's Demand Assignment Multiple Access service.
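One generic way to make a periodogram work on a BPSK-spread carrier (not necessarily the authors' exact front end) is to square the signal, which wipes the ±1 modulation and produces a spectral line at twice the carrier; all rates and amplitudes below are hypothetical.

```python
import numpy as np

fs, f0, n = 1_000_000.0, 123_456.7, 2**16       # sample rate, carrier, samples
rng = np.random.default_rng(4)
t = np.arange(n) / fs

chips = rng.choice([-1.0, 1.0], n)              # BPSK spreading sequence
x = chips * np.cos(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

# Squaring wipes the +/-1 modulation: x^2 = 1/2 + 1/2*cos(4*pi*f0*t) + noise,
# so the periodogram of x^2 has a line at 2*f0 (plus a DC term we skip).
spec = np.abs(np.fft.rfft(x * x)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f_est = freqs[np.argmax(spec[1:]) + 1] / 2.0
print(f_est)                                     # close to 123456.7 Hz
```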
Challenges of Electronic Medical Surveillance Systems
2004-06-01
More sophisticated approaches, such as regression models and classical autoregressive integrated moving average (ARIMA) models that make estimates based on...with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data so that...works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus
Visual Tracking Using 3D Data and Region-Based Active Contours
2016-09-28
adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for … linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering … the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve …
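For readers unfamiliar with the particle-filter machinery mentioned above, a minimal bootstrap particle filter looks roughly like this (a generic sketch with caller-supplied model functions, not the report's curve-tracking formulation):

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, f, h, q_sample, lik):
    """Generic bootstrap particle filter for a scalar state.
    f: state transition mean, q_sample(n): process-noise sampler,
    h: observation function, lik(y, hx): observation likelihood."""
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
    estimates = []
    for yk in y:
        x = f(x) + q_sample(n_particles)             # propagate particles
        w = lik(yk, h(x))                            # weight by likelihood
        w /= w.sum()
        estimates.append(np.dot(w, x))               # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x = x[idx]
    return np.array(estimates)
```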
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
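One simple flavor of the reuse idea can be sketched as follows: warm-start each new solve with a least-squares projection of the right-hand side onto the span of earlier solutions, then hand the result to GMRES. This is only a hedged illustration; the paper's Galerkin preprocessing and eigenvector enrichment are more elaborate.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_with_reuse(A, rhs_list):
    """Solve A x = b^(i) for several right-hand sides, warm-starting each
    solve by projecting onto the span of previously computed solutions."""
    solutions = []
    for b in rhs_list:
        if solutions:
            W = np.column_stack(solutions)           # reuse subspace
            AW = A @ W
            y, *_ = np.linalg.lstsq(AW, b, rcond=None)  # min ||A W y - b||
            x0 = W @ y                               # warm start
        else:
            x0 = np.zeros_like(b)
        x, info = gmres(A, b, x0=x0)
        solutions.append(x)
    return solutions
```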
A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors
NASA Astrophysics Data System (ADS)
Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia
This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the Wideband CDMA strategy employed in UMTS, and the model permits the performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, which makes it possible to model the various kinds of interference accurately. Some numerical examples are given with reference to the ITU-R M.1225 Recommendation, but the analysis could be extended to different channel descriptions.
Quantum annealing with parametrically driven nonlinear oscillators
NASA Astrophysics Data System (ADS)
Puri, Shruti
While progress has been made towards building Ising machines to solve hard combinatorial optimization problems, quantum speedups have so far been elusive. Furthermore, protecting annealers against decoherence and achieving long-range connectivity remain important outstanding challenges. With the hope of overcoming these challenges, I introduce a new paradigm for quantum annealing that relies on continuous variable states. Unlike the more conventional approach based on two-level systems, in this approach, quantum information is encoded in two coherent states that are stabilized by parametrically driving a nonlinear resonator. I will show that a fully connected Ising problem can be mapped onto a network of such resonators, and outline an annealing protocol based on adiabatic quantum computing. During the protocol, the resonators in the network evolve from vacuum to coherent states representing the ground state configuration of the encoded problem. In short, the system evolves between two classical states following non-classical dynamics. As will be supported by numerical results, this new annealing paradigm leads to superior noise resilience. Finally, I will discuss a realistic circuit QED realization of an all-to-all connected network of parametrically driven nonlinear resonators. The continuous variable nature of the states in the large Hilbert space of the resonator provides new opportunities for exploring quantum phase transitions and non-stoquastic dynamics during the annealing schedule.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region-growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering), constrained by a threshold derived from the previous HSWO region-growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is the special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Rectennas at optical frequencies: How to analyze the response
NASA Astrophysics Data System (ADS)
Joshi, Saumil; Moddel, Garret
2015-08-01
Optical rectennas, antenna-coupled diode rectifiers that receive optical-frequency electromagnetic radiation and convert it to DC output, have been proposed for use in harvesting electromagnetic radiation from a blackbody source. The operation of these devices is qualitatively different from that of lower-frequency rectennas, and their design requires a new approach. To that end, we present a method to determine the rectenna response to high frequency illumination. It combines classical circuit analysis with classical and quantum-based photon-assisted tunneling response of a high-speed diode. We demonstrate the method by calculating the rectenna response for low and high frequency monochromatic illumination, and for radiation from a blackbody source. Such a blackbody source can be a hot body generating waste heat, or radiation from the sun.
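On the quantum side of such an analysis, the standard Tien-Gordon picture of photon-assisted tunneling weights the dark I-V curve, sampled at photon-voltage steps, by squared Bessel factors. The sketch below implements only that textbook formula (the dark I-V callable is a placeholder); the paper's combination with classical circuit analysis is not reproduced.

```python
import numpy as np
from scipy.special import jv

HBAR = 1.054_571_8e-34  # J s
E = 1.602_176_6e-19     # C

def illuminated_current(i_dark, v0, v_ac, omega, n_max=20):
    """Tien-Gordon photon-assisted tunneling current at DC bias v0.
    i_dark: dark I-V curve as a callable accepting NumPy arrays,
    v_ac: AC voltage amplitude across the diode, omega: angular frequency."""
    alpha = E * v_ac / (HBAR * omega)          # photon-absorption parameter
    n = np.arange(-n_max, n_max + 1)
    # Dark I-V sampled at photon-energy voltage steps, Bessel weighted.
    return np.sum(jv(n, alpha) ** 2 * i_dark(v0 + n * HBAR * omega / E))
```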
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
NASA Astrophysics Data System (ADS)
Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.
2018-01-01
We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.
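For orientation, a commonly quoted illustrative form of a Cattaneo-type diffusion equation with temporal and spatial fractality is shown below (Caputo time derivatives, Riesz-type spatial operator); the paper's exact equation may differ.

```latex
\tau^{\alpha}\,\frac{\partial^{2\alpha} n(\mathbf{r},t)}{\partial t^{2\alpha}}
  + \frac{\partial^{\alpha} n(\mathbf{r},t)}{\partial t^{\alpha}}
  = D\,\nabla^{\beta} n(\mathbf{r},t),
  \qquad 0 < \alpha \le 1,\quad 1 < \beta \le 2,
```

Here α = 1 and β = 2 recover the classical Cattaneo (telegrapher's) equation, and the τ → 0 limit gives ordinary fractional diffusion.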
Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme
2015-11-02
An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were subsequently used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
Collisional excitation of HC3N by para- and ortho-H2
NASA Astrophysics Data System (ADS)
Faure, Alexandre; Lique, François; Wiesenfeld, Laurent
2016-08-01
New calculations for the rotational excitation of cyanoacetylene by collisions with hydrogen molecules are performed to include the lowest 38 rotational levels of HC3N and kinetic temperatures up to 300 K. Calculations are based on the interaction potential of Wernli et al., whose accuracy is checked against spectroscopic measurements of the HC3N-H2 complex. The quantum coupled-channel approach is employed and complemented by quasi-classical trajectory calculations. Rate coefficients for ortho-H2 are provided for the first time. Hyperfine-resolved rate coefficients are also deduced. Collisional propensity rules are discussed and comparisons between quantum and classical rate coefficients are presented. These collisional data should prove useful in interpreting HC3N observations in the cold and warm ISM, as well as in protoplanetary discs.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
NASA Astrophysics Data System (ADS)
Errico, F.; Ichchou, M.; De Rosa, S.; Bareille, O.; Franco, F.
2018-06-01
The stochastic response of periodic flat and axially symmetric structures, subjected to random and spatially correlated loads, is analysed here through an approach based on the combination of a wave finite element method and a transfer matrix method. Although its computational cost is lower, the present approach keeps the same accuracy as classic finite element methods. When dealing with homogeneous structures, the accuracy also extends to higher frequencies, without increasing the calculation time. Depending on the complexity of the structure and the frequency range, the computational cost can be reduced by more than two orders of magnitude. The presented methodology is validated for both simple and complex structural shapes, under deterministic and random loads.
Non-linear non-local molecular electrodynamics with nano-optical fields.
Chernyak, Vladimir Y; Saurabh, Prasoon; Mukamel, Shaul
2015-10-28
The interaction of optical fields sculpted on the nano-scale with matter may not be described by the dipole approximation since the fields may vary appreciably across the molecular length scale. Rather than incrementally adding higher multipoles, it is advantageous and more physically transparent to describe the optical process using non-local response functions that intrinsically include all multipoles. We present a semi-classical approach for calculating non-local response functions based on the minimal coupling Hamiltonian. The first, second, and third order response functions are expressed in terms of correlation functions of the charge and the current densities. This approach is based on the gauge invariant current rather than the polarization, and on the vector potential rather than the electric and magnetic fields.
Real time UNIX in embedded control -- A case study within context of LynxOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleines, H.; Zwoll, K.
1996-02-01
Intelligent communication controllers for a layered protocol profile are a typical example of an embedded control application, where the classical approach for the software development is based on a proprietary real-time operating system kernel under which the individual layers are implemented as tasks. Based on the exemplary implementation of a derivative of MAP 3.0, an unusual and innovative approach is presented, where the protocol software is implemented under the UNIX-compatible real-time operating system LynxOS. The overall design of the embedded control application is presented under a more general view, and economical implications as well as aspects of the development environment and performance are discussed.
Treating adult survivors of childhood emotional abuse and neglect: A new framework.
Grossman, Frances K; Spinazzola, Joseph; Zucker, Marla; Hopper, Elizabeth
2017-01-01
This article provides the outline of a new framework for treating adult survivors of childhood emotional abuse and neglect. Component-based psychotherapy (CBP) is an evidence-informed model that bridges, synthesizes, and expands upon several existing schools, or theories, of treatment for adult survivors of traumatic stress. These include approaches to therapy that stem from more classic traditions in psychology, such as psychoanalysis, to more modern approaches including those informed by feminist thought. Moreover, CBP places particular emphasis on integration of key concepts from evidence-based treatment models developed in the past few decades predicated upon thinking and research on the effects of traumatic stress and processes of recovery for survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry. In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples consisting of a veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry and successful results were reported.
Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon
2016-06-01
The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia, using methods drawn from classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt-Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in the alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se, Sp) of (80, 27)%, compared with the values of (100, 76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se, Sp) values of (65, 43)%, compared with (100, 46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in the alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
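For concreteness, an alpha-delta ratio trend can be computed from a single EEG channel with standard spectral tools, roughly as sketched below; the window length and band edges are typical choices, not the study's exact settings.

```python
import numpy as np
from scipy.signal import welch

def alpha_delta_ratio(eeg, fs, win_s=60):
    """Alpha-delta power ratio trend from a single EEG channel.
    eeg: 1-D samples, fs: sampling rate (Hz). Returns one ADR per window."""
    n = int(win_s * fs)
    ratios = []
    for start in range(0, len(eeg) - n + 1, n):
        f, pxx = welch(eeg[start:start + n], fs=fs, nperseg=int(4 * fs))
        alpha = pxx[(f >= 8) & (f < 13)].sum()   # alpha band power
        delta = pxx[(f >= 1) & (f < 4)].sum()    # delta band power
        ratios.append(alpha / delta)
    return np.array(ratios)

# A sustained drop of this trend below a patient-specific baseline is
# the kind of event the classical studies flagged as possible DCI.
```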
Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors
NASA Astrophysics Data System (ADS)
Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.
2016-12-01
The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programing and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
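The programming convenience of the Walsh basis is easy to see in a sketch: naturally ordered Hadamard rows reordered by sequency give the Walsh functions (a generic construction, not the authors' FPGA implementation).

```python
import numpy as np
from scipy.linalg import hadamard

def walsh_matrix(order):
    """Return the 2**order Walsh functions (as rows), sorted by sequency
    (number of sign changes), from the naturally ordered Hadamard matrix."""
    H = hadamard(2 ** order)
    sign_changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]

W = walsh_matrix(4)  # 16 Walsh functions of length 16
# Any +/-1 modulation sequence in the Walsh family is a row of W; a
# Walsh-synthesized waveform is a weighted sum of a few such rows.
```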
Bellanger, Martine M; Jourdain, Alain
2004-01-01
This article aims to evaluate the results of two different approaches underlying the attempts to reduce health inequalities in France. In the 'instrumental' approach, resource allocation is based on an indicator to assess the well-being or the quality of life associated with healthcare provision, the argument being that additional resources would respond to needs that could then be treated quickly and efficiently. This governs the distribution of regional hospital budgets. In the second approach, health professionals and users in a given region are involved in a consensus process to define those priorities to be included in programme formulation. This 'procedural' approach is employed in the case of the regional health programmes. In this second approach, the evaluation of the results runs parallel with an analysis of the process using Rawlsian principles, whereas the first approach is based on the classical economic model. At this stage, a pragmatic analysis based on both the comparison of regional hospital budgets during the period 1992-2003 (calculated using a 'RAWP [resource allocation working party]-like' formula) and the evolution of regional health policies through the evaluation of programmes for the prevention of suicide, alcohol-related diseases and cancers provides a partial assessment of the impact of the two types of approach, the second having a greater effect on the reduction of regional inequalities.
Collignon, Bertrand; Séguret, Axel; Halloy, José
2016-01-01
Collective motion is one of the most ubiquitous behaviours displayed by social organisms and has led to the development of numerous models. Recent advances in the understanding of sensory system and information processing by animals impels one to revise classical assumptions made in decisional algorithms. In this context, we present a model describing the three-dimensional visual sensory system of fish that adjust their trajectory according to their perception field. Furthermore, we introduce a stochastic process based on a probability distribution function to move in targeted directions rather than on a summation of influential vectors as is classically assumed by most models. In parallel, we present experimental results of zebrafish (alone or in group of 10) swimming in both homogeneous and heterogeneous environments. We use these experimental data to set the parameter values of our model and show that this perception-based approach can simulate the collective motion of species showing cohesive behaviour in heterogeneous environments. Finally, we discuss the advances of this multilayer model and its possible outcomes in biological, physical and robotic sciences. PMID:26909173
A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second one consists of models that scientists have extracted from collected, natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations will employ universal laws while minimizing the need of explicit, often phenomenological, models. They are based on manifold learning methodologies.
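A hedged toy version of the idea, with an invented synthetic "material" response and scikit-learn's locally linear embedding standing in for the paper's manifold-learning machinery:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Hypothetical data: strain/stress states collected from experiments;
# in this picture the constitutive manifold is learned, not postulated.
rng = np.random.default_rng(1)
strain = rng.uniform(0.0, 0.05, (500, 3))
stress = 200.0 * strain + 50.0 * strain ** 2  # synthetic nonlinear response
states = np.hstack([strain, stress])

embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
z = embedding.fit_transform(states)
# New states can then be reconstructed by local interpolation on the
# learned manifold and used in place of an explicit constitutive model,
# while the balance laws are still enforced by the solver.
```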
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
Some conditions of compliance and resistance among hypnotic subjects.
Levitt, E E; Baker, E L; Fish, R C
1990-04-01
Five experimental approaches to the resolution of the century-old Bernheim/Janet dispute and the issue of involuntariness or coercion (the classical suggestion effect) are presented. Four experiments are reported that follow one of the approaches: attempts to induce hypnotic subjects to resist suggestions made in trance. The design is one in which a "resistance instructor" proposes a reward for the resisting subject. Tentative inferences from the results are that the classical suggestion effect is found with a small number of subjects; for a larger number of subjects there is no classical suggestion effect, and for many subjects the outcome is equivocal. Relational factors in the hypnotic dyad influence responsiveness in the subject, the effect being least for those whose susceptibility is high.
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
Quantum theory of multiscale coarse-graining.
Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A
2018-03-14
Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
Dvořák, Martin; Svobodová, Jana; Dubský, Pavel; Riesová, Martina; Vigh, Gyula; Gaš, Bohuslav
2015-03-01
Although the classical formula of peak resolution was derived to characterize the extent of separation only for Gaussian peaks of equal areas, it is often used even when the peaks follow non-Gaussian distributions and/or have unequal areas. This practice can result in misleading information about the extent of separation in terms of the severity of peak overlap. We propose here the use of the equivalent peak resolution value, a term based on relative peak overlap, to characterize the extent of separation that had been achieved. The definition of equivalent peak resolution is not constrained either by the form(s) of the concentration distribution function(s) of the peaks (Gaussian or non-Gaussian) or the relative area of the peaks. The equivalent peak resolution value and the classically defined peak resolution value are numerically identical when the separated peaks are Gaussian and have identical areas and SDs. Using our new freeware program, Resolution Analyzer, one can calculate both the classically defined and the equivalent peak resolution values. With the help of this tool, we demonstrate here that the classical peak resolution values mischaracterize the extent of peak overlap even when the peaks are Gaussian but have different areas. We show that under ideal conditions of the separation process, the relative peak overlap value is easily accessible by fitting the overall peak profile as the sum of two Gaussian functions. The applicability of the new approach is demonstrated on real separations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
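A hedged sketch of the overlap-based idea (not the authors' Resolution Analyzer code): fit the overall profile as a sum of two Gaussians and report the overlapped fraction of the total area. The minimum-envelope overlap used here is one natural definition and may differ from the paper's exact measure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, m, s):
    return a * np.exp(-0.5 * ((t - m) / s) ** 2)

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return gauss(t, a1, m1, s1) + gauss(t, a2, m2, s2)

def relative_overlap(t, signal, p0):
    """Fit the overall profile as a sum of two Gaussians and return the
    overlapped fraction of the total area (p0: initial parameter guess)."""
    p, _ = curve_fit(two_gaussians, t, signal, p0=p0)
    g1, g2 = gauss(t, *p[:3]), gauss(t, *p[3:])
    overlap = np.trapz(np.minimum(g1, g2), t)  # area shared by both peaks
    return overlap / np.trapz(g1 + g2, t)
```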
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U.; Wenning, Thomas J.; Guo, Wei
In the United States, manufacturing facilities accounted for about 32% of total domestic energy consumption in 2014. Robust energy tracking methodologies are critical to understanding energy performance in manufacturing facilities. Due to its simplicity and intuitiveness, the classic energy intensity method (i.e., the ratio of total energy use to total production) is the most widely adopted. However, the classic energy intensity method does not take into account the variation of other relevant parameters (product type, feedstock type, weather, etc.). Furthermore, the energy intensity method assumes that a facility's base energy consumption (energy use at zero production) is zero, which rarely holds true. Therefore, it is commonly recommended to utilize regression models rather than the energy intensity approach for tracking improvements at the facility level. Unfortunately, many energy managers have difficulties understanding why regression models are statistically better than the classic energy intensity method. While anecdotes and qualitative information may convince some, many have major reservations about the accuracy of regression models and whether it is worth the time and effort to gather data and build quality regression models. This paper explains why regression models are theoretically and quantitatively more accurate for tracking energy performance improvements. Based on the analysis of data from 114 manufacturing plants over 12 years, this paper presents quantitative results on the importance of utilizing regression models over the energy intensity methodology. This paper also documents scenarios where regression models do not have significant relevance over the energy intensity method.
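A tiny numerical illustration of the core point, with all numbers invented: when the base load a is nonzero, the intensity ratio drifts with production volume even though the underlying efficiency parameters (a, b) are unchanged.

```python
import numpy as np

# Hypothetical monthly data: production units and energy use (MMBtu).
production = np.array([80, 95, 60, 110, 100, 70, 120, 90], dtype=float)
energy = 5000 + 42.0 * production \
         + np.random.default_rng(2).normal(0, 300, 8)

# Classic intensity metric lumps the base load into the ratio:
intensity = energy.sum() / production.sum()

# A simple regression model E = a + b * P separates it explicitly:
b, a = np.polyfit(production, energy, 1)  # slope first, intercept second
print(f"intensity={intensity:.1f}, base load a={a:.0f}, marginal b={b:.1f}")
# With a > 0, the intensity metric penalizes low-production months even
# when the facility's efficiency (a, b) has not changed at all.
```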
NASA Astrophysics Data System (ADS)
Jahangoshai Rezaee, Mustafa; Jozmaleki, Mehrdad; Valipour, Mahsa
2018-01-01
One of the main features used to decide whether to invest in stock exchange companies is their financial performance. On the other hand, conventional evaluation methods such as data envelopment analysis are not only retrospective, but also incomplete and ineffective approaches for evaluating companies in the future. To remove this problem, it is necessary to design an expert system for evaluating organizations as online data are received from the stock exchange market. This paper deals with an approach for predicting the online financial performance of companies when data are received at different time intervals. The proposed approach is based on integrating fuzzy C-means (FCM), data envelopment analysis (DEA) and an artificial neural network (ANN). The classical FCM method is unable to update the number of clusters and their members when the data are changed or new data are received. Hence, this method is developed in order to make the number of clusters and the cluster members dynamic in classical FCM. Then, DEA is used to evaluate DMUs by using financial ratios to provide targets for the neural network. Finally, the designed network is trained and prepared for predicting companies' future performance. Data on Tehran Stock Market companies for six consecutive years (2007-2012) are used to show the abilities of the proposed approach.
Telesign: a videophone system for sign language distant communication
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Preteux, Francoise J.; Viallet, Jean-Emmanuel
1998-09-01
This paper presents a low bit rate videophone system for deaf people communicating by means of sign language. Classic video conferencing systems have focused on head-and-shoulders sequences, which are not well suited for sign language video transmission since hearing-impaired people also use their hands and arms to communicate. To address the above-mentioned functionality, we have developed a two-step content-based video coding system based on: (1) a segmentation step, in which four or five video objects (VO) are extracted using a cooperative approach between color-based and morphological segmentation; (2) a VO coding step, achieved by using a standardized MPEG-4 video toolbox. Results of encoded sign language video sequences, presented for three target bit rates (32 kbits/s, 48 kbits/s and 64 kbits/s), demonstrate the efficiency of the approach presented in this paper.
Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure
NASA Technical Reports Server (NTRS)
Hussaini, M. Moin; Korte, John J.
1996-01-01
An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavanaugh, J.E.; McQuarrie, A.D.; Shumway, R.H.
Conventional methods for discriminating between earthquakes and explosions at regional distances have concentrated on extracting specific features such as amplitude and spectral ratios from the waveforms of the P and S phases. We consider here an optimum nonparametric classification procedure derived from the classical approach to discriminating between two Gaussian processes with unequal spectra. Two robust variations based on the minimum discrimination information statistic and Renyi's entropy are also considered. We compare the optimum classification procedure with various amplitude and spectral ratio discriminants and show that its performance is superior when applied to a small population of 8 land-based earthquakes and 8 mining explosions recorded in Scandinavia. Several parametric characterizations of the notion of complexity based on modeling earthquakes and explosions as autoregressive or modulated autoregressive processes are also proposed and their performance compared with the nonparametric and feature extraction approaches.
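A hedged sketch of the classical Gaussian-process discriminant underlying this procedure: under a Whittle-type approximation, each class is summarized by a power spectrum estimated from training events (assumed here to be sampled on the same rfft frequency grid as the data), and a waveform is assigned to the class with the larger log-likelihood. The robust MDI and Renyi variants are not shown.

```python
import numpy as np

def whittle_loglik(x, spectrum):
    """Approximate log-likelihood of series x under a zero-mean Gaussian
    process with the given power spectrum (Whittle form); spectrum must
    have len(x)//2 + 1 points, matching np.fft.rfft."""
    n = len(x)
    periodogram = np.abs(np.fft.rfft(x)) ** 2 / n
    s = np.maximum(spectrum, 1e-12)  # guard against zeros
    return -np.sum(np.log(s) + periodogram / s)

def classify(x, spec_quake, spec_blast):
    """Assign waveform x to the class whose spectrum explains it best."""
    lq = whittle_loglik(x, spec_quake)
    lb = whittle_loglik(x, spec_blast)
    return "earthquake" if lq > lb else "explosion"
```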
Liu, Yang; Huang, Yin; Ma, Jianyi; Li, Jun
2018-02-15
Collision energy transfer plays an important role in gas phase reaction kinetics and the relaxation of excited molecules. However, empirical treatments are generally adopted for collisional energy transfer in master equation based approaches. In this work, a classical trajectory approach is employed to investigate the collision energy transfer dynamics in the C2H2-Ne system. The entire potential energy surface is described as the sum of the C2H2 potential and the interaction potential between C2H2 and Ne. It is highlighted that both parts of the entire potential are highly accurate. In particular, the interaction potential is fit to ∼41 300 configurations determined at the level of CCSD(T)-F12a/cc-pCVTZ-F12 with the counterpoise correction. Collision energy transfer dynamics are then carried out on this benchmark potential and on the widely used Lennard-Jones and Buckingham interaction potentials. Energy transfers and related probability densities at different collisional energies are reported and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somnath, Suhas; Collins, Liam; Matheson, Michael A.
We develop and implement a multifrequency spectroscopy and spectroscopic imaging mode, referred to as general dynamic mode (GDM), that captures the complete spatially and stimulus dependent information on nonlinear cantilever dynamics in scanning probe microscopy (SPM). GDM acquires the cantilever response, including harmonics and mode-mixing products, across the entire broadband cantilever spectrum as a function of excitation frequency. GDM spectra substitute the classical measurements in SPM, e.g. amplitude and phase in lock-in detection. Here, GDM is used to investigate the response of a purely capacitively driven cantilever. We use information theory techniques to mine the data and verify the findings with governing equations and classical lock-in based approaches. We explore the dependence of the cantilever dynamics on the tip-sample distance and the AC and DC driving bias. This approach can be applied to investigate the dynamic behavior of other systems within and beyond dynamic SPM. In conclusion, GDM is expected to be useful for separating the contribution of different physical phenomena in the cantilever response and understanding the role of cantilever dynamics in dynamic AFM techniques.
Using extant literature in a grounded theory study: a personal account.
Yarwood-Ross, Lee; Jack, Kirsten
2015-03-01
To provide a personal account of the factors in a doctoral study that led to the adoption of classic grounded theory principles relating to the use of literature. Novice researchers considering grounded theory methodology will become aware of the contentious issue of how and when extant literature should be incorporated into a study. The three main grounded theory approaches are classic, Straussian and constructivist, and the seminal texts provide conflicting beliefs surrounding the use of literature. A classic approach avoids a pre-study literature review to minimise preconceptions and emphasises the constant comparison method, while the Straussian and constructivist approaches focus more on the beneficial aspects of an initial literature review and researcher reflexivity. The debate also extends into the wider academic community, where no consensus exists. This is a methodological paper detailing the authors' engagement in the debate surrounding the role of the literature in a grounded theory study. In the authors' experience, researchers can best understand the use of literature in grounded theory through immersion in the seminal texts, engaging with wider academic literature, and examining their preconceptions of the substantive area. The authors concluded that classic grounded theory principles were appropriate in the context of their doctoral study. Novice researchers will have their own sets of circumstances when preparing their studies and should become aware of the different perspectives to make decisions that they can ultimately justify. This paper can be used by other novice researchers as an example of the decision-making process that led to delaying a pre-study literature review and identifies the resources used to write a research proposal when using a classic grounded theory approach.
An application of Chan-Vese method used to determine the ROI area in CT lung screening
NASA Astrophysics Data System (ADS)
Prokop, Paweł; Surtel, Wojciech
2016-09-01
The article presents two approaches to determining the ROI area in CT lung screening. The first approach is based on a classic method of framing the image in order to determine the ROI by using the MaZda tool. The second approach is based on segmentation of CT images of the lungs and the reduction of redundant information in the image. Of the available active contour methods, the Chan-Vese method was chosen. In order to determine the effectiveness of each approach, an analysis of the resulting ROI textures and an extraction of the texture features were performed using the MaZda tool. The results were compared and presented in the form of radar graphs. The second approach proved to be effective and appropriate, and consequently it is used for further analysis of CT images in the computer-aided diagnosis of sarcoidosis.
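As a hedged sketch of the segmentation step, scikit-image's morphological Chan-Vese variant can be applied to a rescaled CT slice; this stands in for the authors' implementation, and the iteration count and smoothing below are illustrative.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def lung_roi_mask(ct_slice, iterations=150):
    """Segment a CT slice with the (morphological) Chan-Vese model and
    return a binary ROI mask; intensities are rescaled to [0, 1] first."""
    img = ct_slice.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    mask = morphological_chan_vese(img, iterations,
                                   init_level_set='checkerboard',
                                   smoothing=2)
    return mask.astype(bool)
```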
NASA Astrophysics Data System (ADS)
Korovin, Iakov S.; Tkachenko, Maxim G.
2018-03-01
In this paper we present a heuristic approach improving the efficiency of methods used for the creation of efficient architectures of water distribution networks. The essence of the approach is a procedure of search space reduction by limiting the range of available pipe diameters that can be used for each edge of the network graph. In order to perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow rate limitation for each edge of the network. The first boundary scenario provides the most uniform distribution of the flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small time cost. This approach was used to modify a genetic algorithm (GA). The proposed GA provides a variable number of variants of each gene, according to the number of diameters in the list, taking flow restrictions into account. The proposed approach was applied to the evaluation of a well-known test network - the Hanoi water distribution network [1] - and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and more obvious convergence in comparison with the classical version of the GA.
Physical break-down of the classical view on cancer cell invasion and metastasis.
Mierke, Claudia T
2013-03-01
Eight classical hallmarks of cancer have been proposed and are well defined by biochemical or molecular genetic methods, but they are not yet precisely defined in terms of cellular biophysical processes. To define the malignant transformation of neoplasms and finally reveal the functional pathway that enables cancer cells to promote cancer progression, these classical hallmarks of cancer need to be supplemented by specific biomechanical properties of cancer cells and their microenvironment, such as the extracellular matrix and embedded cells such as fibroblasts, macrophages or endothelial cells. Nonetheless, a novel ninth hallmark of cancer is still missing from classical tumor biological reviews: the role of physics in cancer disease, through the natural selection of an aggressive (highly invasive) subtype of cancer cells. These physical aspects can be analyzed using state-of-the-art biophysical methods. Thus, this review will present current cancer research in a different light and will focus on novel physical methods to investigate the aggressiveness of cancer cells from a biophysicist's point of view. This may lead to novel insights into cancer disease and will move beyond classical views on cancer. In addition, this review will discuss how the physics of cancer can help to reveal whether cancer cells will invade connective tissue and metastasize. In particular, it will point out how physics can improve, break down or support classical approaches to examining tumor growth even across primary tumor boundaries, the invasion of single or collective cancer cells, transendothelial migration of cancer cells and metastasis in targeted organs. Finally, this review will show how physical measurements can be integrated into classical tumor biological analysis approaches. The insights into physical interactions between cancer cells, the primary tumor and the microenvironment may help to solve some "old" questions in cancer disease progression and may finally lead to novel approaches for the development and improvement of cancer diagnostics and therapies. Copyright © 2013 Elsevier GmbH. All rights reserved.
Adaptive distance metric learning for diffusion tensor image segmentation.
Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W
2014-01-01
High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
Airline Passenger Profiling Based on Fuzzy Deep Machine Learning.
Zheng, Yu-Jun; Sheng, Wei-Guo; Sun, Xing-Ming; Chen, Sheng-Yong
2017-12-01
Passenger profiling plays a vital part in commercial aviation security, but classical methods become very inefficient in handling the rapidly increasing amounts of electronic records. This paper proposes a deep learning approach to passenger profiling. The center of our approach is a Pythagorean fuzzy deep Boltzmann machine (PFDBM), whose parameters are expressed by Pythagorean fuzzy numbers such that each neuron can learn how a feature affects the production of the correct output from both the positive and negative sides. We propose a hybrid algorithm combining a gradient-based method and an evolutionary algorithm for training the PFDBM. Based on the novel learning model, we develop a deep neural network (DNN) for classifying normal passengers and potential attackers, and further develop an integrated DNN for identifying group attackers whose individual features are insufficient to reveal the abnormality. Experiments on data sets from Air China show that our approach provides much higher learning ability and classification accuracy than existing profilers. It is expected that the fuzzy deep learning approach can be adapted for a variety of complex pattern analysis tasks.
Janssens, K; Van Brecht, A; Zerihun Desta, T; Boonen, C; Berckmans, D
2004-06-01
The present paper outlines a modeling approach developed to model the internal dynamics of heat and moisture transfer in an imperfectly mixed ventilated airspace. The modeling approach, which combines the classical heat and moisture balance differential equations with the use of experimental time-series data, provides a physically meaningful description of the process and is very useful for model-based control purposes. The paper illustrates how the modeling approach has been applied to a ventilated laboratory test room with internal heat and moisture production. The results are evaluated and some valuable suggestions for future research are put forward. The modeling approach outlined in this study provides an ideal form for advanced model-based control system design. The relatively low number of parameters makes it well suited for model-based control purposes, as a limited number of identification experiments is sufficient to determine these parameters. The model concept provides information about the air quality and airflow pattern in an arbitrary building. By using this model as a simulation tool, the indoor air quality and airflow pattern can be optimized.
Classical and numerical approaches to determining V-section band clamp axial stiffness
NASA Astrophysics Data System (ADS)
Barrans, Simon M.; Khodabakhshi, Goodarz; Muller, Matthias
2015-01-01
V-band clamp joints are used in a wide range of applications to connect circular flanges, for example on ducts, pipes and turbocharger housings. Previous studies and research on V-bands are either purely empirical or analytical, with limited applicability across the variety of V-band designs and working conditions. In this paper, models of the V-band are developed based on the classical theory of solid mechanics and the finite element method to study the behaviour of V-bands under axial loading conditions. The good agreement between results from the developed FEA and the classical model supports the suitability of the latter for modelling V-band joints with diameters greater than 110 mm under axial loading. The results from both models suggest that the axial stiffness for this V-band cross-section reaches a peak value for V-bands with a radius of approximately 150 mm across a wide range of coefficients of friction. Also, it is shown that the coefficient of friction and the wedge angle have a significant effect on the axial stiffness of V-bands.
Classical emergence of intrinsic spin-orbit interaction of light at the nanoscale
NASA Astrophysics Data System (ADS)
Vázquez-Lozano, J. Enrique; Martínez, Alejandro
2018-03-01
Traditionally, in macroscopic geometrical optics intrinsic polarization and spatial degrees of freedom of light can be treated independently. However, at the subwavelength scale these properties appear to be coupled together, giving rise to the spin-orbit interaction (SOI) of light. In this work we address theoretically the classical emergence of the optical SOI at the nanoscale. By means of a full-vector analysis involving spherical vector waves we show that the spin-orbit factorizability condition, accounting for the mutual influence between the amplitude (spin) and phase (orbit), is fulfilled only in the far-field limit. On the other side, in the near-field region, an additional relative phase introduces an extra term that hinders the factorization and reveals an intricate dynamical behavior according to the SOI regime. As a result, we find a suitable theoretical framework able to capture analytically the main features of intrinsic SOI of light. Besides allowing for a better understanding into the mechanism leading to its classical emergence at the nanoscale, our approach may be useful to design experimental setups that enhance the response of SOI-based effects.
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign versus malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal-phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using the classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
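One simple way such a mutual-information criterion can be applied to BoVW histograms is sketched below with scikit-learn's estimator; the paper's task-driven dictionary learning is richer than this post-hoc ranking.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_task_words(bovw_histograms, labels, k=200):
    """Rank visual words by mutual information with the class label and
    keep the k most task-relevant ones (one simple MI-based criterion).
    bovw_histograms: (n_images, n_words) count matrix, labels: class ids."""
    mi = mutual_info_classif(bovw_histograms, labels, random_state=0)
    keep = np.argsort(mi)[::-1][:k]          # indices of top-k words
    return bovw_histograms[:, keep], keep
```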
Universal Local Symmetries and Nonsuperposition in Classical Mechanics
NASA Astrophysics Data System (ADS)
Gozzi, Ennio; Pagani, Carlo
2010-10-01
In the Hilbert space formulation of classical mechanics, pioneered by Koopman and von Neumann, there are potentially more observables than in the standard approach to classical mechanics. In this Letter, we show that actually many of those extra observables are not invariant under a set of universal local symmetries which appear once the Koopman and von Neumann formulation is extended to include the evolution of differential forms. Because of their noninvariance, those extra observables have to be removed. This removal makes the superposition of states in the Koopman and von Neumann formulation, and as a consequence also in classical mechanics, impossible.
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
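A minimal sketch of the attention-seeded initialization idea, assuming a crude color-contrast saliency surrogate in place of the authors' visual attention method, and scikit-image's morphological Chan-Vese solver in place of their active contour:

```python
# Sketch: saliency-seeded morphological active contour (illustrative only).
import numpy as np
from skimage import data, img_as_float
from skimage.color import rgb2lab
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

img = img_as_float(data.astronaut())
lab = rgb2lab(img)

# Crude attention map: distance of each pixel's color from the mean color.
saliency = np.linalg.norm(lab - lab.mean(axis=(0, 1)), axis=2)
saliency = gaussian(saliency, sigma=5)

# Initialize the level set from the most salient region instead of a fixed
# checkerboard, reducing the dependence on initialization noted above.
init = saliency > np.percentile(saliency, 90)
seg = morphological_chan_vese(saliency, 100, init_level_set=init, smoothing=2)
```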
New approach to wireless data communication in a propagation environment
NASA Astrophysics Data System (ADS)
Hunek, Wojciech P.; Majewski, Paweł
2017-10-01
This paper presents a new idea for perfect signal reconstruction in multivariable wireless communication systems with different numbers of transmitting and receiving antennas. The proposed approach is based on the polynomial matrix S-inverse associated with Smith factorization. Crucially, the above-mentioned inverse implements so-called degrees of freedom. A simulation study confirms that these degrees of freedom make it possible to minimize the negative impact of the propagation environment, increasing the robustness of the whole signal reconstruction process. The parasitic drawbacks in the form of dynamic ISI and ICI effects can now be eliminated in a framework described by polynomial calculus. The new method therefore not only reduces costs but, more importantly, potentially yields systems with lower energy consumption than classical ones. To show the potential of the new approach, simulation studies were performed with the authors' simulator based on the well-known OFDM technique.
Model-Based Design of Tree WSNs for Decentralized Detection †
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-01-01
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989
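For context, the classical fusion-center decision rule alluded to above is the Chair-Varshney log-likelihood-ratio rule; a minimal sketch follows, with illustrative sensor operating points that are not taken from the paper:

```python
# Sketch: Chair-Varshney fusion of binary local decisions.
import numpy as np

p_d = np.array([0.90, 0.80, 0.85])  # per-sensor detection probabilities
p_f = np.array([0.05, 0.10, 0.08])  # per-sensor false-alarm probabilities
u = np.array([1, 0, 1])             # local decisions received at the fusion center

# Log-likelihood-ratio weight of each local decision.
llr = np.where(u == 1, np.log(p_d / p_f), np.log((1 - p_d) / (1 - p_f)))
threshold = 0.0                     # equal priors, minimum-error-probability test
global_decision = int(llr.sum() > threshold)
print(global_decision)
```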
Solarin, Sakiru Adebola; Gil-Alana, Luis Alberiko; Al-Mulali, Usama
2018-04-13
In this article, we have examined the hypothesis of convergence of renewable energy consumption in 27 OECD countries. However, instead of relying on classical techniques, which are based on the dichotomy between stationarity I(0) and nonstationarity I(1), we consider a more flexible approach based on fractional integration. We employ both parametric and semiparametric techniques. Using parametric methods, evidence of convergence is found in the cases of Mexico, Switzerland and Sweden along with the USA, Portugal, the Czech Republic, South Korea and Spain, and employing semiparametric approaches, we found evidence of convergence in all these eight countries along with Australia, France, Japan, Greece, Italy and Poland. For the remaining 13 countries, even though the orders of integration of the series are smaller than one in all cases except Germany, the confidence intervals are so wide that we cannot reject the hypothesis of unit roots thus not finding support for the hypothesis of convergence.
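A minimal sketch of a semiparametric estimate of the fractional integration order d, using the Geweke-Porter-Hudak log-periodogram regression on a simulated series; the estimators and bandwidth choices in the paper may differ:

```python
# Sketch: GPH log-periodogram estimate of the fractional order d.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000))        # toy I(1) series; d should be near 1

T = len(x)
freqs = 2 * np.pi * np.arange(1, T // 2) / T
periodogram = np.abs(np.fft.fft(x - x.mean())[1:T // 2]) ** 2 / (2 * np.pi * T)

m = int(T ** 0.5)                           # a common bandwidth choice, m = sqrt(T)
y = np.log(periodogram[:m])
z = -np.log(4 * np.sin(freqs[:m] / 2) ** 2)
d_hat, _ = np.polyfit(z, y, 1)              # regression slope estimates d
print(d_hat)
```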
Liu, Jian; Miller, William H
2011-03-14
We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, equilibrium Liouville dynamics (ELD), proposed in Paper I, is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high-temperature, and harmonic limits. Various methods are presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. The results suggest that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.
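As a classical-limit illustration of the quantity being approximated, the sketch below estimates a position autocorrelation function from an ensemble of classical trajectories; the potential, the approximate Boltzmann sampling, and all parameters are illustrative stand-ins, not the ELD algorithm itself:

```python
# Sketch: classical position autocorrelation <x(0)x(t)> from trajectories.
import numpy as np

rng = np.random.default_rng(2)
beta, dt, nsteps, ntraj = 1.0, 0.01, 2000, 500

def force(x):
    return -x - 0.3 * x**3                  # mildly anharmonic force (illustrative)

x = rng.normal(scale=1.0 / np.sqrt(beta), size=ntraj)  # approximate Boltzmann draw
p = rng.normal(scale=1.0 / np.sqrt(beta), size=ntraj)
x0 = x.copy()
cxx = np.empty(nsteps)

for step in range(nsteps):                  # velocity Verlet, unit mass
    cxx[step] = np.mean(x0 * x)
    p += 0.5 * dt * force(x)
    x += dt * p
    p += 0.5 * dt * force(x)

print(cxx[:5].round(3))
```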
Identification of Microorganisms by Modern Analytical Techniques.
Buszewski, Bogusław; Rogowska, Agnieszka; Pomastowski, Paweł; Złoch, Michał; Railean-Plugaru, Viorica
2017-11-01
Rapid detection and identification of microorganisms is a challenging and important aspect in a wide range of fields, from medical to industrial, affecting human lives. Unfortunately, classical methods of microorganism identification are based on time-consuming and labor-intensive approaches. Screening techniques require the rapid and cheap grouping of bacterial isolates; however, modern bioanalytics demand comprehensive bacterial studies at a molecular level. Modern approaches for the rapid identification of bacteria use molecular techniques, such as 16S ribosomal RNA gene sequencing based on polymerase chain reaction or electromigration, especially capillary zone electrophoresis and capillary isoelectric focusing. However, there are still several challenges with the analysis of microbial complexes using electromigration technology, such as uncontrolled aggregation and/or adhesion to the capillary surface. Thus, an approach using capillary electrophoresis of microbial aggregates with UV and matrix-assisted laser desorption ionization time-of-flight MS detection is presented.
New Approaches to Minimum-Energy Design of Integer- and Fractional-Order Perfect Control Algorithms
NASA Astrophysics Data System (ADS)
Hunek, Wojciech P.; Wach, Łukasz
2017-10-01
In this paper, new methods for the energy-based minimization of perfect control inputs are presented. For this purpose, multivariable integer- and fractional-order models are applied, which can describe a variety of real-world processes. Up to now, classical approaches have been used in the form of minimum-norm/least-squares inverses. However, these tools do not guarantee control corresponding to optimal input energy. Therefore, a new class of inverse-based methods has been introduced, in particular the new σ- and H-inverses of nonsquare parameter and polynomial matrices. The proposed solution remarkably outperforms the typical ones in systems where the control runs can be understood in terms of different physical quantities, for example heat and mass transfer, electricity, etc. A simulation study performed in the Matlab/Simulink environment confirms the considerable potential of the new energy-based approaches.
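The nonuniqueness these methods exploit can already be seen for constant nonsquare matrices: every right inverse of a full-row-rank T has the form T^R = T+ + (I - T+T)Z, and the free matrix Z is the design degree of freedom. A minimal numerical sketch follows, using the plain Euclidean norm, for which the Moore-Penrose choice Z = 0 happens to be energy-optimal; the paper's σ- and H-inverses target more general energy measures:

```python
# Sketch: degrees of freedom in right inverses of a nonsquare matrix.
import numpy as np

rng = np.random.default_rng(3)
T = rng.normal(size=(2, 4))             # more inputs than outputs (nonsquare)
Tp = T.T @ np.linalg.inv(T @ T.T)       # minimum-norm (Moore-Penrose) right inverse
P = np.eye(4) - Tp @ T                  # projector onto the null space of T

y = rng.normal(size=2)
for Z in (np.zeros((4, 2)), rng.normal(size=(4, 2))):
    u = (Tp + P @ Z) @ y                # a valid control: T @ u == y for any Z
    print(np.allclose(T @ u, y), np.linalg.norm(u))
```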
The choice of sample size: a mixed Bayesian / frequentist approach.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John
2009-04-01
Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
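A minimal sketch of the decision-theoretic core: pick n to maximize the expected net benefit when the effect follows a prior and the regulator applies a frequentist test. The gain, cost, variance, and prior values below are placeholders, not those of the cited works:

```python
# Sketch: sample size by maximizing expected net benefit (Monte Carlo).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
gain, cost_per_subject, sigma = 1e6, 2e3, 1.0
prior_mean, prior_sd = 0.3, 0.1

def expected_net_benefit(n, nsim=20000):
    theta = rng.normal(prior_mean, prior_sd, size=nsim)   # prior draws
    xbar = rng.normal(theta, sigma / np.sqrt(n))          # trial outcome
    z = xbar / (sigma / np.sqrt(n))
    p_success = np.mean(z > stats.norm.ppf(0.975))        # regulator's test
    return p_success * gain - cost_per_subject * n

ns = np.arange(10, 500, 10)
enb = [expected_net_benefit(n) for n in ns]
print(ns[int(np.argmax(enb))])                            # optimal trial size
```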
Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software
NASA Technical Reports Server (NTRS)
Tilton, James C.
2003-01-01
A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient, recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-Linux software. Results with Landsat TM data are included, comparing RHSEG with classic region growing.
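A toy sketch of the HSWO idea on a 1-D signal: repeatedly perform the adjacent-region merge that least increases the total squared error, recording the hierarchy along the way. This is only a didactic stand-in, not the HSEG/RHSEG implementation:

```python
# Sketch: hierarchical stepwise-optimization region merging on a 1-D signal.
import numpy as np

signal = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1])
regions = [[i] for i in range(len(signal))]   # start: one region per pixel

def merge_cost(a, b):
    sse = lambda v: ((v - v.mean()) ** 2).sum()
    return sse(signal[a + b]) - sse(signal[a]) - sse(signal[b])

hierarchy = [list(map(list, regions))]        # the segmentation hierarchy
while len(regions) > 1:
    costs = [merge_cost(regions[i], regions[i + 1])
             for i in range(len(regions) - 1)]
    i = int(np.argmin(costs))                 # cheapest adjacent merge
    regions[i:i + 2] = [regions[i] + regions[i + 1]]
    hierarchy.append(list(map(list, regions)))
```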
Cloud photogrammetry with dense stereo for fisheye cameras
NASA Astrophysics Data System (ADS)
Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens
2016-11-01
We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km² using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows us to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type, and spacing can be derived and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view in a single image. However, the computation of dense 3-D information is more complicated, and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes the internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of the clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and with the cloud-base height estimated by a lidar-ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
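The overall pipeline shape (rectify the fisheye imagery, run an off-the-shelf dense matcher, reproject to 3-D) can be sketched with OpenCV's fisheye model. The calibration values and image file names below are placeholders, and the paper uses its own epipolar rectification model for hemispheric imagers rather than OpenCV's:

```python
# Sketch: fisheye rectification + out-of-the-box dense stereo matching.
import numpy as np
import cv2

K1 = K2 = np.array([[400.0, 0, 640], [0, 400.0, 480], [0, 0, 1]])
D1 = D2 = np.zeros((4, 1))                      # fisheye distortion (placeholder)
R = np.eye(3)                                   # relative rotation (placeholder)
tvec = np.array([[1.0], [0.0], [0.0]])          # stereo baseline (placeholder)
size = (1280, 960)

R1, R2, P1, P2, Q = cv2.fisheye.stereoRectify(
    K1, D1, K2, D2, size, R, tvec, flags=cv2.CALIB_ZERO_DISPARITY)
map1x, map1y = cv2.fisheye.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.fisheye.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)

# Hypothetical input images from the two sky imagers.
imgL = cv2.remap(cv2.imread("sky_left.png", 0), map1x, map1y, cv2.INTER_LINEAR)
imgR = cv2.remap(cv2.imread("sky_right.png", 0), map2x, map2y, cv2.INTER_LINEAR)
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(imgL, imgR).astype(np.float32) / 16.0
points3d = cv2.reprojectImageTo3D(disparity, Q)  # dense 3-D cloud points
```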
NASA Astrophysics Data System (ADS)
Sudarmin; Febu, R.; Nuswowati, M.; Sumarni, W.
2017-04-01
The ethnoscience approach is an interesting area of research today. The purpose of this research is to develop an ethnoscience approach and an ethnoscience-based module on the theme of additives, and to assess the feasibility and effectiveness of the module in improving students' learning outcomes and entrepreneurial character. This is a Research and Development (R&D) study consisting of four stages: define, design, development, and implementation. The subjects of this study were students of MTs Maarif NU Brebes. Data were analyzed descriptively, both qualitatively and quantitatively. The results showed that the ethnoscience approach and the module on the theme of additives were declared feasible according to the National Education Standards Agency (BNSP), with average validation percentages for content feasibility, language feasibility, and presentation feasibility of 94.3%, 86%, and 92%, respectively, meeting the criteria of very feasible. Application of the ethnoscience-based module on additives improved classical cognitive learning completeness to 90.63%, and the improvement in learning outcomes was categorized based on N-gain scores. The ethnoscience approach and module were also able to improve the entrepreneurial character of students. Based on these results, it is concluded that the ethnoscience approach and the ethnoscience-based module on additives are effective in improving learning outcomes and students' entrepreneurship.
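For reference, the N-gain score mentioned above is conventionally computed as Hake's normalized gain; a minimal sketch with made-up class-average scores:

```python
# Sketch: Hake's normalized gain (N-gain) from pre/post test scores.
pre, post = 62.0, 88.0                   # class-average scores in percent (made up)
n_gain = (post - pre) / (100.0 - pre)    # conventionally: >=0.7 high, 0.3-0.7 medium
print(round(n_gain, 2))
```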
Quantum enhanced superresolution microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Oron, Dan; Tenne, Ron; Israel, Yonatan; Silberberg, Yaron
2017-02-01
Far-field optical microscopy beyond the Abbe diffraction limit, making use of nonlinear excitation (e.g. STED), or temporal fluctuations in fluorescence (PALM, STORM, SOFI) is already a reality. In contrast, overcoming the diffraction limit using non-classical properties of light is very difficult to achieve due to the fragility of quantum states of light. Here, we experimentally demonstrate superresolution microscopy based on quantum properties of light naturally emitted by fluorophores used as markers in fluorescence microscopy. Our approach is based on photon antibunching, the tendency of fluorophores to emit photons one by one rather than in bursts. Although a distinctively quantum phenomenon, antibunching is readily observed in most common fluorophores even at room temperature. This nonclassical resource can be utilized directly to enhance the imaging resolution, since the non-classical far-field intensity correlations induced by antibunching carry high spatial frequency information on the spatial distribution of emitters. Detecting photon statistics simultaneously in the entire field of view, we were able to detect non-classical correlations of the second and third order, and reconstructed images with resolution significantly beyond the diffraction limit. Alternatively, we demonstrate the utilization of antibunching for augmenting the capabilities of localization-based superresolution imaging in the presence of multiple emitters, using a novel detector comprised of an array of single photon detectors connected to a densely packed fiber bundle. These features allow us to enhance the spatial and temporal resolution with which multiple emitters can be imaged compared with other techniques that rely on CCD cameras.
Quantum stochastic walks on networks for decision-making.
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-31
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process reveals to be also a measure of the process' degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.
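A minimal numerical sketch of a quantum stochastic walk of this kind: a generator that interpolates between unitary dynamics and a classical random walk on a toy 3-node network, with the stationary node occupations read off the generator's null space. The network and mixing parameter are illustrative; the paper derives its network from Luce's response probabilities:

```python
# Sketch: stationary distribution of a quantum stochastic walk (toy network).
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)      # toy 3-node network
n = A.shape[0]
H = A.copy()                                # hopping Hamiltonian
P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transition matrix
w = 0.5                                     # 0 = purely quantum, 1 = purely classical
I = np.eye(n)

# Vectorized (row-major) Lindblad-type generator: -i(1-w)[H, rho] + w D(rho).
gen = -1j * (1 - w) * (np.kron(H, I) - np.kron(I, H.T))
for j in range(n):                          # dissipators L_jk = sqrt(P_jk)|j><k|
    for k in range(n):
        if P[j, k] == 0:
            continue
        Ljk = np.zeros((n, n)); Ljk[j, k] = np.sqrt(P[j, k])
        LL = Ljk.T @ Ljk
        gen = gen + w * (np.kron(Ljk, Ljk.conj())
                         - 0.5 * (np.kron(LL, I) + np.kron(I, LL.T)))

# Stationary state: null vector of the generator, reshaped to a density matrix.
vals, vecs = np.linalg.eig(gen)
rho = vecs[:, np.argmin(np.abs(vals))].reshape(n, n)
rho = (rho + rho.conj().T) / 2
rho /= np.trace(rho)
print(np.real(np.diag(rho)))                # stationary choice probabilities
```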
Speech watermarking: an approach for the forensic analysis of digital telephonic recordings.
Faundez-Zanuy, Marcos; Lucena-Molina, Jose J; Hagmüller, Martin
2010-07-01
In this article, the authors discuss the problem of forensic authentication of digital audio recordings. Although forensic audio has been addressed in several articles, the existing approaches focus on analog magnetic recordings, which are now less prevalent given the large number of digital recorders available on the market (optical, solid state, hard disks, etc.). An approach based on digital signal processing, consisting of spread spectrum techniques for speech watermarking, is presented. This approach has the advantage that the authentication is based on the signal itself rather than on the recording format. It is thus valid for the usual recording devices in police-controlled telephone intercepts. In addition, our proposal allows for the introduction of relevant information such as the recording date and time and all other relevant data (this is not always possible with classical systems). Our experimental results reveal that the speech watermarking procedure does not interfere in a significant way with subsequent forensic speaker identification.
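A minimal sketch of the underlying spread-spectrum idea: add a key-seeded pseudo-noise sequence at low amplitude and detect it by correlation. Payload coding, perceptual shaping, and synchronization in the actual system are substantially more involved:

```python
# Sketch: additive spread-spectrum watermark with correlation detection.
import numpy as np

rng = np.random.default_rng(5)
fs, dur = 8000, 2.0
speech = rng.normal(size=int(fs * dur))      # unit-variance stand-in for speech

key = 1234                                   # shared secret seeding the PN sequence
pn = np.sign(np.random.default_rng(key).normal(size=speech.size))
alpha = 0.05                                 # embedding strength (kept small)
watermarked = speech + alpha * pn

# Detection: normalized correlation against the keyed PN sequence.
score = watermarked @ pn / (np.linalg.norm(watermarked) * np.linalg.norm(pn))
threshold = 3.0 / np.sqrt(pn.size)           # ~3-sigma above the unmarked floor
print(score, score > threshold)
```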
Transformation based endorsement systems
NASA Technical Reports Server (NTRS)
Sudkamp, Thomas
1988-01-01
Evidential reasoning techniques classically represent support for a hypothesis by a numeric value or an evidential interval. The combination of support is performed by an arithmetic rule which often requires restrictions to be placed on the set of possibilities. These assumptions usually require the hypotheses to be exhaustive and mutually exclusive. Endorsement-based classification systems represent support for the alternatives symbolically rather than numerically. A framework for constructing endorsement systems is presented in which transformations are defined to generate and update the knowledge base. The interaction of the knowledge base and transformations produces a non-monotonic reasoning system. Two endorsement-based reasoning systems are presented to demonstrate the flexibility of the transformational approach for reasoning with ambiguous and inconsistent information.
Attitude control of the space construction base: A modular approach
NASA Technical Reports Server (NTRS)
Oconnor, D. A.
1982-01-01
A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The system dynamics are decomposed into two parts corresponding to the base and the module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate the structural changes that occur as the module is added, and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis, and controller effectiveness is verified by computer simulation.
Conversion of amino-acid sequence in proteins to classical music: search for auditory patterns
2007-01-01
We have converted genome-encoded protein sequences into musical notes to reveal auditory patterns without compromising musicality. We derived a reduced range of 13 base notes by pairing similar amino acids and distinguishing them using variations of three-note chords and codon distribution to dictate rhythm. The conversion will help make genomic coding sequences more approachable for the general public, young children, and vision-impaired scientists. PMID:17477882
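The mapping idea can be sketched as follows; this particular residue pairing and note assignment are hypothetical placeholders, not the published mapping, and the chord and rhythm rules are omitted:

```python
# Sketch: protein sequence -> reduced 13-note melodic skeleton.
PAIRS = {  # chemically similar residues share a base note (illustrative grouping)
    "note_C4": "DE", "note_D4": "KR", "note_E4": "ST", "note_F4": "NQ",
    "note_G4": "IL", "note_A4": "FY", "note_B4": "VA", "note_C5": "MC",
    "note_D5": "G",  "note_E5": "P",  "note_F5": "H",  "note_G5": "W",
    "note_A5": "X",  # placeholder 13th note for unknown residues
}
AA_TO_NOTE = {aa: note for note, group in PAIRS.items() for aa in group}

def sequence_to_notes(seq):
    """Map a protein sequence to its reduced-note skeleton."""
    return [AA_TO_NOTE.get(aa, "note_A5") for aa in seq.upper()]

print(sequence_to_notes("MKTAYIAKQR"))
```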
A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.
1999-01-01
Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.
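Once the mesh is linearized, repartitioning reduces to splitting a 1-D sequence; a minimal sketch follows, with a random permutation standing in for the self-avoiding walk and made-up element weights:

```python
# Sketch: partition a linearized mesh into equally weighted contiguous chunks.
import numpy as np

rng = np.random.default_rng(6)
walk_order = rng.permutation(20)            # stand-in for the self-avoiding walk
weights = rng.uniform(0.5, 2.0, size=20)    # per-element work estimates

def partition_walk(order, w, nparts):
    cum = np.cumsum(w[order])
    bounds = np.searchsorted(cum, cum[-1] * np.arange(1, nparts) / nparts)
    return np.split(order, bounds)          # contiguous chunks of the walk

for p, part in enumerate(partition_walk(walk_order, weights, 4)):
    print(p, part, weights[part].sum().round(2))
```

Because each processor receives a contiguous piece of the walk, a later repartition after mesh adaptation moves only elements near chunk boundaries, limiting data migration.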
Gold nanoparticle-mediated laser stimulation causes a complex stress signal in neuronal cells
NASA Astrophysics Data System (ADS)
Johannsmeier, Sonja; Heeger, Patrick; Terakawa, Mitsuhiro; Kalies, Stefan; Heisterkamp, Alexander; Ripken, Tammo; Heinemann, Dag
2017-07-01
Gold nanoparticle mediated laser stimulation of neuronal cells allows for cell activation on a single-cell level. It could therefore be considered an alternative to classical electric neurostimulation. The physiological impact of this new approach has not been intensively studied so far. Here, we investigate the targeted cell's reaction to a laser stimulus based on its calcium response. A complex cellular reaction involving multiple sources has been revealed.
Zinser, Max J; Sailer, Hermann F; Ritter, Lutz; Braumann, Bert; Maegele, Marc; Zöller, Joachim E
2013-12-01
Advances in computing and imaging have permitted the adoption of 3-dimensional (3D) virtual planning protocols in orthognathic surgery, which may allow a paradigm shift if the virtual planning can be transferred properly. The purpose of this investigation was to compare the versatility and precision of innovative computer-aided design and computer-aided manufacturing (CAD/CAM) surgical splints, intraoperative navigation, and "classic" intermaxillary occlusal splints for the surgical transfer of virtual orthognathic planning. The protocols consisted of maxillofacial imaging, diagnosis, virtual orthognathic planning, and surgical planning transfer using newly designed CAD/CAM splints (approach A), navigation (approach B), and intermaxillary occlusal splints (approach C). In this prospective observational study, all patients underwent bimaxillary osteotomy. Eight patients were treated using approach A, 10 using approach B, and 12 using approach C. These techniques were evaluated by applying 13 hard and 7 soft tissue parameters to compare the virtual orthognathic planning (T0) with the postoperative result (T1) using 3D cephalometry and image fusion (ΔT1 vs T0). The highest precision (ΔT1 vs T0) for the maxillary planning transfer was observed with CAD/CAM splints (<0.23 mm; P > .05), followed by surgical "waferless" navigation (<0.61 mm; P < .05) and classic intermaxillary occlusal splints (<1.1 mm; P < .05). Only the innovative CAD/CAM splints kept the condyles in their central position in the temporomandibular joint. However, no technique enabled a precise prediction of the mandibular and soft tissue outcomes. CAD/CAM splints and surgical navigation provide a reliable, innovative, and precise approach for the transfer of virtual orthognathic planning. These computer-assisted techniques may offer an alternative to the use of classic intermaxillary occlusal splints. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián
2009-01-01
This paper is an experimental study of the use of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the passage of time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time-based, hybrid, and event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each of the hybrid and pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are analyzed independently. The experimental study provides new evidence of the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when error-free control of the process is not a hard requirement. PMID:22399975
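A minimal sketch of one such strategy, send-on-delta sampling around a PI loop for a single tank; the plant model, tuning, and threshold are illustrative, not the paper's laboratory values:

```python
# Sketch: send-on-delta event-based PI level control of a single tank.
import numpy as np

dt, delta, sp = 0.1, 0.02, 0.5              # step, event threshold, set-point
kp, ki = 2.0, 0.5
level, integral, u = 0.0, 0.0, 0.0
last_sent, last_k, events = np.inf, 0, 0

for k in range(2000):
    # Plant: inflow proportional to u, gravity-driven outflow.
    level += dt * (0.5 * u - 0.2 * np.sqrt(max(level, 0.0)))

    # Sensor transmits only when the level moved by more than delta.
    if abs(level - last_sent) > delta:
        error = sp - level
        integral += error * (k - last_k) * dt   # integrate over inter-event time
        u = float(np.clip(kp * error + ki * integral, 0.0, 1.0))
        last_sent, last_k = level, k
        events += 1

print(events, round(level, 3))              # messages sent vs. achieved level
```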
Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin
2015-01-01
We investigated the effect of different evaluation strategies for bioequivalence trials with highly variable drugs (HVDs) on the resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of the EMA were compared. Simulation studies were performed under the assumption of single-dose drug administration while varying the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases in the empirical α-error (≈7.5%). For the EMA approach we noted an unexpectedly large increase in the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept, the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted as bioequivalent that would not have been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
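A minimal sketch of the kind of simulation involved: estimating the empirical type I error of unscaled average bioequivalence (two one-sided tests) at the acceptance boundary, here with a simplified parallel-group layout instead of the 2×2 crossover and illustrative variability:

```python
# Sketch: empirical type I error of the TOST procedure by Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, cv, nsim = 24, 0.5, 20000                 # subjects/arm, CV, replicates
sigma = np.sqrt(np.log(1.0 + cv**2))         # log-scale SD for a lognormal model
true_ratio = 1.25                            # on the acceptance boundary: H0 true

rejections = 0
for _ in range(nsim):
    t = rng.normal(np.log(true_ratio), sigma, size=n)   # log(test) responses
    r = rng.normal(0.0, sigma, size=n)                  # log(reference) responses
    diff = t.mean() - r.mean()
    se = np.sqrt(t.var(ddof=1) / n + r.var(ddof=1) / n)
    tcrit = stats.t.ppf(0.95, 2 * (n - 1))
    lo, hi = diff - tcrit * se, diff + tcrit * se       # 90% CI, equivalent to TOST
    rejections += (lo > np.log(0.8)) and (hi < np.log(1.25))

print(rejections / nsim)                     # empirical alpha, should be <= 0.05
```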
Fragoulakis, Vasilios; Mitropoulou, Christina; van Schaik, Ron H; Maniadakis, Nikolaos; Patrinos, George P
2016-05-01
Genomic Medicine aims to improve therapeutic interventions and diagnostics and the quality of life of patients, but also to rationalize healthcare costs. To reach this goal, careful assessment and identification of evidence gaps for public health genomics priorities are required so that a more efficient healthcare environment is created. Here, we propose a public health genomics-driven approach that adjusts the classical healthcare decision-making process with an alternative methodological approach to cost-effectiveness analysis, which is particularly helpful for Genomic Medicine interventions. By combining classical cost-effectiveness analysis with budget constraints, social preferences, and patient ethics, we demonstrate the application of this model, the Genome Economics Model (GEM), based on a previously reported genome-guided intervention in a developing-country environment. The model and the attendant rationale provide a practical guide by which all major healthcare stakeholders could ensure the sustainability of funding for genome-guided interventions, their adoption and coverage by health insurance funds, and the prioritization of Genomic Medicine research, development, and innovation, given restricted budgets, particularly in developing countries and low-income healthcare settings in developed countries. The implications of the GEM for policy makers interested in Genomic Medicine and in new health technology and innovation assessment are also discussed.
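The classical cost-effectiveness core that the GEM extends can be sketched in a few lines: an incremental cost-effectiveness ratio screened against a willingness-to-pay threshold and, separately, a hard budget constraint. All figures below are placeholders, not values from the cited intervention:

```python
# Sketch: ICER screening plus a budget-constraint check.
cost_standard, qaly_standard = 2000.0, 8.0     # per-patient cost and effect
cost_genomic, qaly_genomic = 3500.0, 8.6       # genome-guided intervention
wtp_threshold = 3000.0                         # payer threshold per QALY
annual_budget, eligible_patients = 1.2e6, 500

icer = (cost_genomic - cost_standard) / (qaly_genomic - qaly_standard)
affordable = cost_genomic * eligible_patients <= annual_budget
print(round(icer, 1), icer <= wtp_threshold, affordable)
```

With these made-up numbers the intervention is cost-effective (ICER below the threshold) yet unaffordable within the budget, illustrating why cost-effectiveness alone does not settle the adoption decision.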
A quantitative approach to evolution of music and philosophy
NASA Astrophysics Data System (ADS)
Vieira, Vilson; Fabbri, Renato; Travieso, Gonzalo; Oliveira, Osvaldo N., Jr.; da Fontoura Costa, Luciano
2012-08-01
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between hard sciences and humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample data set, with which hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians in that classical music evolved according to a master-apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
Eagle's syndrome-A non-perceived differential diagnosis of temporomandibular disorder.
Thoenissen, P; Bittermann, G; Schmelzeisen, R; Oshima, T; Fretwurst, T
2015-01-01
This article presents a case of the classic styloid syndrome and shows that panoramic imaging and ultrasound can be an alternative to computed tomography; in addition, the endoscope-assisted extraoral approach using CT-based navigation is useful. Eagle's syndrome is an aggregate of symptoms described by Eagle in 1937. He described two forms: the classic styloid syndrome, in which elongation of the styloid process causes pain, and the stylo-carotid-artery syndrome, which is responsible for transient ischemic attack or stroke. Using the example of a 66-year-old male patient suffering from long-term pain, we explain our diagnostic and surgical approach. After dissecting the styloid process on the right side via an extraoral approach, the pain ceased, and the patient could be discharged without any recurrence of the pain to date. Eagle's syndrome, with its similar symptoms, is rather difficult to differentiate from temporomandibular joint disorders (TMD), but it can easily be excluded from the possible differential diagnoses of TMD using panoramic radiographs and ultrasound. Making use of low-cost and easily accessible diagnostic workup techniques can reveal this particular cause of chronic pain restricting quality of life, thereby allowing differentiation from the TMD symptom complex. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Adab and its significance for an Islamic medical ethics.
Sartell, Elizabeth; Padela, Aasim I
2015-09-01
Discussions of Islamic medical ethics tend to focus on Sharī'ah-based, or obligation-based, ethics. However, limiting Islamic medical ethics discourse to the derivation of religious duties ignores discussions about moulding an inner disposition that inclines towards adherence to the Sharī'ah. In classical Islamic intellectual thought, such writings are the concern of adab literature. In this paper, we call for a renewal of adabi discourse as part of Islamic medical ethics. We argue that adab complements Sharī'ah-based writings to generate a more holistic vision of Islamic medical ethics by supplementing an obligation-based approach with a virtue-based approach. While Sharī'ah-based medical ethics focuses primarily on the moral status of actions, adab literature adds to this genre by addressing the moral formation of the agent. By complementing Sharī'ah-based approaches with adab-focused writings, Islamic medical ethics discourse can describe the relationship between the agent and the action, within a moral universe informed by the Islamic intellectual tradition. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2017-12-01
Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrate cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are used to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well suited to quantum annealing, but challenging for classical computers.
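A minimal sketch of how a binary high/low permeability inversion becomes annealer-ready: expand the data misfit of a linearized forward model into QUBO form (the objective a quantum annealer minimizes) and, for checking, minimize it exhaustively. The forward model and data below are made-up stand-ins, not the Los Alamos site model:

```python
# Sketch: binary permeability inversion cast as a QUBO and solved by brute force.
import itertools
import numpy as np

rng = np.random.default_rng(8)
ncells = 8
k_low, k_high = 1.0, 10.0
G = rng.normal(size=(12, ncells))            # linearized forward model (stand-in)
x_true = rng.integers(0, 2, size=ncells)     # true high/low pattern
d_obs = G @ (k_low + (k_high - k_low) * x_true)

# Misfit ||G k(x) - d||^2 expands into x^T Q x + const for binary x.
a = G * (k_high - k_low)
r = G @ np.full(ncells, k_low) - d_obs
Q = a.T @ a + 2 * np.diag(a.T @ r)           # x_i^2 = x_i folds linear terms in

best = min(itertools.product([0, 1], repeat=ncells),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
print(np.array_equal(np.asarray(best), x_true))  # recovers the true pattern
```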
APPROACH TO EQUILIBRIUM OF A QUANTUM PLASMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1961-01-01
The treatment of irreversible processes in a classical plasma (R. Balescu, Phys. Fluids 3, 62 (1960)) was extended to a gas of charged particles obeying quantum statistics. The various contributions to the equation of evolution for the reduced one-particle Wigner function were written in a form analogous to the classical formalism. The summation was then performed in a straightforward manner. The resulting equation describes collisions between particles "dressed" by their polarization clouds, exactly as in the classical situation. (auth)