Telerobotic control of a mobile coordinated robotic server, executive summary
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
This interim report continues with the research effort on advanced adaptive controls for space robotics systems. In particular, previous results developed by the principal investigator and his research team centered around fuzzy logic control (FLC) in which the lack of knowledge of the robotic system as well as the uncertainties of the environment are compensated for by a rule base structure which interacts with varying degrees of belief of control action using system measurements. An on-line adaptive algorithm was developed using a single parameter tuning scheme. In the effort presented, the methodology is further developed to include on-line scaling factor tuning and self-learning control as well as extended to the multi-input, multi-output (MIMO) case. Classical fuzzy logic control requires tuning input scale factors off-line through trial and error techniques. This is time-consuming and cannot adapt to new changes in the process. The new adaptive FLC includes a self-tuning scheme for choosing the scaling factors on-line. Further, the rule base in classical FLC is usually produced by soliciting knowledge from human operators as to what is good control action for given circumstances. This usually requires full knowledge and experience of the process and operating conditions, which limits applicability. A self-learning scheme is developed which adaptively forms the rule base with very limited knowledge of the process. Finally, a MIMO method is presented employing optimization techniques. This is required for application to space robotics in which several degrees-of-freedom links are commonly used. Simulation examples are presented for terminal control - typical of robotic problems in which a desired terminal point is to be reached for each link. Future activities will be to implement the MIMO adaptive FLC on an INTEL microcontroller-based circuit and to test the algorithm on a robotic system at the Mars Mission Research Center at North Carolina State University.
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
NASA Astrophysics Data System (ADS)
Manna, Arun K.; Dunietz, Barry D.
2014-09-01
We investigate photoinduced charge transfer (CT) processes within dyads consisting of porphyrin derivatives in which one ring ligates a Zn metal center and where the rings vary by their degree of conjugation. Using a first-principles approach, we show that molecular-scale means can tune CT rates through stabilization affected by the polar environment. Such means of CT tuning are important for achieving high efficiency optoelectronic applications using organic semiconducting materials. Our fully quantum mechanical scheme is necessary for reliably modeling the CT process across different regimes, in contrast to the pervading semi-classical Marcus picture that grossly underestimates transfer in the far-inverted regime.
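For orientation, the semi-classical Marcus picture that the authors argue against in the inverted regime expresses the CT rate in terms of the electronic coupling H_DA, the reorganization energy λ, and the driving force ΔG⁰; the notation below is the standard textbook form, not taken from the paper:

```latex
k_{\mathrm{CT}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{DA}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{\left(\Delta G^{0}+\lambda\right)^{2}}{4\lambda k_{B}T}\right]
```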
Learning and Tuning of Fuzzy Rules
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1997-01-01
In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture, and its extensions in GARIC-Q. We show that hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in the development of a number of future intelligent systems.
Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava
2012-03-01
Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI^λD^μ controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H2-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI^λD^μ controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
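As a rough illustration of the objective described above (a weighted sum of an error index and controller effort), one can score a sampled closed-loop response as follows; the choice of ITAE as the error index and the weights w1, w2 are assumptions for the sketch, not the paper's exact index.

```python
import numpy as np

def tuning_objective(t, error, u, w1=1.0, w2=0.1):
    """Weighted sum of an integral error index (here ITAE) and controller effort.

    t, error, u : uniformly sampled time vector, tracking error and controller output
    w1, w2      : illustrative weights balancing error against effort
    """
    dt = t[1] - t[0]                        # uniform sampling assumed
    itae = np.sum(t * np.abs(error)) * dt   # integral of time-weighted absolute error
    effort = np.sum(u ** 2) * dt            # integral of squared controller output
    return w1 * itae + w2 * effort

# Toy usage with a decaying error signal and a settling control history
t = np.linspace(0.0, 10.0, 1001)
error = np.exp(-t) * np.cos(2.0 * t)
u = 1.0 - np.exp(-t)
print(tuning_objective(t, error, u))
```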
Yang, Xiong; He, Haibo
2018-05-26
In this paper, we develop a novel optimal control strategy for a class of uncertain nonlinear systems with unmatched interconnections. To begin with, we present a stabilizing feedback controller for the interconnected nonlinear systems by modifying an array of optimal control laws of auxiliary subsystems. We also prove that this feedback controller ensures a specified cost function to achieve optimality. Then, under the framework of adaptive critic designs, we use critic networks to solve the Hamilton-Jacobi-Bellman equations associated with auxiliary subsystem optimal control laws. The critic network weights are tuned through the gradient descent method combined with an additional stabilizing term. By using the newly established weight tuning rules, we no longer need the initial admissible control condition. In addition, we demonstrate that all signals in the closed-loop auxiliary subsystems are stable in the sense of uniform ultimate boundedness by using classic Lyapunov techniques. Finally, we provide an interconnected nonlinear plant to validate the present control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
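A minimal sketch of the kind of critic-weight tuning described above, assuming a critic that is linear in its parameters, V(x) ≈ wᵀφ(x), and omitting the paper's additional stabilizing term; the dynamics f, g, the quadratic basis φ and the cost weights Q, R below are illustrative placeholders, not the authors' system.

```python
import numpy as np

def critic_update(w, x, u, f, g, phi_grad, Q, R, lr=0.01):
    """One gradient-descent step on the squared HJB residual for a critic V(x) = w @ phi(x)."""
    xdot = f(x) + g(x) @ u                          # system drift plus control input
    dVdx = phi_grad(x).T @ w                        # gradient of the approximate value function
    residual = dVdx @ xdot + x @ Q @ x + u @ R @ u  # Hamiltonian (HJB) residual
    grad_w = phi_grad(x) @ xdot                     # derivative of the residual w.r.t. w
    return w - lr * residual * grad_w               # descend the squared residual

# Toy usage: 2-state, 1-input linear system with quadratic basis [x1^2, x2^2, x1*x2]
f = lambda x: np.array([x[1], -x[0] - x[1]])
g = lambda x: np.array([[0.0], [1.0]])
phi_grad = lambda x: np.array([[2 * x[0], 0.0],
                               [0.0, 2 * x[1]],
                               [x[1], x[0]]])
Q, R = np.eye(2), np.eye(1)
w = np.zeros(3)
x, u = np.array([1.0, -0.5]), np.array([0.2])
w = critic_update(w, x, u, f, g, phi_grad, Q, R)
print(w)
```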
He, ZeFang; Zhao, Long
2014-01-01
An attitude control strategy based on Ziegler-Nichols rules for tuning the PD (proportional-derivative) parameters of quadrotor helicopters is presented to address the problem that the quadrotor tends to be unstable, a problem caused by the narrow definition domain of the attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller, with its parameters tuned by Ziegler-Nichols rules, acting on the decoupled linear system obtained after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the proposed attitude controller is highly robust and that its control effect is better than that of two other nonlinear controllers. These two controllers share the same nonlinear part as the proposed controller; their linear parts are, respectively, a PID (proportional-integral-derivative) controller tuned by Ziegler-Nichols rules and a PD controller tuned by a GA (genetic algorithm). Moreover, the proposed attitude controller is simple and easy to implement.
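For reference, the classical closed-loop (ultimate-gain) Ziegler-Nichols table behind this kind of PD/PID tuning can be written down directly; Ku and Tu denote the ultimate gain and oscillation period from a sustained-oscillation test, and the quadrotor-specific feedback-linearization step is not reproduced here.

```python
def ziegler_nichols(Ku, Tu, controller="PD"):
    """Classical Ziegler-Nichols closed-loop tuning rules.

    Ku : ultimate (critical) proportional gain
    Tu : period of sustained oscillation at Ku
    Returns (Kp, Ti, Td); Ti=None means no integral action.
    """
    rules = {
        "P":   (0.50 * Ku, None,     0.0),
        "PI":  (0.45 * Ku, Tu / 1.2, 0.0),
        "PD":  (0.80 * Ku, None,     Tu / 8.0),
        "PID": (0.60 * Ku, Tu / 2.0, Tu / 8.0),
    }
    return rules[controller]

print(ziegler_nichols(Ku=4.0, Tu=1.6, controller="PD"))   # -> (3.2, None, 0.2)
```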
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data usually end in a weak firing state, which weakens the fuzzy rules and makes them unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm, based on the gradient descent method, for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN) on training input-output data. The new learning algorithm addresses the weak-firing problem of the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples, with MATLAB R2014(a) used for the simulations. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
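A minimal sketch of gradient-descent tuning of Gaussian membership (RBF) parameters in the spirit of the approach above; the single-output batch form, the learning rate and the toy data are assumptions, not the authors' algorithm.

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """Gaussian RBF network: hidden activations (memberships) and output for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances (n, m)
    H = np.exp(-d2 / (2.0 * widths ** 2))                           # membership / hidden layer
    return H, H @ weights

def train_step(X, y, centers, widths, weights, lr=0.05):
    """One batch gradient step on mean squared error w.r.t. weights, centers and widths."""
    H, y_hat = rbf_forward(X, centers, widths, weights)
    err = y_hat - y                                    # (n,)
    diff = X[:, None, :] - centers[None, :, :]         # (n, m, d)
    gw = err[:, None] * H                              # per-sample, per-unit error signal
    grad_weights = H.T @ err / len(X)
    grad_centers = ((gw * weights)[:, :, None] * diff / widths[None, :, None] ** 2).mean(axis=0)
    grad_widths = ((gw * weights) * (diff ** 2).sum(axis=2) / widths ** 3).mean(axis=0)
    return (centers - lr * grad_centers,
            widths - lr * grad_widths,
            weights - lr * grad_weights)

# Toy usage on a small regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]
centers, widths, weights = rng.uniform(-1, 1, (5, 2)), np.ones(5), np.zeros(5)
for _ in range(500):
    centers, widths, weights = train_step(X, y, centers, widths, weights)
print(np.abs(rbf_forward(X, centers, widths, weights)[1] - y).mean())
```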
Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.
2018-04-01
Nonlocality and contextuality are at the root of conceptual puzzles in quantum mechanics, and they are key resources for quantum advantage in information-processing tasks. Bell nonlocality is best understood as the incompatibility between quantum correlations and the classical theory of causality, applied to relativistic causal structure. Contextuality, on the other hand, is on a more controversial foundation. In this work, I provide a common conceptual ground between nonlocality and contextuality as violations of classical causality. First, I show that Bell inequalities can be derived solely from the assumptions of no signaling and no fine-tuning of the causal model. This removes two extra assumptions from a recent result from Wood and Spekkens and, remarkably, does not require any assumption related to independence of measurement settings—unlike all other derivations of Bell inequalities. I then introduce a formalism to represent contextuality scenarios within causal models and show that all classical causal models for violations of a Kochen-Specker inequality require fine-tuning. Thus, the quantum violation of classical causality goes beyond the case of spacelike-separated systems and already manifests in scenarios involving single systems.
He, ZeFang
2014-01-01
An attitude control strategy based on Ziegler-Nichols rules for tuning the PD (proportional-derivative) parameters of quadrotor helicopters is presented to address the problem that the quadrotor tends to be unstable, a problem caused by the narrow definition domain of the attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller, with its parameters tuned by Ziegler-Nichols rules, acting on the decoupled linear system obtained after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the proposed attitude controller is highly robust and that its control effect is better than that of two other nonlinear controllers. These two controllers share the same nonlinear part as the proposed controller; their linear parts are, respectively, a PID (proportional-integral-derivative) controller tuned by Ziegler-Nichols rules and a PD controller tuned by a GA (genetic algorithm). Moreover, the proposed attitude controller is simple and easy to implement. PMID:25614879
PID tuning rules for SOPDT systems: review and some new results.
Panda, Rames C; Yu, Cheng-Ching; Huang, Hsiao-Ping
2004-04-01
PID controllers are widely used in industries and so many tuning rules have been proposed over the past 50 years that users are often lost in the jungle of tuning formulas. Moreover, unlike PI control, different control laws and structures of implementation further complicate the use of the PID controller. In this work, five different tuning rules are taken for study to control second-order plus dead time systems with wide ranges of damping coefficients and dead time to time constant ratios (D/tau). Four of them are based on IMC design with different types of approximations on dead time and the other on desired closed-loop specifications (i.e., specified forward transfer function). The method of handling dead time in the IMC type of design is important especially for systems with large D/tau ratios. A systematic approach was followed to evaluate the performance of controllers. The regions of applicability of suitable tuning rules are highlighted and recommendations are also given. It turns out that IMC designed with the Maclaurin series expansion type PID is a better choice for both set point and load changes for systems with D/tau greater than 1. For systems with D/tau less than 1, the desired closed-loop specification approach is favored.
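A minimal sketch of the kind of systematic performance evaluation mentioned above: simulate a second-order-plus-dead-time loop under a PID and score it by the integral of absolute error (IAE); the plant parameters, sampling time and gains below are illustrative values, not the paper's test cases or its IMC tuning formulas.

```python
import numpy as np

def simulate_sopdt_pid(K=1.0, tau=5.0, zeta=1.2, D=2.0, Kp=1.0, Ti=6.0, Td=1.0,
                       dt=0.01, t_end=60.0, setpoint=1.0):
    """IAE of a unit set-point response for the SOPDT plant
    K*exp(-D*s)/(tau^2*s^2 + 2*zeta*tau*s + 1) under an ideal PID
    with derivative action on the measurement."""
    n = int(t_end / dt)
    delay = int(D / dt)
    u_hist = np.zeros(n + delay)          # buffer so the plant sees u(t - D)
    y = yd = y_prev = integ = 0.0
    iae = 0.0
    for k in range(n):
        e = setpoint - y
        integ += e * dt
        u = Kp * (e + integ / Ti - Td * (y - y_prev) / dt)
        y_prev = y
        u_hist[k + delay] = u
        ydd = (K * u_hist[k] - 2.0 * zeta * tau * yd - y) / tau ** 2   # plant acceleration
        yd += ydd * dt                                                 # explicit Euler integration
        y += yd * dt
        iae += abs(e) * dt
    return iae

print(simulate_sopdt_pid())   # IAE for one illustrative tuning; vary Kp, Ti, Td to compare rules
```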
Comprehensive decision tree models in bioinformatics.
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model that is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics.
Comprehensive Decision Tree Models in Bioinformatics
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Purpose Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. Methods This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model that is constrained exclusively by the dimensions of the produced decision tree. Results The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics. PMID:22479449
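A rough way to mimic the idea of constraining a tree only by its dimensions, using an off-the-shelf learner rather than the authors' visual-tuning environment; the dataset (a binary medical task standing in for bioinformatics data) and the depth/leaf limits are arbitrary stand-ins for the visual boundaries.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Size-constrained tree: the limits play the role of the "visual boundaries".
small_tree = DecisionTreeClassifier(max_depth=4, max_leaf_nodes=16, random_state=0)
# Default tree: grown with the algorithm's default settings.
default_tree = DecisionTreeClassifier(random_state=0)

for name, clf in [("size-constrained", small_tree), ("default", default_tree)]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:16s} mean 10-fold accuracy = {acc:.3f}")
```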
Implicit Learning of Nonlocal Musical Rules: A Comment on Kuhn and Dienes (2005)
ERIC Educational Resources Information Center
Desmet, Charlotte; Poulin-Charronnat, Benedicte; Lalitte, Philippe; Perruchet, Pierre
2009-01-01
In a recent study, G. Kuhn and Z. Dienes (2005) reported that participants previously exposed to a set of musical tunes generated by a biconditional grammar subsequently preferred new tunes that respected the grammar over new ungrammatical tunes. Because the study and test tunes did not share any chunks of adjacent intervals, this result may be…
Magnetic induction of hyperthermia by a modified self-learning fuzzy temperature controller
NASA Astrophysics Data System (ADS)
Wang, Wei-Cheng; Tai, Cheng-Chi
2017-07-01
The aim of this study involved developing a temperature controller for magnetic induction hyperthermia (MIH). A closed-loop controller was applied to track a reference model to guarantee a desired temperature response. The MIH system generated an alternating magnetic field to heat a high magnetic permeability material. This wireless induction heating had few side effects when it was extensively applied to cancer treatment. The effects of hyperthermia strongly depend on the precise control of temperature. However, during the treatment process, the control performance is degraded due to severe perturbations and parameter variations. In this study, a modified self-learning fuzzy logic controller (SLFLC) with a gain tuning mechanism was implemented to obtain high control performance in a wide range of treatment situations. This implementation was performed by appropriately altering the output scaling factor of a fuzzy inverse model to adjust the control rules. In this study, the proposed SLFLC was compared to the classical self-tuning fuzzy logic controller and fuzzy model reference learning control. Additionally, the proposed SLFLC was verified by conducting in vitro experiments with porcine liver. The experimental results indicated that the proposed controller showed greater robustness and excellent adaptability with respect to the temperature control of the MIH system.
NASA Astrophysics Data System (ADS)
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often leads to a daunting challenge in the design of standard PID controllers, which are not only intended to provide closed-loop stability but also to give good overall performance-robustness. In this paper, we conduct a stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the Routh-Hurwitz necessary and sufficient stability criteria, we establish several stability regions which enclose the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed for obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
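A small sketch of choosing the Nash solution from a Pareto front of (set-point IAE, load-disturbance IAE) pairs as described above; the product-maximisation form and the use of the component-wise worst point on the front as the disagreement point are assumptions made for illustration.

```python
import numpy as np

def nash_solution(pareto_points):
    """Pick the Nash bargaining point from a Pareto front of cost pairs (to be minimized).

    pareto_points : array-like of shape (n, 2) with columns (IAE_setpoint, IAE_load).
    The disagreement point is taken as the component-wise worst cost on the front.
    """
    costs = np.asarray(pareto_points, dtype=float)
    d = costs.max(axis=0)                      # disagreement (worst-case) point
    gains = d - costs                          # improvement of each solution over d
    return costs[np.argmax(np.prod(gains, axis=1))]

front = [(0.8, 3.0), (1.0, 2.2), (1.3, 1.8), (1.9, 1.5), (2.6, 1.4)]
print(nash_solution(front))                    # picks a balanced trade-off on this toy front
```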
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous schemes for cart-pole balancing in terms of the speed of learning and robustness to changes in the dynamic system's parameters.
A composite self tuning strategy for fuzzy control of dynamic systems
NASA Technical Reports Server (NTRS)
Shieh, C.-Y.; Nair, Satish S.
1992-01-01
The feature of self-learning makes fuzzy logic controllers attractive in control applications. This paper proposes a strategy to tune the fuzzy logic controller on-line by tuning the data base as well as the rule base. The structure of the controller is outlined and preliminary results are presented using simulation studies.
Ellipsoidal fuzzy learning for smart car platoons
NASA Astrophysics Data System (ADS)
Dickerson, Julie A.; Kosko, Bart
1993-12-01
A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids or more certain rules. Sparse data gave larger ellipsoids or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two categories, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in the hydraulic positioning system.
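A compact sketch of the PSO side of the approach above: a swarm searches over (Kp, Ki, Kd) with an explicit particle velocity limit and an inertia weight, shown here as stand-ins for the two PSO parameters the DOE is applied to; the toy objective and all numerical settings are placeholders rather than the hydraulic-system model.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(gains):
    """Placeholder cost for a PID gain vector; a real study would simulate the plant."""
    target = np.array([2.0, 0.5, 0.1])          # pretend these gains are ideal
    return float(((gains - target) ** 2).sum())

def pso_pid(n_particles=20, iters=100, v_max=0.5, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.zeros(3), np.array([5.0, 2.0, 1.0])       # search box for (Kp, Ki, Kd)
    x = rng.uniform(lo, hi, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)                     # particle velocity limit
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

print(pso_pid())    # should approach the placeholder target gains
```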
Quantum Tic-Tac-Toe as Metaphor for Quantum Physics
NASA Astrophysics Data System (ADS)
Goff, Allan; Lehmann, Dale; Siegel, Joel
2004-02-01
Quantum Tic-Tac-Toe is presented as an abstract quantum system derived from the rules of Classical Tic-Tac-Toe. Abstract quantum systems can be constructed from classical systems by the addition of three types of rules; rules of Superposition, rules of Entanglement, and rules of Collapse. This is formally done for Quantum Tic-Tac-Toe. As a part of this construction it is shown that abstract quantum systems can be viewed as an ensemble of classical systems. That is, the state of a quantum game implies a set of simultaneous classical games. The number and evolution of the ensemble of classical games is driven by the superposition, entanglement, and collapse rules. Various aspects and play situations provide excellent metaphors for standard features of quantum mechanics. Several of the more significant metaphors are discussed, including a measurement mechanism, the correspondence principle, Everett's Many Worlds Hypothesis, an ascertainity principle, and spooky action at a distance. Abstract quantum systems also show the consistency of backwards-in-time causality, and the influence on the present of both pasts and futures that never happened. The strongest logical argument against faster-than-light (FTL) phenomena is that since FTL implies backwards-in-time causality, temporal paradox is an unavoidable consequence of FTL; hence FTL is impossible. Since abstract quantum systems support backwards-in-time causality but avoid temporal paradox through pruning of the classical ensemble, it may be that quantum based FTL schemes are possible allowing backwards-in-time causality, but prohibiting temporal paradox.
Dynamic cluster generation for a fuzzy classifier with ellipsoidal regions.
Abe, S
1998-01-01
In this paper, we discuss a fuzzy classifier with ellipsoidal regions that dynamically generates clusters. First, for the data belonging to a class we define a fuzzy rule with an ellipsoidal region. Namely, using the training data for each class, we calculate the center and the covariance matrix of the ellipsoidal region for the class. Then we tune the fuzzy rules, i.e., the slopes of the membership functions, successively until there is no improvement in the recognition rate of the training data. Then if the number of the data belonging to a class that are misclassified into another class exceeds a prescribed number, we define a new cluster to which those data belong and the associated fuzzy rule. Then we tune the newly defined fuzzy rules in the similar way as stated above, fixing the already obtained fuzzy rules. We iterate generation of clusters and tuning of the newly generated fuzzy rules until the number of the data belonging to a class that are misclassified into another class does not exceed the prescribed number. We evaluate our method using thyroid data, Japanese Hiragana data of vehicle license plates, and blood cell data. By dynamic cluster generation, the generalization ability of the classifier is improved and the recognition rate of the fuzzy classifier for the test data is the best among the neural network classifiers and other fuzzy classifiers if there are no discrete input variables.
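A minimal sketch of the ellipsoidal-region idea: each class (or cluster) is summarised by the centre and covariance of its training data, and a tunable slope maps the squared Mahalanobis distance to a membership degree; the exponential membership form and the absence of the dynamic cluster-generation loop are simplifications, not the paper's exact formulation.

```python
import numpy as np

class EllipsoidalFuzzyClassifier:
    def __init__(self, slope=1.0):
        self.slope = slope            # analogous to the tuned slope of the membership functions

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centers_, self.inv_covs_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.centers_[c] = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])   # regularised covariance
            self.inv_covs_[c] = np.linalg.inv(cov)
        return self

    def memberships(self, X):
        """Membership of each sample in each class, based on Mahalanobis distance."""
        out = np.empty((len(X), len(self.classes_)))
        for j, c in enumerate(self.classes_):
            d = X - self.centers_[c]
            d2 = np.einsum("ni,ij,nj->n", d, self.inv_covs_[c], d)       # squared Mahalanobis distance
            out[:, j] = np.exp(-self.slope * d2)
        return out

    def predict(self, X):
        return self.classes_[self.memberships(X).argmax(axis=1)]

# Toy usage on two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
clf = EllipsoidalFuzzyClassifier(slope=0.5).fit(X, y)
print(clf.predict(np.array([[0.2, -0.1], [2.8, 3.1]])))   # -> [0 1]
```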
Analysis and Synthesis of Adaptive Neural Elements and Assemblies
1992-12-14
…network, a learning rule (activity-dependent neuromodulation), which has been proposed as a cellular mechanism for classical conditioning, was demonstrated to support many…
Superconductor to weak-insulator transitions in disordered tantalum nitride films
NASA Astrophysics Data System (ADS)
Breznay, Nicholas P.; Tendulkar, Mihir; Zhang, Li; Lee, Sang-Chul; Kapitulnik, Aharon
2017-10-01
We study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents zν ≈ 1.3, with a corresponding critical field Hc ≪ Hc2, the upper critical field. The Hall effect exhibits a crossing point near Hc, but with a nonuniversal critical value ρ_xy^c comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
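For reference, the scaling analysis referred to above is conventionally performed with the standard one-parameter scaling form for the sheet resistance near the field-tuned critical point (generic notation, not copied from the paper):

```latex
R(H, T) \;=\; R_{c}\, F\!\left(\frac{H - H_{c}}{T^{1/(z\nu)}}\right),
\qquad z\nu \approx 1.3
```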
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcements, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Conserved Sequence Preferences Contribute to Substrate Recognition by the Proteasome*
Yu, Houqing; Singh Gautam, Amit K.; Wilmington, Shameika R.; Wylie, Dennis; Martinez-Fonts, Kirby; Kago, Grace; Warburton, Marie; Chavali, Sreenivas; Inobe, Tomonao; Finkelstein, Ilya J.; Babu, M. Madan
2016-01-01
The proteasome has pronounced preferences for the amino acid sequence of its substrates at the site where it initiates degradation. Here, we report that modulating these sequences can tune the steady-state abundance of proteins over 2 orders of magnitude in cells. This is the same dynamic range as seen for inducing ubiquitination through a classic N-end rule degron. The stability and abundance of His3 constructs dictated by the initiation site affect survival of yeast cells and show that variation in proteasomal initiation can affect fitness. The proteasome's sequence preferences are linked directly to the affinity of the initiation sites to their receptor on the proteasome and are conserved between Saccharomyces cerevisiae, Schizosaccharomyces pombe, and human cells. These findings establish that the sequence composition of unstructured initiation sites influences protein abundance in vivo in an evolutionarily conserved manner and can affect phenotype and fitness. PMID:27226608
Superconductor to weak-insulator transitions in disordered tantalum nitride films
Breznay, Nicholas P.; Tendulkar, Mihir; Zhang, Li; ...
2017-10-31
Here, we study the two-dimensional superconductor-insulator transition (SIT) in thin films of tantalum nitride. At zero magnetic field, films can be disorder-tuned across the SIT by adjusting thickness and film stoichiometry; insulating films exhibit classical hopping transport. Superconducting films exhibit a magnetic-field-tuned SIT, whose insulating ground state at high field appears to be a quantum-corrected metal. Scaling behavior at the field-tuned SIT shows classical percolation critical exponents zν ≈ 1.3, with a corresponding critical field Hc << Hc2, the upper critical field. The Hall effect exhibits a crossing point near Hc, but with a nonuniversal critical value ρ_xy^c comparable to the normal-state Hall resistivity. We propose that high-carrier-density metals will always exhibit this pattern of behavior at the boundary between superconducting and (trivially) insulating ground states.
Optimum tuned mass damper design using harmony search with comparison of classical methods
NASA Astrophysics Data System (ADS)
Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris
2017-07-01
As is well known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping; the main aim is to reduce the maximum amplitude at resonance. In this study, a metaheuristic algorithm called harmony search is employed for the optimum design of TMDs. As the optimization objective, the transfer function of the acceleration of the system with respect to the ground acceleration is minimized. Numerical trials were conducted for four single-degree-of-freedom systems and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.
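A bare-bones harmony search loop of the kind employed above, written against a generic objective; the harmony-memory size, HMCR, PAR and bandwidth values are common defaults rather than the study's settings, and a placeholder objective stands in for the peak of the acceleration transfer function.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(objective, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimise `objective` over the box [lo, hi] with classic harmony search."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    hm = rng.uniform(lo, hi, size=(hms, lo.size))                # harmony memory
    cost = np.array([objective(h) for h in hm])
    for _ in range(iters):
        new = np.empty(lo.size)
        for j in range(lo.size):
            if rng.random() < hmcr:                              # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                           # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                                # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        f = objective(new)
        worst = cost.argmax()
        if f < cost[worst]:                                      # replace the worst harmony
            hm[worst], cost[worst] = new, f
    return hm[cost.argmin()], cost.min()

# Placeholder objective standing in for the peak of the TMD acceleration transfer function
best, val = harmony_search(lambda x: (x[0] - 0.05) ** 2 + (x[1] - 0.1) ** 2,
                           lo=[0.0, 0.0], hi=[1.0, 1.0])
print(best, val)
```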
Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso
2017-01-01
SUMMARY The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
NASA Astrophysics Data System (ADS)
Giaralis, Agathoklis; Marian, Laurentiu
2016-04-01
This paper explores the practical benefits of the tuned mass-damper-inerter (TMDI), recently proposed by the authors, vis-à-vis the classical tuned mass-damper (TMD) for the passive vibration control of seismically excited building structures assumed to respond linearly. Special attention is focused on showcasing that the TMDI requires considerably reduced attached mass/weight to achieve the same vibration suppression level as the classical TMD by exploiting the mass amplification effect of the ideal inerter device. The latter allows for increasing the inertial property of the TMDI without a significant increase to its physical weight. To this end, novel numerical results pertaining to a seismically excited 3-storey frame building equipped with optimally designed TMDIs for various values of attached mass and inertance (i.e., constant of proportionality of the inerter resisting force in mass units) are furnished. The seismic action is modelled by a non-stationary stochastic process compatible with the elastic acceleration response spectrum of the European seismic code (Eurocode 8), while the TMDIs are tuned to minimize the mean square top floor displacement. It is shown that the TMDI achieves the same level of performance as an unconventional "large mass" TMD for seismic protection (i.e., more than 10% of attached mass of the total building mass), by incorporating attached masses similar to the ones used for controlling wind-induced vibrations via TMDs (i.e., 1%-5% of the total building mass). Moreover, numerical data from response history analyses for a suite of Eurocode 8 compatible recorded ground motions further demonstrate that optimally tuned TMDIs for top floor displacement minimization achieve considerable reductions in terms of top floor acceleration and attached mass displacement (stroke) compared to the classical TMD with the same attached mass.
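The "mass amplification effect" invoked above rests on the defining property of the ideal inerter: it develops a force proportional to the relative acceleration across its two terminals, with the inertance b carrying units of mass while adding little physical weight (generic textbook definition, not the paper's notation):

```latex
F \;=\; b\,\bigl(\ddot{x}_{1} - \ddot{x}_{2}\bigr)
```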
Orientation tuning of contrast masking caused by motion streaks.
Apthorp, Deborah; Cass, John; Alais, David
2010-08-01
We investigated whether the oriented trails of blur left by fast-moving dots (i.e., "motion streaks") effectively mask grating targets. Using a classic overlay masking paradigm, we varied mask contrast and target orientation to reveal underlying tuning. Fast-moving Gaussian blob arrays elevated thresholds for detection of static gratings, both monoptically and dichoptically. Monoptic masking at high mask (i.e., streak) contrasts is tuned for orientation and exhibits a similar bandwidth to masking functions obtained with grating stimuli (∼30 degrees). Dichoptic masking fails to show reliable orientation-tuned masking, but dichoptic masks at very low contrast produce a narrowly tuned facilitation (∼17 degrees). For iso-oriented streak masks and grating targets, we also explored masking as a function of mask contrast. Interestingly, dichoptic masking shows a classic "dipper"-like TVC function, whereas monoptic masking shows no dip and a steeper "handle". There is a very strong unoriented component to the masking, which we attribute to transiently biased temporal frequency masking. Fourier analysis of "motion streak" images shows interesting differences between dichoptic and monoptic functions and the information in the stimulus. Our data add weight to the growing body of evidence that the oriented blur of motion streaks contributes to the processing of fast motion signals.
Lee, Gil-Ho; Jeong, Dongchan; Park, Kee-Su; Meir, Yigal; Cha, Min-Chul; Lee, Hu-Jong
2015-01-01
The influence of static disorder on a quantum phase transition (QPT) is a fundamental issue in condensed matter physics. As a prototypical example of a disorder-tuned QPT, the superconductor–insulator transition (SIT) has been investigated intensively over the past three decades, but as yet without a general consensus on its nature. A key element is good control of disorder. Here, we present an experimental study of the SIT based on precise in-situ tuning of disorder in dual-gated bilayer graphene proximity-coupled to two superconducting electrodes through electrical and reversible control of the band gap and the charge carrier density. In the presence of a static disorder potential, Andreev-paired carriers formed close to the Fermi level in bilayer graphene constitute a randomly distributed network of proximity-induced superconducting puddles. The landscape of the network was easily tuned by electrical gating to induce percolative clusters at the onset of superconductivity. This is evidenced by scaling behavior consistent with the classical percolation in transport measurements. At lower temperatures, the solely electrical tuning of the disorder-induced landscape enables us to observe, for the first time, a crossover from classical to quantum percolation in a single device, which elucidates how thermal dephasing engages in separating the two regimes. PMID:26310774
Lee, Gil-Ho; Jeong, Dongchan; Park, Kee-Su; Meir, Yigal; Cha, Min-Chul; Lee, Hu-Jong
2015-08-27
The influence of static disorder on a quantum phase transition (QPT) is a fundamental issue in condensed matter physics. As a prototypical example of a disorder-tuned QPT, the superconductor-insulator transition (SIT) has been investigated intensively over the past three decades, but as yet without a general consensus on its nature. A key element is good control of disorder. Here, we present an experimental study of the SIT based on precise in-situ tuning of disorder in dual-gated bilayer graphene proximity-coupled to two superconducting electrodes through electrical and reversible control of the band gap and the charge carrier density. In the presence of a static disorder potential, Andreev-paired carriers formed close to the Fermi level in bilayer graphene constitute a randomly distributed network of proximity-induced superconducting puddles. The landscape of the network was easily tuned by electrical gating to induce percolative clusters at the onset of superconductivity. This is evidenced by scaling behavior consistent with the classical percolation in transport measurements. At lower temperatures, the solely electrical tuning of the disorder-induced landscape enables us to observe, for the first time, a crossover from classical to quantum percolation in a single device, which elucidates how thermal dephasing engages in separating the two regimes.
Resolving Task Rule Incongruence during Task Switching by Competitor Rule Suppression
ERIC Educational Resources Information Center
Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard
2010-01-01
Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an…
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Design of an iterative auto-tuning algorithm for a fuzzy PID controller
NASA Astrophysics Data System (ADS)
Saeed, Bakhtiar I.; Mehrdadi, B.
2012-05-01
Since the first application of fuzzy logic in the field of control engineering, it has been extensively employed in controlling a wide range of applications. The human knowledge on controlling complex and non-linear processes can be incorporated into a controller in the form of linguistic terms. However, with the lack of analytical design study it is becoming more difficult to auto-tune controller parameters. Fuzzy logic controller has several parameters that can be adjusted, such as: membership functions, rule-base and scaling gains. Furthermore, it is not always easy to find the relation between the type of membership functions or rule-base and the controller performance. This study proposes a new systematic auto-tuning algorithm to fine tune fuzzy logic controller gains. A fuzzy PID controller is proposed and applied to several second order systems. The relationship between the closed-loop response and the controller parameters is analysed to devise an auto-tuning method. The results show that the proposed method is highly effective and produces zero overshoot with enhanced transient response. In addition, the robustness of the controller is investigated in the case of parameter changes and the results show a satisfactory performance.
Application of genetic algorithms to tuning fuzzy control systems
NASA Technical Reports Server (NTRS)
Espy, Todd; Vombrack, Endre; Aldridge, Jack
1993-01-01
Real number genetic algorithms (GA) were applied for tuning fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller that controls a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions both when the algorithm started with randomly generated functions and with the best manually-tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms on fine tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time consuming aspect of the controller design. One single change in the membership functions could significantly alter the performance of the controller. This membership function definition can be accomplished by using a trial and error technique to alter the membership functions creating a highly tuned controller. This approach can be time consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity vector approach and station-keep maneuvers was developed. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso
2017-01-09
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Simple Derivation of Chemically Important Classical Observables and Superselection Rules.
ERIC Educational Resources Information Center
Muller-Herold, U.
1985-01-01
Explores the question "Why are so many stationary states allowed by traditional quantum mechanics not realized in nature?" through discussion of classical observables and superselection rules. Three examples are given that can be used in introductory courses (including the fermion/boson property and the mass of a "nonrelativistic" particle). (JN)
A comparative study of different methods for calculating electronic transition rates
NASA Astrophysics Data System (ADS)
Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan
2018-03-01
We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
Classical Music as Popular Music: Adolescents' Recognition of Western Art Music
ERIC Educational Resources Information Center
VanWeelden, Kimberly
2012-01-01
The purpose of this study was to determine which "popular" classical repertoire is familiar and predictable to adolescents. Specifically, the study sought to examine (1) if students had heard the music before, (2) where they had heard the music before, and (3) if they could "name that tune". Participants (N = 668) for this…
Tunable Superconducting Split Ring Resonators
2012-09-19
microwave field-strength distortion and quality-factor dependence on tuning. Feedback for changes in design and fabrication, (4) design and fabrication ... elements. For many applications, tuning of the resonance frequency of the SRR is needed. Classically this is done by varactor diodes. Their capacitance ... capacitance of the gap to form a resonator circuit. The advantage of such a circuit is its quite low resonance frequency compared to other structures
Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.
Benaliouche, Houda; Touahria, Mohamed
2014-01-01
This research investigates the comparative performance of three different approaches to multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score combination approach is applied after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining the matching scores at the decision level is the best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
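As a rough illustration of the score-level step described above, the Python sketch below applies min-max normalization and a weighted sum rule to hypothetical iris and fingerprint match scores; the score ranges, weights, and decision threshold are illustrative assumptions, not values from the study.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] using the min-max rule."""
    return (score - lo) / (hi - lo)

def weighted_sum_fusion(iris_score, finger_score, w_iris=0.6, w_finger=0.4):
    """Fuse two normalized scores; the weights should sum to 1."""
    return w_iris * iris_score + w_finger * finger_score

# Hypothetical raw scores and matcher score ranges.
iris_n = min_max_normalize(72.0, lo=0.0, hi=100.0)
finger_n = min_max_normalize(410.0, lo=0.0, hi=500.0)
fused = weighted_sum_fusion(iris_n, finger_n)
decision = "accept" if fused >= 0.5 else "reject"   # illustrative threshold
print(f"fused score = {fused:.3f} -> {decision}")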
Lokriti, Abdesslam; Salhi, Issam; Doubabi, Said; Zidani, Youssef
2013-05-01
An IP self-tuning controller tuned by a fuzzy adjustor is proposed to improve induction machine speed control. The interest of such a controller is the possibility of adjusting only one gain, instead of the two gains required by the PI self-tuning controllers commonly used in the literature. This paper presents simulation and experimental results, the latter obtained by practical implementation on a DSPace 1104 board of three different speed controllers (the classical IP, the fuzzy-like PI and the IP self-tuning) for a 1.5 kW induction machine. The paper presents the different tests used to compare the performance of the proposed controller with the two others in terms of computation time, tracking performance and disturbance rejection. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
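A minimal sketch of the Bayesian calculation described above, assuming a Beta prior and a binomial likelihood: the "confidence of compliance" is taken here as the posterior probability that the true exceedance proportion does not exceed the allowed percentile. The sample numbers are hypothetical and the code is an illustration, not the authors' implementation.

from scipy.stats import beta

def confidence_of_compliance(n, x, p_allowed, a_prior, b_prior):
    """Posterior P(true exceedance rate <= p_allowed | x exceedances in n samples)."""
    a_post = a_prior + x          # Beta-binomial conjugate update
    b_post = b_prior + n - x
    return beta.cdf(p_allowed, a_post, b_post)

n, x = 20, 1              # hypothetical batch: 1 exceedance in 20 samples
p_allowed = 0.05          # 95th-percentile standard: at most 5% exceedances
print("Jeffreys prior:", round(confidence_of_compliance(n, x, p_allowed, 0.5, 0.5), 3))
print("uniform prior :", round(confidence_of_compliance(n, x, p_allowed, 1.0, 1.0), 3))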
A Love Supreme--Riffing on the Standards: Placing Ideas at the Center of High Stakes Schooling
ERIC Educational Resources Information Center
Kohl, Herbert
2006-01-01
The Fake Book is a square, spiral-bound, Xeroxed book, about 7" by 7", maybe 250 pages long. It's all music--the notes, usually in C or B minor, of hundreds of standard tunes, jazz, pop, and every once in a while, classical. The Fake Book and all of its variants provide an evolving canon of tunes that defines a set of common standards for…
Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna
2015-01-01
When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.
HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems.
Kim, J; Kasabov, N
1999-11-01
This paper proposes an adaptive neuro-fuzzy system, HyFIS (Hybrid neural Fuzzy Inference System), for building and optimising fuzzy models. The proposed model introduces the learning power of neural networks to fuzzy logic systems and provides linguistic meaning to the connectionist architectures. Heuristic fuzzy logic rules and input-output fuzzy membership functions can be optimally tuned from training examples by a hybrid learning scheme comprising two phases: a rule generation phase from data, and a rule tuning phase that uses an error backpropagation learning scheme for the neural fuzzy system. To illustrate the performance and applicability of the proposed neuro-fuzzy hybrid model, extensive simulation studies of nonlinear complex dynamic systems are carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction and control of nonlinear dynamical systems. Two benchmark case studies are used to demonstrate that the proposed HyFIS system is a superior neuro-fuzzy modelling technique.
Gain and frequency tuning within the mouse cochlear apex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oghalai, John S.; Raphael, Patrick D.; Gao, Simon
Normal mammalian hearing requires cochlear outer hair cell active processes that amplify the traveling wave with high gain and sharp tuning, termed cochlear amplification. We have used optical coherence tomography to study cochlear amplification within the apical turn of the mouse cochlea. We measured not only classical basilar membrane vibratory tuning curves but also vibratory responses from the rest of the tissues that compose the organ of Corti. Basilar membrane tuning was sharp in live mice and broad in dead mice, whereas other regions of the organ of Corti demonstrated phase shifts consistent with additional filtering beyond that provided by basilar membrane mechanics. We use these experimental data to support a conceptual framework of how cochlear amplification is tuned within the mouse cochlear apex. We will also study transgenic mice with targeted mutations that affect different biomechanical aspects of the organ of Corti in an effort to localize the underlying processes that produce this additional filtering.
Generalized quantum theory of recollapsing homogeneous cosmologies
NASA Astrophysics Data System (ADS)
Craig, David; Hartle, James B.
2004-06-01
A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic “J·dΣ” rule of quantum cosmology, as well as a generalization of this rule to generic initial states.
Photosynthetic Energy Transfer at the Quantum/Classical Border.
Keren, Nir; Paltiel, Yossi
2018-06-01
Quantum mechanics diverges from the classical description of our world when very small scales or very fast processes are involved. Unlike classical mechanics, quantum effects cannot be easily related to our everyday experience and are often counterintuitive to us. Nevertheless, the dimensions and time scales of the photosynthetic energy transfer processes put them close to the quantum/classical border, bringing them into the range of measurable quantum effects. Here we review recent advances in the field and suggest that photosynthetic processes can take advantage of the sensitivity of quantum effects to the environmental 'noise' as a means of tuning exciton energy transfer efficiency. If true, this design principle could be a base for 'nontrivial' coherent wave property nano-devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-Jie, E-mail: yingjiezhang@qfnu.edu.cn; Han, Wei; Xia, Yun-Jie, E-mail: yjxia@qfnu.edu.cn
We propose a scheme for controlling the entanglement dynamics of a quantum system by applying an external classical driving field to two atoms separately located in a single-mode photon cavity. It is shown that, with a judicious choice of the classical-driving strength and the atom–photon detuning, the effective atom–photon interaction Hamiltonian can be switched from the Jaynes–Cummings model to the anti-Jaynes–Cummings model. By tuning the controllable atom–photon interaction induced by the classical field, we illustrate that the evolution trajectory of the Bell-like entangled states can be manipulated from entanglement-sudden-death to no-entanglement-sudden-death, and from no-entanglement-invariant to entanglement-invariant. Furthermore, the robustness of the initial Bell-like entanglement can be improved by the classical driving field in leaky cavities. This classical-driving-assisted architecture can easily be extended to multi-atom quantum systems for scalability.
Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model
NASA Astrophysics Data System (ADS)
Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu
2017-04-01
In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
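A minimal sketch of the rule-out step that iterative refocussing (history matching) typically relies on, under common assumptions: a parameter setting is ruled out when an implausibility measure, the observation-emulator mismatch scaled by all variance sources, exceeds a cutoff (3 is a conventional choice). The emulator mean, variances, and data values below are placeholders, not quantities from the NEMO study.

import numpy as np

def implausibility(z_obs, em_mean, em_var, obs_var, disc_var):
    """|observation - emulator mean| scaled by emulator, observation and discrepancy variances."""
    return np.abs(z_obs - em_mean) / np.sqrt(em_var + obs_var + disc_var)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(1000, 3))   # hypothetical parameter settings
em_mean = candidates.sum(axis=1)                     # stand-in for an emulator's mean prediction
em_var = np.full(len(candidates), 0.05)              # stand-in emulator variance

I = implausibility(z_obs=1.4, em_mean=em_mean, em_var=em_var, obs_var=0.02, disc_var=0.03)
not_ruled_out = candidates[I < 3.0]                  # conventional cutoff of 3
print(f"{len(not_ruled_out)} of {len(candidates)} candidates survive this wave")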
On the Possibility of Using Nonlinear Elements for Landau Damping in High-Intensity Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexahin, Y.; Gianfelice-Wendt, E.; Lebedev, V.
2016-09-30
The direct space-charge force shifts incoherent tunes downwards from the coherent ones, breaking the Landau mechanism of coherent-oscillation damping at high beam intensity. To restore it, nonlinear elements can be employed that move the tunes of large-amplitude particles back. In the present report we consider the possibility of creating a "nonlinear integrable optics" insertion in the Fermilab Recycler to host either octupoles or a hollow electron lens for this purpose. For comparison we also consider the classic scheme with distributed octupole families. It is shown that for the Proton Improvement Plan II (PIP II) parameters the required nonlinear tune shift can be created without destroying the dynamic aperture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, S.P.; Woodard, R.P., E-mail: spmiao5@mail.ncku.edu.tw, E-mail: woodard@phys.ufl.edu
2015-09-01
We argue that the fine-tuning problems of scalar-driven inflation may be worse than is commonly believed. The reason is that reheating requires the inflaton to be coupled to other matter fields whose vacuum fluctuations alter the inflaton potential. The usual response has been that even more fine-tuning of the classical potential V(φ) can repair any damage done in this way. We point out that the effective potential in de Sitter background actually depends in a complicated way upon the dimensionless combination φ/H. We also show that the factors of H which occur in de Sitter do not even correspond to local functionals of the metric for general geometries, nor are they Planck-suppressed.
An Application of Fuzzy Logic Control to a Classical Military Tracking Problem
1994-05-19
A Trident Scholar Project Report, No. 222, United States Naval Academy (report documentation page; available via DTIC). Cited references include Berenji, Hamid R. and Pratap Khedkar, "Learning and Tuning Fuzzy Logic Controllers Through...", and an article in Fuzzy Sets and Systems, vol. 4, 1980, pp. 13-30.
Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution
NASA Astrophysics Data System (ADS)
Holmes, Caroline M.; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya
2017-10-01
We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.
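The intuition behind the original fluctuation test, as a hedged Python sketch: if resistance mutations arise at random during growth (the Darwinian mechanism), rare early "jackpot" mutations produce a fat-tailed distribution of resistant colonies across parallel cultures, whereas mutations induced only at plating (the Lamarckian mechanism) give near-Poisson counts. The mutation rates and culture sizes are illustrative, and this is not the Bayesian model-selection analysis performed in the paper.

import numpy as np

rng = np.random.default_rng(1)

def darwinian_culture(generations=20, mu=1e-7):
    """Mutants at plating when mutations occur at random during growth."""
    mutants = 0
    for g in range(generations):
        dividing_cells = 2 ** g
        new_mutations = rng.binomial(dividing_cells, mu)
        mutants += new_mutations * 2 ** (generations - g - 1)   # clonal expansion of early mutants
    return mutants

def lamarckian_culture(generations=20, mu=1e-6):
    """Mutations induced only by exposure to the phage at plating time."""
    return rng.binomial(2 ** generations, mu)   # mu chosen so mean counts roughly match

darwin = np.array([darwinian_culture() for _ in range(200)])
lamarck = np.array([lamarckian_culture() for _ in range(200)])
print("Darwinian : mean %.2f, variance %.2f" % (darwin.mean(), darwin.var()))
print("Lamarckian: mean %.2f, variance %.2f" % (lamarck.mean(), lamarck.var()))
# The Darwinian variance greatly exceeds the mean; the Lamarckian counts stay close to Poisson.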
Optical tuning of electronic valleys (Conference Presentation)
NASA Astrophysics Data System (ADS)
Sie, Edbert J.; Gedik, Nuh
2017-02-01
Monolayer transition-metal dichalcogenides such as MoS2 and WS2 are prime examples of atomically thin semiconducting crystals that exhibit remarkable electronic and optical properties. They have a pair of valleys that can serve as a new electronic degree of freedom, and these valleys obey optical selection rules with circularly polarized light. Here, we discuss how ultrafast laser pulses can be used to tune their energy levels in a controllable valley-selective manner. The energy tunability is extremely large, comparable to what would be obtained using a hundred Tesla of magnetic field. We will also show that such valley tunability can be performed while we effectively manipulate the valley selection rules. Finally, we will explore the prospect of using this technique through photoemission spectroscopy to create a new phase of matter called a valley Floquet topological insulator.
Implementing a Commercial Rule Base as a Medication Order Safety Net
Reichley, Richard M.; Seaton, Terry L.; Resetar, Ervina; Micek, Scott T.; Scott, Karen L.; Fraser, Victoria J.; Dunagan, W. Claiborne; Bailey, Thomas C.
2005-01-01
A commercial rule base (Cerner Multum) was used to identify medication orders exceeding recommended dosage limits at five hospitals within BJC HealthCare, an integrated health care system. During initial testing, clinical pharmacists determined that there was an excessive number of nuisance and clinically insignificant alerts, with an overall alert rate of 9.2%. A method for customizing the commercial rule base was implemented to increase rule specificity for problematic rules. The system was subsequently deployed at two facilities and achieved alert rates of less than 1%. Pharmacists screened these alerts and contacted ordering physicians in 21% of cases. Physicians made therapeutic changes in response to 38% of alerts presented to them. By applying simple techniques to customize rules, commercial rule bases can be used to rapidly deploy a safety net to screen drug orders for excessive dosages, while preserving the rule architecture for later implementations of more finely tuned clinical decision support. PMID:15802481
Resolving task rule incongruence during task switching by competitor rule suppression.
Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard
2010-07-01
Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an experiment involving 2 object classification tasks and 2 location classification tasks, the authors show that irrelevant task rules that generate response conflicts are inhibited. This competitor rule suppression (CRS) is seen in response slowing in subsequent trials, when the competing rules become relevant. CRS is shown to operate on specific rules without affecting similar rules. CRS and backward inhibition, which is another inhibitory phenomenon, produced additive effects on reaction time, suggesting their mutual independence. Implications for current formal theories of task switching as well as for conflict monitoring theories are discussed. (c) 2010 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Walker, Theodore, Jr.
2011-10-01
Anthropic reasoning about observation selection effects upon the appearance of cosmic providential fine-tuning (fine-tuning that provides for life) is often motivated by a desire to avoid theological implications (implications favoring the idea of a divine cosmic provider) without appealing to sheer lucky-for-us-cosmic-jackpot happenstance and coincidence. Cosmic coincidence can be rendered less incredible by appealing to a multiverse context. Cosmic providence can be rendered non-theological by appealing to an agent-less providential purpose, or by appealing to less-than-omnipresent/local providers, such as alien intelligences creating life-providing baby universes. Instead of choosing either cosmic coincidence or cosmic providence, as though they were mutually exclusive, it is better to accept both. Neoclassical thought accepts coincidence and providence, plus many local providers and one omnipresent provider. Moreover, fundamental observation selection theory should distinguish the many local observers of some events from the one omnipresent observer of all events. Accepting both coincidence and providence avoids classical theology (providence without coincidence) and classical atheism (coincidence without providence), but not neoclassical theology (providence with coincidence). Cosmology cannot avoid the idea of an all-inclusive omnipresent providential dice-throwing living-creative whole of reality, an idea essential to neoclassical theology, and to neoclassical cosmology.
NASA Astrophysics Data System (ADS)
Jia, S.; Bud'Ko, S. L.; Samolyuk, G. D.; Canfield, P. C.
2007-05-01
One of the historic goals of alchemy was to turn base elements into precious ones. Although the practice of alchemy has been superseded by chemistry and solid-state physics, the desire to dramatically change or tune the properties of a compound, preferably through small changes in stoichiometry or composition, remains. This desire becomes even more compelling for compounds that can be tuned to extremes in behaviour. Here, we report that the RT2Zn20 (R=rare earth and T=transition metal) family of compounds manifests exactly this type of versatility, even though they are more than 85% Zn. By tuning T, we find that YFe2Zn20 is closer to ferromagnetism than elemental Pd, the classic example of a nearly ferromagnetic Fermi liquid. By submerging Gd in this highly polarizable Fermi liquid, we tune the system to a remarkably high-temperature ferromagnetic (TC=86K) state for a compound with less than 5% Gd. Although this is not quite turning lead into gold, it is essentially tuning Zn to become a variety of model compounds.
Resonance strategies used in Bulgarian women's singing style: a pilot study.
Henrich, Nathalie; Kiek, Mara; Smith, John; Wolfe, Joe
2007-01-01
Are the characteristic timbre and loudness of Bulgarian women's singing related to tuning of resonances of the vocal tract? We studied an Australian female singer, who practises and teaches Bulgarian singing technique. Two different vocal qualities of this style were studied. The louder teshka is characterized by a sonorous voice production. The less loud leka has a smoother timbre that is closer to that of the head voice register. Six vowels in each of teshka, leka and the subject's 'normal' (i.e. Western rather than Bulgarian) style were studied. The acoustic resonances of the singer's vocal tract were measured directly during singing by injecting a synthesized, broad-band acoustic current. This singer does not use resonance tuning consistently in her classical Western style. However, in both teshka and leka, she tunes the first tract resonance close to the second harmonic of the voice for most vowels. This tuning boosts the power output in the radiation field for that harmonic. This tuning also contributes to the very strong second harmonic which is a characteristic of the timbre identified as the Bulgarian style.
Rana, Malay Kumar; Chandra, Amalendu
2013-05-28
The behavior of water near a graphene sheet is investigated by means of ab initio and classical molecular dynamics simulations. The wetting of the graphene sheet by ab initio water and the relation of such behavior to the strength of classical dispersion interaction between surface atoms and water are explored. The first principles simulations reveal a layered solvation structure around the graphene sheet with a significant water density in the interfacial region implying no drying or cavitation effect. It is found that the ab initio results of water density at interfaces can be reproduced reasonably well by classical simulations with a tuned dispersion potential between the surface and water molecules. Calculations of vibrational power spectrum from ab initio simulations reveal a shift of the intramolecular stretch modes to higher frequencies for interfacial water molecules when compared with those of the second solvation layer or bulk-like water due to the presence of free OH modes near the graphene sheet. Also, a weakening of the water-water hydrogen bonds in the vicinity of the graphene surface is found in our ab initio simulations as reflected in the shift of intermolecular vibrational modes to lower frequencies for interfacial water molecules. The first principles calculations also reveal that the residence and orientational dynamics of interfacial water are somewhat slower than those of the second layer or bulk-like molecules. However, the lateral diffusion and hydrogen bond relaxation of interfacial water molecules are found to occur at a somewhat faster rate than that of the bulk-like water molecules. The classical molecular dynamics simulations with tuned Lennard-Jones surface-water interaction are found to produce dynamical results that are qualitatively similar to those of ab initio molecular dynamics simulations.
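For concreteness, the kind of classical surface-water dispersion term being tuned is typically a 12-6 Lennard-Jones pair potential between the graphene carbons and the water oxygen; a minimal Python sketch is below, with the well depths and size parameter chosen purely for illustration rather than taken from the paper.

import numpy as np

def lj_12_6(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy (kJ/mol) at separation r (nm)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r = np.linspace(0.3, 0.8, 6)                 # carbon-oxygen separations in nm
for eps in (0.3, 0.4, 0.5):                  # candidate "tuned" well depths in kJ/mol
    print(f"epsilon = {eps:.1f} kJ/mol:", np.round(lj_12_6(r, eps, sigma=0.33), 3))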
Evolutionary game theory meets social science: is there a unifying rule for human cooperation?
Rosas, Alejandro
2010-05-21
Evolutionary game theory has shown that human cooperation thrives in different types of social interactions with a PD (prisoner's dilemma) structure. Models treat the cooperative strategies within the different frameworks as discrete entities and sometimes even as contenders. Whereas strong reciprocity was acclaimed as superior to classic reciprocity for its ability to defeat defectors in public goods games, recent experiments and simulations show that costly punishment fails to promote cooperation in the IR (indirect reciprocity) and DR (direct reciprocity) games, where classic reciprocity succeeds. My aim is to show that cooperative strategies across frameworks are capable of a unified treatment, for they are governed by a common underlying rule or norm. An analysis of the reputation and action rules that govern some representative cooperative strategies, both in models and in economic experiments, confirms that the different frameworks share a conditional action rule and several reputation rules. The common conditional rule contains an option between costly punishment and withholding benefits that provides alternative enforcement methods against defectors. Depending on the framework, individuals can switch to the appropriate strategy and method of enforcement. The stability of human cooperation looks more promising if one mechanism controls successful strategies across frameworks. Published by Elsevier Ltd.
Sum Rules, Classical and Quantum - A Pedagogical Approach
NASA Astrophysics Data System (ADS)
Karstens, William; Smith, David Y.
2014-03-01
Sum rules in the form of integrals over the response of a system to an external probe provide general analytical tools for both experiment and theory. For example, the celebrated f-sum rule gives a system's plasma frequency as an integral over the optical-dipole absorption spectrum regardless of the specific spectral distribution. Moreover, this rule underlies Smakula's equation for the number density of absorbers in a sample in terms of the area under their absorption bands. Commonly such rules are derived from quantum-mechanical commutation relations, but many are fundamentally classical (independent of ℏ) and so can be derived from more transparent mechanical models. We have exploited this to illustrate the fundamental role of inertia in the case of optical sum rules. Similar considerations apply to sum rules in many other branches of physics. Thus, the "attenuation integral theorems" of ac circuit theory reflect the "inertial" effect of Lenz's Law in inductors or the potential energy "storage" in capacitors. These considerations are closely related to the fact that the real and imaginary parts of a response function cannot be specified independently, a result that is encapsulated in the Kramers-Kronig relations. Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
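For reference, the f-sum rule mentioned above is commonly written (in Gaussian units) as an integral over the absorptive part of the dielectric function or, equivalently, over the real part of the optical conductivity; the forms below are the standard textbook statements rather than anything specific to this presentation:
\[
\int_{0}^{\infty} \omega\,\varepsilon_{2}(\omega)\,d\omega \;=\; \frac{\pi}{2}\,\omega_{p}^{2},
\qquad \omega_{p}^{2} \;=\; \frac{4\pi n e^{2}}{m},
\]
\[
\int_{0}^{\infty} \sigma_{1}(\omega)\,d\omega \;=\; \frac{\pi n e^{2}}{2m},
\]
where n is the electron number density and m the electron mass, so the integrated absorption fixes the plasma frequency regardless of how the spectral weight is distributed.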
Tuning the band structure of graphene nanoribbons through defect-interaction-driven edge patterning
NASA Astrophysics Data System (ADS)
Du, Lin; Nguyen, Tam N.; Gilman, Ari; Muniz, André R.; Maroudas, Dimitrios
2017-12-01
We report a systematic analysis of pore-edge interactions in graphene nanoribbons (GNRs) and their outcomes based on first-principles calculations and classical molecular-dynamics simulations. We find a strong attractive interaction between nanopores and GNR edges that drives the pores to migrate toward and coalesce with the GNR edges, which can be exploited to form GNR edge patterns that impact the GNR electronic band structure and tune the GNR band gap. Our analysis introduces a viable physical processing strategy for modifying GNR properties by combining defect engineering and thermal annealing.
Correlations and sum rules in a half-space for a quantum two-dimensional one-component plasma
NASA Astrophysics Data System (ADS)
Jancovici, B.; Šamaj, L.
2007-05-01
This paper is the continuation of a previous one (Šamaj and Jancovici, 2007 J. Stat. Mech. P02002); for a nearly classical quantum fluid in a half-space bounded by a plane hard wall (no image forces), we had generalized the Wigner-Kirkwood expansion of the equilibrium statistical quantities in powers of Planck's constant ℏ. As a model system for a more detailed study, we consider the quantum two-dimensional one-component plasma: a system of charged particles of one species, interacting through the logarithmic Coulomb potential in two dimensions, in a uniformly charged background of opposite sign, such that the total charge vanishes. The corresponding classical system is exactly solvable in a variety of geometries, including the present one of a half-plane, when βe² = 2, where β is the inverse temperature and e is the charge of a particle: all the classical n-body densities are known. In the present paper, we have calculated the expansions of the quantum density profile and truncated two-body density up to order ℏ² (instead of only to order ℏ as in the previous paper). These expansions involve the classical n-body densities up to n = 4; thus we obtain exact expressions for these quantum expansions in this special case. For the quantum one-component plasma, two sum rules involving the truncated two-body density (and, for one of them, the density profile) were derived a long time ago by using heuristic macroscopic arguments: one sum rule concerns the asymptotic form along the wall of the truncated two-body density; the other one concerns the dipole moment of the structure factor. In the two-dimensional case at βe² = 2, we now have explicit expressions up to order ℏ² for these two quantum densities; thus we can microscopically check the sum rules at this order. The checks are positive, reinforcing the idea that the sum rules are correct.
Electrical tuning of a quantum plasmonic resonance
Liu, Xiaoge; Kang, Ju-Hyung; Yuan, Hongtao; ...
2017-06-12
Surface plasmon (SP) excitations in metals facilitate confinement of light into deep-subwavelength volumes and can induce strong light–matter interaction. Generally, the SP resonances supported by noble metal nanostructures are explained well by classical models, at least until the nanostructure size is decreased to a few nanometres, approaching the Fermi wavelength λF of the electrons. Although there is a long history of reports on quantum size effects in the plasmonic response of nanometre-sized metal particles, systematic experimental studies have been hindered by inhomogeneous broadening in ensemble measurements, as well as imperfect control over size, shape, faceting, surface reconstructions, contamination, charging effects and surface roughness in single-particle measurements. In particular, observation of the quantum size effect in metallic films and its tuning with thickness has been challenging as they only confine carriers in one direction. Here, we show active tuning of quantum size effects in SP resonances supported by a 20-nm-thick metallic film of indium tin oxide (ITO), a plasmonic material serving as a low-carrier-density Drude metal. An ionic liquid (IL) is used to electrically gate and partially deplete the ITO layer. The experiment shows a controllable and reversible blue-shift in the SP resonance above a critical voltage. A quantum-mechanical model including the quantum size effect reproduces the experimental results, whereas a classical model only predicts a red shift.
NASA Astrophysics Data System (ADS)
Lin, Kyaw Kyaw; Soe, Aung Kyaw; Thu, Theint Theint
2008-10-01
This research work investigates a Self-Tuning Proportional Derivative (PD) type Fuzzy Logic Controller (STPDFLC) for a two-link robot system. The proposed scheme adjusts the output Scaling Factor (SF) on-line by fuzzy rules according to the current trend of the robot. The rule base for tuning the output scaling factor is defined on the error (e) and the change in error (de). The scheme is also based on the fact that the controller always tries to manipulate the process input. The rules are in the familiar if-then format. All membership functions for the controller inputs (e and de) and the controller output (UN) are defined on the common interval [-1,1], whereas the membership functions for the gain-updating factor (α) are defined on [0,1]. There are various methods to calculate the crisp output of the system; the Center of Gravity (COG) method is used in this application because of the better results it gives. The performance of the proposed STPDFLC is compared with that of the corresponding conventional PD-type Fuzzy Logic Controller (PDFLC). The proposed scheme shows a remarkably improved performance over its conventional counterpart, especially under parameter variation (payload). The two-link analysis results are obtained by simulation, and the simulations are implemented in MATLAB®.
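A compressed Python sketch of the mechanism described above, under stated simplifications: normalized e and de feed a small rule table both for the control action and for a gain-updating factor α in [0,1] that rescales the output scaling factor on-line. The membership functions, rule tables, and gains are invented for illustration, and singleton (weighted-average) defuzzification is used here instead of the centre-of-gravity method to keep the sketch short.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Linguistic labels for normalized inputs in [-1, 1] (parameters extend past the
# interval so the shoulder sets reach full membership at the endpoints).
LABELS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# Rule consequents (singletons): normalized control action and gain-updating factor.
U_RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
           ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
           ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0}
A_RULES = {("N", "N"): 1.0, ("N", "Z"): 0.6, ("N", "P"): 0.3,
           ("Z", "N"): 0.6, ("Z", "Z"): 0.1, ("Z", "P"): 0.6,
           ("P", "N"): 0.3, ("P", "Z"): 0.6, ("P", "P"): 1.0}

def infer(e_n, de_n, rules):
    """Weighted-average (singleton) inference over the 3x3 rule table."""
    num = den = 0.0
    for le, mf_e in LABELS.items():
        for lde, mf_de in LABELS.items():
            w = min(tri(e_n, *mf_e), tri(de_n, *mf_de))
            num += w * rules[(le, lde)]
            den += w
    return num / den if den > 0.0 else 0.0

def control(e, de, ge=0.5, gde=0.1, sf=2.0):
    """ge and gde normalize e and de into [-1, 1]; sf is the base output scaling factor."""
    e_n = float(np.clip(ge * e, -1.0, 1.0))
    de_n = float(np.clip(gde * de, -1.0, 1.0))
    u_norm = infer(e_n, de_n, U_RULES)    # normalized PD-type control action
    alpha = infer(e_n, de_n, A_RULES)     # on-line gain-updating factor
    return alpha * sf * u_norm

print(control(e=1.2, de=-0.4))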
Man not a machine: Models, minds, and mental labor, c.1980.
Stadler, Max
2017-01-01
This essay is concerned with the fate of the so-called "computer metaphor" of the mind in the age of mass computing. As such, it is concerned with the ways the mighty metaphor of the rational, rule-based, and serial "information processor," which dominated neurological and psychological theorizing in the early post-WW2 era, came apart during the 1970s and 1980s; and how it was, step by step, replaced by a set of model entities more closely in tune with the significance that was now discerned in certain kinds of "everyday practical action" as the ultimate manifestation of the human mind. By taking a closer look at the ailments and promises of the so-called postindustrial age and, more specifically, at the "hazards" associated with the introduction of computers into the workplace, it is shown how models and visions of the mind responded to this new state of affairs. It was in this context (the transformations of mental labor, c.1980), my argument goes, that the minds of men and women revealed themselves to be not so much like computing machines, as the "classic" computer metaphor of the mind, which had birthed the "cognitive revolution" of the 1950s and 1960s, once had it; they were positively unlike them. Instead of "rules" or "symbol manipulation," the minds of computer-equipped brainworkers thus evoked a different set of metaphors: at stake in postindustrial cognition, as this essay argues, was something "parallel," "tacit," and "embodied and embedded." © 2017 Elsevier B.V. All rights reserved.
Widely tunable semiconductor lasers with three interferometric arms.
Su, Guan-Lin; Wu, Ming C
2017-09-04
We present a comprehensive study of a new three-branch widely tunable semiconductor laser based on a self-imaging, lossless multi-mode interference (MMI) coupler. We have developed a general theoretical framework that is applicable to all types of interferometric lasers. Our analysis showed that the three-branch laser offers high side-mode suppression ratios (SMSRs) while maintaining a wide tuning range and a low threshold modal gain for the lasing mode. We also present design rules for tuning over the dense wavelength division multiplexing grid across the C-band.
One ring to rule them all: tuning bacteria collective motion via geometric confinement
NASA Astrophysics Data System (ADS)
Giomi, Luca
2016-08-01
Suspensions of swimming bacteria are known to self-organize into turbulent-like flows for sufficiently high density and nutrients concentration. This spectacular example of collective behavior, on which the survival of the colony itself is believed to rely, appears however impossible to control. In a recent experimental and computational study, Wioland et al (2016 New J. Phys. 18 075002) have demonstrated that the collective motion of B. subtilis can be in fact selectively tuned by confining the system into a ring-shaped channel.
Classical and quantum theories of proton disorder in hexagonal water ice
NASA Astrophysics Data System (ADS)
Benton, Owen; Sikora, Olga; Shannon, Nic
2016-03-01
It has been known since the pioneering work of Bernal, Fowler, and Pauling that common, hexagonal (Ih) water ice is the archetype of a frustrated material: a proton-bonded network in which protons satisfy strong local constraints (the "ice rules") but do not order. While this proton disorder is well established, there is now a growing body of evidence that quantum effects may also have a role to play in the physics of ice at low temperatures. In this paper, we use a combination of numerical and analytic techniques to explore the nature of proton correlations in both classical and quantum models of ice Ih. In the case of classical ice Ih, we find that the ice rules have two, distinct, consequences for scattering experiments: singular "pinch points," reflecting a zero-divergence condition on the uniform polarization of the crystal, and broad, asymmetric features, coming from its staggered polarization. In the case of the quantum model, we find that the collective quantum tunneling of groups of protons can convert states obeying the ice rules into a quantum liquid, whose excitations are birefringent, emergent photons. We make explicit predictions for scattering experiments on both classical and quantum ice Ih, and show how the quantum theory can explain the "wings" of incoherent inelastic scattering observed in recent neutron scattering experiments [Bove et al., Phys. Rev. Lett. 103, 165901 (2009), 10.1103/PhysRevLett.103.165901]. These results raise the intriguing possibility that the protons in ice Ih could form a quantum liquid at low temperatures, in which protons are not merely disordered, but continually fluctuate between different configurations obeying the ice rules.
Communicative signals support abstract rule learning by 7-month-old infants
Ferguson, Brock; Lew-Williams, Casey
2016-01-01
The mechanisms underlying the discovery of abstract rules like those found in natural language may be evolutionarily tuned to speech, according to previous research. When infants hear speech sounds, they can learn rules that govern their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do so. Here we show that infants’ rule learning is not tied to speech per se, but is instead enhanced more broadly by communicative signals. In two experiments, infants succeeded in learning and generalizing rules from tones that were introduced as if they could be used to communicate. In two control experiments, infants failed to learn the very same rules when familiarized to tones outside of a communicative exchange. These results reveal that infants’ attention to social agents and communication catalyzes a fundamental achievement of human learning. PMID:27150270
ERIC Educational Resources Information Center
Miller, Ann M.
A lexical representational analysis of Classical Arabic is proposed that captures a generalization that McCarthy's (1979, 1981) autosegmental analysis misses, namely that idiosyncratic characteristics of the derivational binyanim in Arabic are lexical, not morphological. This analysis captures that generalization by treating all the idiosyncrasies…
Tuning algorithms for fractional order internal model controllers for time delay processes
NASA Astrophysics Data System (ADS)
Muresan, Cristina I.; Dutta, Abhishek; Dulf, Eva H.; Pinar, Zehra; Maxim, Anca; Ionescu, Clara M.
2016-03-01
This paper presents two tuning algorithms for fractional-order internal model control (IMC) controllers for time delay processes. The two tuning algorithms are based on two specific closed-loop control configurations: the IMC control structure and the Smith predictor structure. In the latter, the equivalency between IMC and Smith predictor control structures is used to tune a fractional-order IMC controller as the primary controller of the Smith predictor structure. Fractional-order IMC controllers are designed in both cases in order to enhance the closed-loop performance and robustness of classical integer order IMC controllers. The tuning procedures are exemplified for both single-input-single-output as well as multivariable processes, described by first-order and second-order transfer functions with time delays. Different numerical examples are provided, including a general multivariable time delay process. Integer order IMC controllers are designed in each case, as well as fractional-order IMC controllers. The simulation results show that the proposed fractional-order IMC controller ensures an increased robustness to modelling uncertainties. Experimental results are also provided, for the design of a multivariable fractional-order IMC controller in a Smith predictor structure for a quadruple-tank system.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with the native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al.
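A minimal sketch of the idea behind partition tuning for skewed keys, under stated simplifications: count key frequencies, split heavy keys into virtual partitions, then pack the virtual partitions onto reducers so the load is roughly even. This illustrates the concept only; it is not the PTSH algorithm from the paper, and the key names and thresholds are invented.

from collections import Counter
import heapq

def tune_partitions(pairs, n_reducers, split_threshold):
    counts = Counter(k for k, _ in pairs)
    # Stage 1: build virtual partitions, splitting heavy keys into chunks.
    virtual = []
    for key, cnt in counts.items():
        chunks = max(1, -(-cnt // split_threshold))      # ceiling division
        for i in range(chunks):
            virtual.append((cnt / chunks, f"{key}#{i}"))
    # Stage 2: greedily assign the largest virtual partitions to the
    # currently least-loaded reducer (a min-heap of (load, reducer_id)).
    heap = [(0.0, r) for r in range(n_reducers)]
    heapq.heapify(heap)
    assignment = {}
    for load, vkey in sorted(virtual, reverse=True):
        total, reducer = heapq.heappop(heap)
        assignment[vkey] = reducer
        heapq.heappush(heap, (total + load, reducer))
    return assignment, sorted(heap)

pairs = [("flu", 1)] * 900 + [("cold", 1)] * 60 + [("cough", 1)] * 40
assignment, loads = tune_partitions(pairs, n_reducers=4, split_threshold=250)
print(loads)   # per-reducer load after tuning is close to balanced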
Using Literature to Teach the Rule of Law
ERIC Educational Resources Information Center
Landman, James
2008-01-01
This article looks at three examples of children's and young adult literature that offer entertaining, accessible, and at times provocative, explorations of the rule of law in very different settings. Lewis Carroll's classic, "Alice's Adventures in Wonderland," finds Alice confronting a host of procedural irregularities and abuses of power within…
Luria-Delbrück Revisited: The Classic Experiment Doesn't Rule out Lamarckian Evolution
NASA Astrophysics Data System (ADS)
Holmes, Caroline; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya
We re-examine data from the classic 1943 Luria-Delbruck fluctuation experiment. This experiment is often credited with establishing that phage resistance in bacteria is acquired through a Darwinian mechanism (natural selection on standing variation) rather than through a Lamarckian mechanism (environmentally induced mutations). We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms. Analysis of the combined model was not performed in the 1943 paper, nor was the possibility that neither model fits the experiment considered. Using Bayesian model selection, we find that: 1) all datasets from the paper favor Darwinian over purely Lamarckian evolution, 2) some of the datasets are unable to distinguish between the purely Darwinian and the combined models, and 3) the other datasets cannot be explained by any of the models considered. In summary, the classic experiment cannot rule out Lamarckian contributions to the evolutionary dynamics. This work was supported by National Science Foundation Grant 1410978, NIH training Grant 5R90DA033462, and James S. McDonnell Foundation Grant 220020321.
A theoretical framework for constructing elastic/plastic constitutive models of triaxial tests
NASA Astrophysics Data System (ADS)
Collins, Ian F.; Hilder, Tamsyn
2002-11-01
Modern ideas of thermomechanics are used to develop families of models describing the elastic/plastic behaviour of cohesionless soils deforming under triaxial conditions. Once the form of the free energy and dissipation potential functions has been specified, the corresponding yield loci, flow rules, isotropic and kinematic hardening rules as well as the elasticity law are deduced in a systematic manner. The families contain the classical linear frictional (Coulomb type) models and the classical critical state models as special cases. The generalized models discussed here include non-associated flow rules, shear as well as volumetric hardening, anisotropic responses and rotational yield loci. The various parameters needed to describe the models can be interpreted in terms of the ratio of the plastic work that is dissipated to that which is stored. Non-associated behaviour is found to occur whenever this division between dissipated and stored work is not equal. Micro-level interpretations of stored plastic work are discussed. The models automatically satisfy the laws of thermodynamics, and there is no need to invoke any stability postulates. Some classical forms of the peak-strength/dilatancy relationship are established theoretically. Some representative drained and undrained paths are computed.
Extraction of decision rules via imprecise probabilities
NASA Astrophysics Data System (ADS)
Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.
2017-05-01
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a parametric mathematical model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.
Effect of Temporal Relationships in Associative Rule Mining for Web Log Data
Mohd Khairudin, Nazli; Mustapha, Aida
2014-01-01
The advent of web-based applications and services has created diverse and voluminous web log data stored in web servers, proxy servers, client machines, or organizational databases. This paper investigates the effect of a temporal attribute in relational rule mining for web log data. We incorporated the characteristics of time in the rule mining process and analysed the effect of various temporal parameters. The rules generated from temporal relational rule mining are then compared against the rules generated from classical rule mining approaches such as the Apriori and FP-Growth algorithms. The results showed that by incorporating the temporal attribute, the number of rules generated is smaller but comparable in terms of quality. PMID:24587757
Explicit analytical tuning rules for digital PID controllers via the magnitude optimum criterion.
Papadopoulos, Konstantinos G; Yadav, Praveen K; Margaris, Nikolaos I
2017-09-01
Analytical tuning rules for digital type-I PID controllers are presented regardless of the process complexity. This explicit solution allows control engineers 1) to make an accurate examination of the effect of the controller's sampling time on the control loop's performance in both the time and frequency domains, 2) to decide when the control action has to be I or PI and when the derivative, D, term has to be added or omitted, and 3) to apply this control action to a series of stable benchmark processes regardless of their complexity. These advantages are considered critical in industrial applications, since 1) the choice of the digital controller's sampling time is most often based on heuristics and past criteria, 2) there is little a priori knowledge of the controlled process, making the choice of the type of controller a trial-and-error exercise, and 3) model parameters often change with the control loop's operating point, making the retuning of the controller's parameters a challenging issue. The basis of the proposed control law is the principle of PID tuning via the Magnitude Optimum criterion. The final control law involves the controller's sampling time T s within the explicit solution for the controller's parameters. Finally, the potential of the proposed method is justified by comparing its performance with conventional PID tuning when controlling the same process. Further investigation regarding the choice of the controller's sampling time T s is also presented and useful conclusions for control engineers are derived. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
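For orientation, a generic discrete-time PID law in which the sampling time Ts appears explicitly is sketched below; the gains are placeholders and not the Magnitude Optimum tuning rules derived in the paper.

```python
# Generic digital PID sketch: Ts enters both the integral and derivative terms.
class DigitalPID:
    def __init__(self, kp: float, ki: float, kd: float, Ts: float):
        self.kp, self.ki, self.kd, self.Ts = kp, ki, kd, Ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        e = setpoint - measurement
        self.integral += e * self.Ts                   # rectangular integration
        derivative = (e - self.prev_error) / self.Ts   # backward difference
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative

# Example with a 10 ms sampling time; a smaller Ts sharpens the derivative
# estimate but makes it more sensitive to measurement noise.
pid = DigitalPID(kp=1.2, ki=0.8, kd=0.05, Ts=0.01)
u = pid.update(setpoint=1.0, measurement=0.7)
```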
Comparative study of a learning fuzzy PID controller and a self-tuning controller.
Kazemian, H B
2001-01-01
The self-organising fuzzy controller is an extension of the rule-based fuzzy controller with an additional learning capability. The self-organising fuzzy (SOF) controller is used as a master controller to readjust conventional PID gains at the actuator level during system operation, copying the experience of a human operator. The application of the self-organising fuzzy PID (SOF-PID) controller to a 2-link non-linear revolute-joint robot arm is studied using path-tracking trajectories at the setpoint. For the purpose of comparison, the same experiments are repeated using the self-tuning controller subject to the same data supplied at the setpoint. For the path-tracking experiments, the output trajectories of the SOF-PID controller followed the specified path more closely and smoothly than those of the self-tuning controller.
General self-tuning solutions and no-go theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Förste, Stefan; Kim, Jihn E.; Lee, Hyun Min, E-mail: forste@th.physik.uni-bonn.de, E-mail: jihnekim@gmail.com, E-mail: hyun.min.lee@kias.re.kr
2013-03-01
We consider brane world models with one extra dimension. In the bulk there is, in addition to gravity, a three-form gauge potential or equivalently a scalar (by generalisation of electric-magnetic duality). We find classical solutions for which the 4d effective cosmological constant is adjusted by the choice of integration constants. No-go theorems for such a self-tuning mechanism are circumvented by unorthodox Lagrangians for the three-form and the scalar, respectively. It is argued that the corresponding effective 4d theory always includes tachyonic Kaluza-Klein excitations or ghosts. Known no-go theorems are extended to a general class of models with unorthodox Lagrangians.
Experimental studies of tuned particle damper: Design and characterization
NASA Astrophysics Data System (ADS)
Zhang, Kai; Xi, Yanhui; Chen, Tianning; Ma, Zhihao
2018-01-01
To better suppress structural vibration in micro-vibration and harsh environments, a new type of damper, the tuned particle damper (TPD), was designed by combining the advantages of the classical dynamic vibration absorber (DVA) and the particle damper (PD). An equivalent theoretical model was established to describe the dynamic behavior of a cantilever system treated with TPD. By means of a series of sine sweep tests, the dynamic characteristics of TPD under different excitation intensities were explored, and the damping performance of TPD was investigated by comparison with a classical DVA and a PD of the same mass ratio. Experimental results show that with increasing excitation intensity TPD exhibits two different dynamic characteristics in succession, i.e., PD-like and DVA-like. TPD shows a wider suppression frequency band than the classical DVA and better practicability than the PD in the micro-vibration environment. Moreover, to characterize the dynamic behavior of TPD, a simple evaluation of the equivalent dynamic mass and equivalent dynamic damping of the cantilever system treated with TPD was performed by fitting the experimental data to the presented theoretical model. Finally, based on the rheology behaviors of damping particles reported in previous research, an approximate phase diagram showing the motion states of damping particles in TPD was employed to analyze the dynamic characteristics of TPD, and several motion states of damping particles in TPD were observed via a high-speed camera.
The Multiphoton Interaction of Lambda Model Atom and Two-Mode Fields
NASA Technical Reports Server (NTRS)
Liu, Tang-Kun
1996-01-01
The system of two-mode fields interacting with an atom by means of multiphoton processes is addressed, and the non-classical statistical properties of the interacting two-mode fields are discussed. Through mathematical calculation, some new rules for the non-classical effects of two-mode fields, which evolve with time, are established.
Classical Pragmatism on Mind and Rationality
ERIC Educational Resources Information Center
Maattanen, Pentti
2005-01-01
One of the major changes in twentieth century philosophy was the so-called linguistic turn, in which natural and formal languages became central subjects of study. This meant that theories of meaning became mostly about linguistic meaning, thinking was now analyzed in terms of symbol manipulation, and rules of classical logic formed the nucleus of…
Adaptive synchronized switch damping on an inductor: a self-tuning switching law
NASA Astrophysics Data System (ADS)
Kelley, Christopher R.; Kauffman, Jeffrey L.
2017-03-01
Synchronized switch damping (SSD) techniques exploit low-power switching between passive circuits connected to piezoelectric material to reduce structural vibration. In the classical implementation of SSD, the piezoelectric material remains in an open circuit for the majority of the vibration cycle and switches briefly to a shunt circuit at every displacement extremum. Recent research indicates that this switch timing is only optimal for excitation exactly at resonance and points to more general optimal switch criteria based on the phase of the displacement and the system parameters. This work proposes a self-tuning approach that implements the more general optimal switch timing for synchronized switch damping on an inductor (SSDI) without needing any knowledge of the system parameters. The law involves a gradient-based search optimization that is robust to noise and uncertainties in the system. Testing of a physical implementation confirms this law successfully adapts to the frequency and parameters of the system. Overall, the adaptive SSDI controller provides better off-resonance steady-state vibration reduction than classical SSDI while matching performance at resonance.
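A rough sketch of the kind of gradient-based self-tuning loop described above: perturb the switch phase relative to the displacement extremum, measure a vibration metric, and step downhill using a finite-difference gradient estimate. The measurement function below is a placeholder for the physical system, not an interface from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_rms_displacement(switch_phase: float) -> float:
    # Placeholder: in hardware this would run the SSDI circuit at the given
    # switch phase and return the measured steady-state RMS response.
    return (switch_phase - 0.3) ** 2 + rng.normal(0.0, 0.001)

def adapt_switch_phase(phase=0.0, step=0.05, delta=0.02, n_iter=100):
    for _ in range(n_iter):
        grad = (measure_rms_displacement(phase + delta)
                - measure_rms_displacement(phase - delta)) / (2.0 * delta)
        phase -= step * grad          # move toward lower vibration
    return phase

print(adapt_switch_phase())           # settles near the illustrative optimum, 0.3
```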
Smart Kirigami open honeycombs in shape changing actuation and dynamics
NASA Astrophysics Data System (ADS)
Neville, R. M.; Scarpa, F.; Leng, J.
2017-04-01
Kirigami is the ancient Japanese art of cutting and folding paper, widespread in Asia since the 17th century. Kirigami offers a broader set of geometries and topologies than classical fold/valley Origami, because of the presence of cuts. Moreover, Kirigami can be readily applied to a large set of composite and smart 2D materials, and can be used in up-scaled production with modular molding. We describe the manufacturing and testing of a topology of Kirigami cellular structures defined as Open Honeycombs. Open Honeycombs (OHs) can assume a fully closed shape and resemble classical hexagonal centrosymmetric honeycombs, or can vary their morphology by tuning the opening angle and rotational stiffness of the folds. We show the performance of experimental PEEK OHs with cable actuation and morphing shape characteristics, and the analogous morphing behavior of styrene SMPs under combined mechanical and thermal loading. We also show the dynamic (modal analysis) behavior of OH configurations parameterized against their geometric characteristics, and the controllable modal density characteristics that one could obtain by tuning the topology and folding properties.
Kasturirangan, Rajesh
2008-01-01
Philosophers as well as lay people often think of beliefs as psychological states with dubious epistemic properties. Beliefs are conceptualized as unregulated conceptual structures, for the most part hypothetical and often fanciful or deluded. Thinking and reasoning, on the other hand, are seen as rational activities regulated by rules and governed by norms. Computational modeling of the mind has focused on rule-governed behavior, ultimately trying to reduce it to rules of logic. What if thinking is less like reasoning and more like believing? I argue that the classical model of thought as rational is mistaken and that thinking is fundamentally constituted by believing. This new approach forces us to re-evaluate classical epistemic concepts like "truth", "justification" etc. Furthermore, if thinking is believing, then it is not clear how thoughts can be modeled computationally. We need new mathematical ideas to model thought, ideas that are quite different from traditional logic-based mathematical structures.
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large; whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. The classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it is for constant dispersions of the loop-gain because the GM is based on frequency-domain analysis, which is applicable only for LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well-known that the LTI system stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
Fuzzy Logic-Based Audio Pattern Recognition
NASA Astrophysics Data System (ADS)
Malcangi, M.
2008-11-01
Audio and audio-pattern recognition is becoming one of the most important technologies to automatically control embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to rapidly and economically model such applications. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost and deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules tuned manually or automatically by a self-learning process.
Improving Students' Memory for Musical Compositions and Their Composers: Mneme that Tune!
ERIC Educational Resources Information Center
Carney, Russell N.; Levin, Joel R.
2007-01-01
Students enrolled in music appreciation and music history courses may find it difficult to remember composers' names and the titles of their compositions--particularly when retrieval is prompted by corresponding classical music themes. We sought to develop and validate a mnemonic approach in which musical themes were first recoded as more concrete…
Jusyte, Aiste; Pfister, Roland; Mayer, Sarah V; Schwarz, Katharina A; Wirth, Robert; Kunde, Wilfried; Schönenberg, Michael
2017-09-01
Classic findings on conformity and obedience document a strong and automatic drive of human agents to follow any type of rule or social norm. At the same time, most individuals tend to violate rules on occasion, and such deliberate rule violations have recently been shown to yield cognitive conflict for the rule-breaker. These findings indicate a persistent difficulty in suppressing the rule representation, even though rule violations were studied in a controlled experimental setting with neither gains nor possible sanctions for violators. In the current study, we validate these findings by showing that convicted criminals, i.e., individuals with a history of habitual and severe forms of rule violations, can free themselves from such cognitive conflict in a similarly controlled laboratory task. These findings support an emerging view that aims at understanding rule violations from the perspective of the violating agent rather than from the perspective of an outside observer.
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.; Culas, Donald E.
1991-01-01
Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide a possible optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how much we believe each rule is constructed. From this, the idea of how much a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.
Rule-based navigation control design for autonomous flight
NASA Astrophysics Data System (ADS)
Contreras, Hugo; Bassi, Danilo
2008-04-01
This article depicts a navigation control system design that is based on a set of rules in order to follow a desired trajectory. The full control of the aircraft considered here comprises a low-level stability control loop, based on classic PID controllers, and higher-level navigation whose main job is to exercise lateral (course) and altitude control, trying to follow a desired trajectory. The rules and PID gains were adjusted systematically according to the results of flight simulation. In spite of its simplicity, the rule-based navigation control proved to be robust, even under large perturbations such as crosswinds.
NASA Astrophysics Data System (ADS)
Sun, Yun-Ping; Ju, Jiun-Yan; Liang, Yen-Chu
2008-12-01
Since unmanned aerial vehicles (UAVs) bring forth many innovative applications in scientific, civilian, and military fields, the development of UAVs is growing rapidly every year. The on-board autopilot that reliably performs attitude and guidance control is a vital part of out-of-sight flights. However, the control law in an autopilot is designed according to a simplified plant model in which the dynamics of the real hardware are usually not taken into consideration. It is necessary to develop a test-bed including real servos for real-time control experiments on prototype autopilots, the so-called hardware-in-the-loop (HIL) simulation. In this paper, on the basis of the graphical application software LabVIEW, the real-time HIL simulation system is realized efficiently by the virtual instrumentation approach. The proportional-integral-derivative (PID) controller in the autopilot for the pitch angle control loop is experimentally determined by the classical Ziegler-Nichols tuning rule and exhibits good transient and steady-state response in real-time HIL simulation. From the results, the differences between numerical simulation and real-time HIL simulation are also clearly presented. The effectiveness of HIL simulation for UAV autopilot design is confirmed.
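For reference, the textbook closed-loop (ultimate-gain) Ziegler-Nichols rules mentioned above can be written as a small helper; the coefficients are the standard tabulated values, and the example numbers are illustrative rather than taken from the HIL experiments.

```python
def ziegler_nichols(Ku: float, Tu: float, controller: str = "PID") -> dict:
    """Classical Z-N closed-loop rules: Ku is the ultimate gain and Tu the
    ultimate period measured at sustained oscillation."""
    if controller == "P":
        return {"Kp": 0.5 * Ku}
    if controller == "PI":
        return {"Kp": 0.45 * Ku, "Ti": Tu / 1.2}
    if controller == "PID":
        return {"Kp": 0.6 * Ku, "Ti": 0.5 * Tu, "Td": 0.125 * Tu}
    raise ValueError("controller must be 'P', 'PI', or 'PID'")

# Example: Ku = 2.0, Tu = 1.5 s -> Kp = 1.2, Ti = 0.75 s, Td = 0.1875 s
print(ziegler_nichols(2.0, 1.5))
```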
Grid occupancy estimation for environment perception based on belief functions and PCR6
NASA Astrophysics Data System (ADS)
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far, based on belief function modeling and the classical Dempster's rule of combination. The grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role in the security (obstacle avoidance) of the next generations of terrestrial vehicles, as well as in future autonomous navigation systems. In a grid map, the occupancy of each cell, representing a small piece of the surrounding area of the robot, must first be estimated from sensor measurements (typically LIDAR, or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors, mainly because the latter offers an interesting management of uncertainties when the quality of the available information is low and when the sources of information appear conflicting. To improve the performance of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
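A minimal sketch of the two rules being compared, for a single grid cell on the frame {O (occupied), F (free)} with "OF" denoting ignorance; for two sources the PCR5 and PCR6 redistributions coincide. The mass values are illustrative, not taken from the paper's LIDAR model.

```python
def conjunctive(m1, m2):
    """Conjunctive combination; conflicting mass m1(O)m2(F)+m1(F)m2(O) is kept aside."""
    inter = {("O", "O"): "O", ("F", "F"): "F", ("OF", "OF"): "OF",
             ("O", "OF"): "O", ("OF", "O"): "O",
             ("F", "OF"): "F", ("OF", "F"): "F",
             ("O", "F"): None, ("F", "O"): None}
    out = {"O": 0.0, "F": 0.0, "OF": 0.0, "conflict": 0.0}
    for a, va in m1.items():
        for b, vb in m2.items():
            tgt = inter[(a, b)]
            out[tgt if tgt is not None else "conflict"] += va * vb
    return out

def dempster(m1, m2):
    out = conjunctive(m1, m2)
    k = out.pop("conflict")                 # total conflict, renormalised away
    return {a: v / (1.0 - k) for a, v in out.items()}

def pcr6(m1, m2):
    out = conjunctive(m1, m2)
    out.pop("conflict")
    # Redistribute each partial conflict m1(X)m2(Y), X inter Y = empty,
    # back to X and Y proportionally to the masses that produced it.
    for x, y in [("O", "F"), ("F", "O")]:
        c = m1[x] * m2[y]
        if c > 0.0:
            out[x] += c * m1[x] / (m1[x] + m2[y])
            out[y] += c * m2[y] / (m1[x] + m2[y])
    return out

m_lidar = {"O": 0.6, "F": 0.3, "OF": 0.1}   # illustrative sensor evidence
m_cell = {"O": 0.2, "F": 0.7, "OF": 0.1}    # illustrative previous cell state
print("Dempster:", dempster(m_lidar, m_cell))
print("PCR6:    ", pcr6(m_lidar, m_cell))
```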
A quantum theory account of order effects and conjunction fallacies in political judgments.
Yearsley, James M; Trueblood, Jennifer S
2017-09-06
Are our everyday judgments about the world around us normative? Decades of research in the judgment and decision-making literature suggest the answer is no. If people's judgments do not follow normative rules, then what rules, if any, do they follow? Quantum probability theory is a promising new approach to modeling human behavior that is at odds with normative, classical rules. One key advantage of using quantum theory is that it explains multiple types of judgment errors using the same basic machinery, unifying what have previously been thought of as disparate phenomena. In this article, we test predictions from quantum theory related to the co-occurrence of two classic judgment phenomena, order effects and conjunction fallacies, using judgments about real-world events (related to the U.S. presidential primaries). We also show that our data obey two a priori, parameter-free constraints derived from quantum theory. Further, we examine two factors that moderate the effects: cognitive thinking style (as measured by the Cognitive Reflection Test) and political ideology.
Jin, Rui; Lin, Zhi-jian; Xue, Chun-miao; Zhang, Bing
2013-09-01
Knowledge Discovery in Databases is gaining attention and raising new hopes for traditional Chinese medicine (TCM) researchers. It is a useful tool for understanding and deciphering TCM theories. Aiming for a better understanding of Chinese herbal property theory (CHPT), this paper performed improved association rule learning to analyze semistructured text in the book entitled Shennong's Classic of Materia Medica. The text was first annotated and transformed into well-structured multidimensional data. Subsequently, an Apriori algorithm was employed to produce association rules after a sensitivity analysis of the parameters. From the 120 confirmed resulting rules that describe the intrinsic relationships between herbal property (qi, flavor and their combinations) and herbal efficacy, two novel fundamental principles underlying CHPT were acquired and further elucidated: (1) the many-to-one mapping of herbal efficacy to herbal property; (2) the nonrandom overlap between the related efficacy of qi and flavor. This work provides innovative knowledge about CHPT, which should be helpful for its modern research.
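The support/confidence machinery behind such an analysis can be illustrated with a tiny, hand-rolled rule miner (brute-force enumeration in the spirit of Apriori, without the level-wise pruning); the property/efficacy transactions below are placeholders, not data from Shennong's Classic of Materia Medica.

```python
from itertools import combinations

transactions = [
    {"qi:warm", "flavor:pungent", "eff:dispel_cold"},
    {"qi:warm", "flavor:sweet", "eff:tonify"},
    {"qi:cold", "flavor:bitter", "eff:clear_heat"},
    {"qi:warm", "flavor:pungent", "eff:dispel_cold"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.4, 0.8
items = sorted({i for t in transactions for i in t})
frequent = [frozenset(c) for k in (1, 2, 3)
            for c in combinations(items, k) if support(set(c)) >= min_support]

for itemset in frequent:
    for k in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, k)):
            conf = support(itemset) / support(antecedent)
            if conf >= min_confidence:
                print(set(antecedent), "=>", set(itemset - antecedent),
                      f"support={support(itemset):.2f} confidence={conf:.2f}")
```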
Neural activity in superior parietal cortex during rule-based visual-motor transformations.
Hawkins, Kara M; Sayegh, Patricia; Yan, Xiaogang; Crawford, J Douglas; Sergio, Lauren E
2013-03-01
Cognition allows for the use of different rule-based sensorimotor strategies, but the neural underpinnings of such strategies are poorly understood. The purpose of this study was to compare neural activity in the superior parietal lobule during a standard (direct interaction) reaching task, with two nonstandard (gaze and reach spatially incongruent) reaching tasks requiring the integration of rule-based information. Specifically, these nonstandard tasks involved dissociating the planes of reach and vision or rotating visual feedback by 180°. Single unit activity, gaze, and reach trajectories were recorded from two female Macaca mulattas. In all three conditions, we observed a temporal discharge pattern at the population level reflecting early reach planning and on-line reach monitoring. In the plane-dissociated task, we found a significant overall attenuation in the discharge rate of cells from deep recording sites, relative to standard reaching. We also found that cells modulated by reach direction tended to be significantly tuned either during the standard or the plane-dissociated task but rarely during both. In the standard versus feedback reversal comparison, we observed some cells that shifted their preferred direction by 180° between conditions, reflecting maintenance of directional tuning with respect to the reach goal. Our findings suggest that the superior parietal lobule plays an important role in processing information about the nonstandard nature of a task, which, through reciprocal connections with precentral motor areas, contributes to the accurate transformation of incongruent sensory inputs into an appropriate motor output. Such processing is crucial for the integration of rule-based information into a motor act.
Valencia-Palomo, G; Rossiter, J A
2011-01-01
This paper makes two key contributions. First, it tackles the issue of the availability of constrained predictive control for low-level control loops. Hence, it describes how the constrained control algorithm is embedded in an industrial programmable logic controller (PLC) using the IEC 61131-3 programming standard. Second, there is a definition and implementation of a novel auto-tuned predictive controller; the key novelty is that the modelling is based on relatively crude but pragmatic plant information. Laboratory tests were carried out on two bench-scale laboratory systems to prove the effectiveness of the combined algorithm and hardware solution. For completeness, the results are compared with a commercial proportional-integral-derivative (PID) controller (also embedded in the PLC) using the most up-to-date auto-tuning rules. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC
NASA Astrophysics Data System (ADS)
Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.
2018-03-01
This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration in an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS) and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and reconstructed in MATLAB/Simulink to evaluate the process response. Additionally, the process stability, capability and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS displays the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Also, based on the statistical analysis, DS emerges as the best tuning method as it exhibits the highest process stability and capability.
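The error-integral criteria quoted above are straightforward to compute from a sampled error signal; a minimal sketch follows, where the step response is an illustrative first-order curve rather than the CSTR model from the paper.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
y = 1.0 - np.exp(-t / 1.5)           # illustrative closed-loop step response
e = 1.0 - y                          # error relative to a unit setpoint

ise = np.trapz(e**2, t)              # Integral Square Error
itae = np.trapz(t * np.abs(e), t)    # Integral Time Absolute Error
print(f"ISE = {ise:.3f}, ITAE = {itae:.3f}")
```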
NASA Astrophysics Data System (ADS)
Shida, R. Y.; Gater, W.
2007-10-01
The website YouTube was created in 2005 and has rapidly become one of the most popular entertainment websites on the internet. It is riding the online video wave today like few other online companies and is currently more popular than the video sections of either Yahoo or Google. iTunes, a digital media application created by Apple in 2001 with which one can download and play music and videos, has had similar success. There is little doubt that they both represent important communication channels in a world heavily influenced by online media, especially among teenagers and young adults. As science communicators we can use this direct route to a younger audience to our advantage. This article aims to give a taste of these applications with a few selected examples, demonstrating that both YouTube and iTunes are excellent tools to teach and inspire the general public.
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Nomura, Hiroyoshi; Wakami, Noboru
1991-01-01
Whereas conventional fuzzy reasoning is associated with tuning problems, namely the lack of systematic membership function and inference rule design, a neural network driven fuzzy reasoning (NDF) capable of determining membership functions by neural networks is formulated. In the antecedent parts of the neural network driven fuzzy reasoning, the optimum membership function is determined by a neural network, while in the consequent parts, the amount of control for each rule is determined by other neural networks. By introducing the algorithm of neural network driven fuzzy reasoning, inference rules for making a pendulum stand up from its lowest suspended point are determined to verify the usefulness of the algorithm.
NASA Astrophysics Data System (ADS)
Yukino, Ryoji; Sahoo, Pankaj K.; Sharma, Jaiyam; Takamura, Tsukasa; Joseph, Joby; Sandhu, Adarsh
2017-01-01
We describe wavelength tuning in a one-dimensional (1D) silicon nitride nano-grating guided mode resonance (GMR) structure under a conical mounting configuration of the device. When the GMR structure is rotated about the axis perpendicular to the surface of the device (azimuthal rotation) for light incident at oblique angles, the conditions for resonance differ from those of conventional GMR structures under classical mounting. These resonance conditions enable tuning of the GMR peak position over a wide range of wavelengths. We experimentally demonstrate tuning over a range of 375 nm, between 500 nm and 875 nm. We present a theoretical model to explain the resonance conditions observed in our experiments and predict the peak positions, which show excellent agreement with experiments. Our method for tuning wavelengths is simpler and more efficient than conventional procedures that employ variations in the design parameters of structures or conical mounting of two-dimensional (2D) GMR structures, and it enables a single 1D GMR device to function as a high-efficiency wavelength filter over a wide range of wavelengths. We expect tunable filters based on this technique to be applicable in a wide range of fields including astronomy and biomedical imaging.
NASA Astrophysics Data System (ADS)
Tofighi, Elham; Mahdizadeh, Amin
2016-09-01
This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of the wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller for weights computed with the PSO-based technique.
Deviation from Standard Inflationary Cosmology and the Problems in Ekpyrosis
NASA Astrophysics Data System (ADS)
Tseng, Chien-Yao
There are two competing models of our universe right now. One is Big Bang with inflation cosmology. The other is the cyclic model with an ekpyrotic phase in each cycle. This paper is divided into two main parts according to these two models. In the first part, we quantify the potentially observable effects of a small violation of translational invariance during inflation, as characterized by the presence of a preferred point, line, or plane. We explore the imprint such a violation would leave on the cosmic microwave background anisotropy, and provide explicit formulas for the expected amplitudes ⟨a_{ℓm} a*_{ℓ'm'}⟩ of the spherical-harmonic coefficients. We then provide a model and study the two-point correlation of a massless scalar (the inflaton) when the stress tensor contains the energy density from an infinitely long straight cosmic string in addition to a cosmological constant. Finally, we discuss whether inflation can be reconciled with Liouville's theorem as far as the fine-tuning problem is concerned. In the second part, we find several problems in the cyclic/ekpyrotic cosmology. First of all, the quantum to classical transition would not happen during an ekpyrotic phase even for superhorizon modes, and therefore the fluctuations cannot be interpreted as classical. This implies that the prediction of a scale-free power spectrum in the ekpyrotic/cyclic universe model requires more inspection. Secondly, we find that the usual mechanism to solve fine-tuning problems is not compatible with an eternal universe which contains infinitely many cycles in both directions of time. Therefore, all fine-tuning problems, including the flatness problem, still ask for an explanation in any generic cyclic model.
Kuhlmann, Levin; Vidyasagar, Trichur R.
2011-01-01
Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper that specifies the Hilbert space as L²(Rⁿ); the Heisenberg rule [pᵢ,qⱼ] = -iℏδᵢⱼ with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related with quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we exhibit also some limits of the correspondence principle). Moreover spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
Grimm, Lisa R; Maddox, W Todd
2013-11-01
Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.
Meiran, Nachshon; Hsieh, Shulan; Chang, Chi-Chih
2011-09-01
A major challenge for task switching is maintaining a balance between high task readiness and effectively ignoring irrelevant task rules. This calls for finely tuned inhibition that targets only the source of interference without adversely influencing other task-related representations. The authors show that irrelevant task rules generating response conflict are inhibited, causing their inefficient execution on the next trial (indicating the presence of competitor rule suppression [CRS]; Meiran, Hsieh, & Dimov, Journal of Experimental Psychology: Learning, Memory and Cognition, 36, 992-1002, 2010). To determine whether CRS influences task rules, rather than target stimuli or responses, the authors focused on the processing of the task cue before the target stimulus was presented and before the response could be chosen. As was predicted, CRS was found in the event-related potentials in two time windows during task cue processing. It was also found in three time windows after target presentation. Source localization analyses suggest the involvement of the right dorsal prefrontal cortex in all five time windows.
NASA Astrophysics Data System (ADS)
Sha, Wei E. I.; Zhu, Hugh L.; Chen, Luzhou; Chew, Weng Cho; Choy, Wallace C. H.
2015-02-01
It is well known that the transport paths of photocarriers (electrons and holes) before they are collected by electrodes strongly affect bulk recombination and thus the electrical properties of solar cells, including open-circuit voltage and fill factor. For boosting device performance, a general design rule, tailored to an arbitrary electron-to-hole mobility ratio, is proposed to decide the transport paths of photocarriers. Due to a unique ability to localize and concentrate light, plasmonics is explored to manipulate photocarrier transport through spatially redistributing light absorption at the active layer of devices. Without changing the active materials, we conceive a plasmonic-electrical concept, which tunes the electrical properties of solar cells via the plasmon-modified optical field distribution, to realize the design rule. Incorporating spectrally and spatially configurable metallic nanostructures, thin-film solar cells are theoretically modelled and experimentally fabricated to validate the design rule and verify the plasmonically tunable electrical properties. The general design rule, together with the plasmonic-electrical effect, contributes to the evolution of emerging photovoltaics.
NASA Astrophysics Data System (ADS)
Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava
2012-09-01
A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller are approximated in this paper vis-à-vis an LQR tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases the analog/digital realization of a FOPID controller via its integer order counterpart, while preserving the advantages of the fractional order controller. It is shown that a decrease in the integro-differential orders of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards regions of greater damping, which gives a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which reduces the existing PID controller's effort significantly compared with a single stage LQR based pole placement method at a desired closed loop damping and frequency.
Tuning fuzzy PD and PI controllers using reinforcement learning.
Boubertakh, Hamid; Tadjine, Mohamed; Glorennec, Pierre-Yves; Labiod, Salim
2010-10-01
In this paper, we propose new auto-tuning fuzzy PD and PI controllers using the reinforcement Q-learning (QL) algorithm for SISO (single-input single-output) and TITO (two-input two-output) systems. We first investigate the design parameters and settings of a typical class of fuzzy PD (FPD) and fuzzy PI (FPI) controllers: zero-order Takagi-Sugeno controllers with equidistant triangular membership functions for the inputs, equidistant singleton membership functions for the output, Larsen's implication method, and the average sum defuzzification method. Secondly, the analytical structures of these typical fuzzy PD and PI controllers are compared to their classical counterpart PD and PI controllers. Finally, the effectiveness of the proposed method is proven through simulation examples. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
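A generic tabular Q-learning update of the kind used for such on-line controller tuning is sketched below; the state/action discretisation, reward and hyperparameters are placeholders rather than the paper's formulation.

```python
import numpy as np

n_states, n_actions = 10, 5            # e.g. discretised error / gain-adjustment bins
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def choose_action(state: int) -> int:
    if rng.random() < epsilon:          # explore
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))     # exploit

def q_update(state: int, action: int, reward: float, next_state: int) -> None:
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

# One illustrative interaction step: observe a state, act, receive a reward
# that penalises tracking error, then update the table.
s = 3
a = choose_action(s)
q_update(s, a, reward=-0.2, next_state=4)
```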
Dynamic tuning of plasmon resonance in the visible using graphene.
Balci, Sinan; Balci, Osman; Kakenov, Nurbek; Atar, Fatih Bilge; Kocabas, Coskun
2016-03-15
We report active electrical tuning of plasmon resonance of silver nanoprisms (Ag NPs) in the visible spectrum. Ag NPs are placed in close proximity to graphene which leads to additional tunable loss for the plasmon resonance. The ionic gating of graphene modifies its Fermi level from 0.2 to 1 eV, which then affects the absorption of graphene due to Pauli blocking. Plasmon resonance frequency and linewidth of Ag NPs can be reversibly shifted by 20 and 35 meV, respectively. The coupled graphene-Ag NPs system can be classically described by a damped harmonic oscillator model. Atomic layer deposition allows for controlling the graphene-Ag NP separation with atomic-level precision to optimize coupling between them.
Quantum friction on monoatomic layers and its classical analog
NASA Astrophysics Data System (ADS)
Maslovski, Stanislav I.; Silveirinha, Mário G.
2013-07-01
We consider the effect of quantum friction at zero absolute temperature resulting from polaritonic interactions in closely positioned two-dimensional arrays of polarizable atoms (e.g., graphene sheets) or thin dielectric sheets modeled as such arrays. The arrays move one with respect to another with a nonrelativistic velocity v≪c. We confirm that quantum friction is inevitably related to material dispersion, and that such friction vanishes in nondispersive media. In addition, we consider a classical analog of the quantum friction which allows us to establish a link between the phenomena of quantum friction and classical parametric generation. In particular, we demonstrate how the quasiparticle generation rate typically obtained from the quantum Fermi golden rule can be calculated classically.
NASA Technical Reports Server (NTRS)
Tsue, Yasuhiko
1994-01-01
A general framework for a time-dependent variational approach in terms of squeezed coherent states is constructed with the aim of describing quantal systems by means of classical mechanics, including higher order quantal effects, with the aid of the canonicity conditions developed in the time-dependent Hartree-Fock theory. The Maslov phase occurring in the semi-classical quantization rule is investigated in this framework. In the limit of a semi-classical approximation in this approach, it is shown that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.
NASA Astrophysics Data System (ADS)
Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael
1999-03-01
Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) any arbitrary parameter values they chose, or (3) a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.
Design technology co-optimization for 14/10nm metal1 double patterning layer
NASA Astrophysics Data System (ADS)
Duan, Yingli; Su, Xiaojing; Chen, Ying; Su, Yajuan; Shao, Feng; Zhang, Recco; Lei, Junjiang; Wei, Yayi
2016-03-01
Design and technology co-optimization (DTCO) can satisfy the needs of the design, generate robust design rules, and avoid unfriendly patterns at the early stage of design to ensure a high level of manufacturability of the product within the technical capability of the present process. The DTCO methodology in this paper mainly includes design rule translation, layout analysis, model validation, hotspot classification and design rule optimization. The correlation of DTCO and double patterning (DPT) can optimize the related design rules and generate a friendlier layout which meets the requirements of the 14/10nm technology node. The experiment demonstrates the methodology of DPT-compliant DTCO applied to a metal1 layer of the 14/10nm node. The DTCO workflow proposed in our work is an efficient solution for optimizing the design rules for the 14/10nm tech node metal1 layer. The paper also discusses and verifies how to tune the design rules for U-shape and L-shape structures in a DPT-aware metal layer.
Intelligent Distributed Systems
2015-10-23
periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations which are also known as
Rosetta Phase II: Measuring and Interpreting Cultural Differences in Cognition
2008-07-31
approaches are used to capture culture. First, anthropology and psychiatry adopt research methods that focus on specific groups or individuals... Classical anthropology provides information about behaviors, customs, social roles, and social rules based on extended and intense observation of single... This training goes beyond rules and procedures so that military personnel can see events through the eyes of adversaries or host nationals. They must
Born’s rule as signature of a superclassical current algebra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fussy, S.; Mesa Pascasio, J.; Institute for Atomic and Subatomic Physics, Vienna University of Technology, Operng. 9, 1040 Vienna
2014-04-15
We present a new tool for calculating the interference patterns and particle trajectories of a double-, three- and N-slit system on the basis of an emergent sub-quantum theory developed by our group throughout the last years. The quantum itself is considered as an emergent system representing an off-equilibrium steady state oscillation maintained by a constant throughput of energy provided by a classical zero-point energy field. We introduce the concept of a “relational causality” which allows for evaluating structural interdependences of different systems levels, i.e. in our case of the relations between partial and total probability density currents, respectively. Combined with the application of 21st century classical physics like, e.g., modern nonequilibrium thermodynamics, we thus arrive at a “superclassical” theory. Within this framework, the proposed current algebra directly leads to a new formulation of the guiding equation which is equivalent to the original one of the de Broglie–Bohm theory. By proving the absence of third order interferences in three-path systems it is shown that Born’s rule is a natural consequence of our theory. Considering the series of one-, double-, or, generally, of N-slit systems, with the first appearance of an interference term in the double slit case, we can explain the violation of Sorkin’s first order sum rule, just as the validity of all higher order sum rules. Moreover, the Talbot patterns and Talbot distance for an arbitrary N-slit device can be reproduced exactly by our model without any quantum physics tool. -- Highlights: •Calculating the interference patterns and particle trajectories of a double-, three- and N-slit system. •Deriving a new formulation of the guiding equation equivalent to the de Broglie–Bohm one. •Proving the absence of third order interferences and thus explaining Born’s rule. •Explaining the violation of Sorkin’s order sum rules. •Classical simulation of Talbot patterns and exact reproduction of Talbot distance for N slits.
Stott, Clifford; Drury, John
2016-04-01
This article explores the origins and ideology of classical crowd psychology, a body of theory reflected in contemporary popularised understandings such as those of the 2011 English 'riots'. It argues that during the nineteenth century, the crowd came to symbolise a fear of 'mass society' and that 'classical' crowd psychology was a product of these fears. Classical crowd psychology pathologised, reified and decontextualised the crowd, offering the ruling elites a perceived opportunity to control it. We contend that classical theory misrepresents crowd psychology and survives in contemporary understanding because it is ideological. We conclude by discussing how classical theory has been supplanted in academic contexts by an identity-based crowd psychology that restores the meaning to crowd action, replaces it in its social context and in so doing transforms theoretical understanding of 'riots' and the nature of the self. © The Author(s) 2016.
High-frequency sum rules for classical one-component plasma in a magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genga, R.O.
A high-frequency sum-rule expansion is derived for all elements of a classical plasma dielectric tensor in the presence of an external magnetic field. Ω₄¹³ is found to be the only coefficient of ω⁻⁴ that has no correlational and finite-radiation-temperature contributions. The finite-radiation-temperature effect results in an upward renormalization of the frequencies of the modes; it also leads to either reduction of the negative correlational effect on the positive thermal dispersion or, together with correlation, enhancement of the positive thermal dispersion for finite k, depending on the direction of propagation. Further, for the extraordinary mode, the finite-radiation-temperature effect increases the positive refractive dispersion for finite k.
NASA Astrophysics Data System (ADS)
Inoue, Makoto
2017-12-01
Some new formulae of the canonical correlation functions for the one dimensional quantum transverse Ising model are found by the ST-transformation method using a Morita's sum rule and its extensions for the two dimensional classical Ising model. As a consequence we obtain a time-independent term of the dynamical correlation functions. Differences of quantum version and classical version of these formulae are also discussed.
The Fine-Tuning of the Universe for Intelligent Life
NASA Astrophysics Data System (ADS)
Barnes, L. A.
2012-06-01
The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger's recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger's claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger's book, or read independently.
Generalized mutual information and Tsirelson's bound
NASA Astrophysics Data System (ADS)
Wakakuwa, Eyuri; Murao, Mio
2014-12-01
We introduce a generalization of the quantum mutual information between a classical system and a quantum system into the mutual information between a classical system and a system described by general probabilistic theories. We apply this generalized mutual information (GMI) to a derivation of Tsirelson's bound from information causality, and prove that Tsirelson's bound can be derived from the chain rule of the GMI. By using the GMI, we formulate the "no-supersignalling condition" (NSS), that the assistance of correlations does not enhance the capability of classical communication. We prove that NSS is never violated in any no-signalling theory.
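For reference, the bound in question is the quantum limit on the CHSH combination of correlators; the derivation via the chain rule of the GMI is in the paper, but the statement itself is standard:

```latex
S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b'),
\qquad
|S| \le 2 \ \text{(classical / local hidden variables)},
\qquad
|S| \le 2\sqrt{2} \ \text{(Tsirelson's bound)}.
```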
Generalized mutual information and Tsirelson's bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakakuwa, Eyuri; Murao, Mio
2014-12-04
We introduce a generalization of the quantum mutual information between a classical system and a quantum system into the mutual information between a classical system and a system described by general probabilistic theories. We apply this generalized mutual information (GMI) to a derivation of Tsirelson's bound from information causality, and prove that Tsirelson's bound can be derived from the chain rule of the GMI. By using the GMI, we formulate the 'no-supersignalling condition' (NSS), that the assistance of correlations does not enhance the capability of classical communication. We prove that NSS is never violated in any no-signalling theory.
A reinforcement learning-based architecture for fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate-reasoning-based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine-tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning, it learns faster than systems that train networks from scratch. The approach is applied to a cart-pole balancing system.
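To make the core idea concrete, here is a minimal sketch of TD-driven fine-tuning of fuzzy rule consequents. This is not the ARIC architecture itself (which couples an action evaluation network with an action selection network); the membership functions, rules, and gains below are illustrative placeholders.

```python
import numpy as np

# Triangular membership functions over a scalar error signal (illustrative).
centers = np.array([-1.0, 0.0, 1.0])       # rules: "negative", "zero", "positive"
width = 1.0
consequents = np.array([1.0, 0.0, -1.0])   # initial rule outputs (expert guess)

def memberships(error):
    return np.clip(1.0 - np.abs(error - centers) / width, 0.0, None)

def fuzzy_action(error):
    """Weighted-average defuzzification of the active rules."""
    mu = memberships(error)
    return float(mu @ consequents / (mu.sum() + 1e-12))

def td_update(error, reward, value, next_value, gamma=0.95, lr=0.1):
    """Nudge rule consequents in proportion to the TD error and each rule's activation."""
    global consequents
    td_error = reward + gamma * next_value - value
    consequents += lr * td_error * memberships(error)
    return td_error
```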
Measuring uncertainty by extracting fuzzy rules using rough sets
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.
1991-01-01
Despite the advancements in the computer industry in the past 30 years, there is still one major deficiency: computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as ways to solve this problem. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the idea of how much a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.
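The rough-set step behind "certain" versus "possible" rules can be sketched compactly: equivalence classes induced by the chosen attributes give a lower approximation (source of certain rules) and an upper approximation (source of possible rules). The toy decision table and attribute names below are invented for illustration only:

```python
from collections import defaultdict

# Toy decision table: object id -> attribute values; 'fault' is the target concept.
objects = {
    1: {"temp": "high", "noise": "yes", "fault": True},
    2: {"temp": "high", "noise": "yes", "fault": False},
    3: {"temp": "low",  "noise": "no",  "fault": False},
    4: {"temp": "high", "noise": "no",  "fault": True},
}
attributes = ("temp", "noise")
target = {oid for oid, row in objects.items() if row["fault"]}

# Indiscernibility: objects with identical attribute values fall into one class.
classes = defaultdict(set)
for oid, row in objects.items():
    classes[tuple(row[a] for a in attributes)].add(oid)

lower = set().union(*([c for c in classes.values() if c <= target] or [set()]))
upper = set().union(*([c for c in classes.values() if c & target] or [set()]))

print("lower approximation (basis of certain rules):", lower)    # {4}
print("upper approximation (basis of possible rules):", upper)   # {1, 2, 4}
```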
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
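For orientation, here is a compact MLEM iteration with a simple surrogate stopping heuristic (relative change in the Poisson log-likelihood). The paper's actual statistical stopping criterion is more elaborate, and the system matrix below is a small random stand-in rather than a modeled PET geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((60, 30))          # toy system matrix (detector bins x image voxels)
x_true = rng.random(30) * 10
y = rng.poisson(A @ x_true)       # simulated emission data

def mlem(A, y, n_iter=200, tol=1e-5):
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    prev_ll = -np.inf
    for k in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens      # MLEM multiplicative update
        ll = np.sum(y * np.log(np.maximum(A @ x, 1e-12)) - A @ x)  # Poisson log-likelihood
        if abs(ll - prev_ll) < tol * abs(ll):                   # heuristic surrogate stop
            return x, k + 1
        prev_ll = ll
    return x, n_iter

x_hat, stopped_at = mlem(A, y)
print("stopped after", stopped_at, "iterations")
```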
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this problem, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) is proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real-data studies were conducted to examine the performance of the proposed RLDR in terms of misclassification error rates. The computational results showed that the proposed RLDR performs better than the classical LDR and is comparable with the existing robust LDR.
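The classical linear discriminant rule being robustified is simple to state; plugging MVV location and scatter estimates in place of the sample mean and pooled covariance (not implemented here) would give the proposed RLDR. A minimal two-group sketch with equal priors:

```python
import numpy as np

def fit_ldr(X1, X2):
    """Classical two-group linear discriminant rule from sample means and pooled covariance."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    n1, n2 = len(X1), len(X2)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance
    w = np.linalg.solve(Sp, m1 - m2)
    c = 0.5 * w @ (m1 + m2)                                 # midpoint cutoff (equal priors)
    return lambda x: 1 if x @ w > c else 2

rng = np.random.default_rng(0)
X1 = rng.multivariate_normal([0, 0], np.eye(2), 100)
X2 = rng.multivariate_normal([2, 2], np.eye(2), 100)
classify = fit_ldr(X1, X2)
print(classify(np.array([0.2, -0.1])), classify(np.array([2.1, 1.8])))  # expected: 1, 2
```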
Zurek, Wojciech Hubert
2018-07-13
The emergence of the classical world from the quantum substrate of our Universe is a long-standing conundrum. In this paper, I describe three insights into the transition from quantum to classical that are based on the recognition of the role of the environment. I begin with the derivation of preferred sets of states that help to define what exists: our everyday classical reality. They emerge as a result of the breaking of the unitary symmetry of the Hilbert space which happens when the unitarity of quantum evolutions encounters nonlinearities inherent in the process of amplification, of replicating information. This derivation is accomplished without the usual tools of decoherence, and accounts for the appearance of quantum jumps and the emergence of preferred pointer states consistent with those obtained via environment-induced superselection, or einselection. The pointer states obtained in this way determine what can happen (define events) without appealing to Born's Rule for probabilities. Therefore, p_k = |ψ_k|² can now be deduced from the entanglement-assisted invariance, or envariance, a symmetry of entangled quantum states. With probabilities at hand, one also gains new insights into the foundations of quantum statistical physics. Moreover, one can now analyse the information flows responsible for decoherence. These information flows explain how the perception of objective classical reality arises from the quantum substrate: the effective amplification that they represent accounts for the objective existence of the einselected states of macroscopic quantum systems through the redundancy of pointer state records in their environment, through quantum Darwinism. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).
Velocity control of servo systems using an integral retarded algorithm.
Ramírez, Adrián; Garrido, Rubén; Mondié, Sabine
2015-09-01
This paper presents a design technique for the delay-based controller called Integral Retarded (IR), and its applications to velocity control of servo systems. Using spectral analysis, the technique yields a tuning strategy for the IR by assigning a triple real dominant root for the closed-loop system. This result ultimately guarantees a desired exponential decay rate σ_d while giving the IR tuning as an explicit function of σ_d and the system parameters. The intentional introduction of delay allows using noisy velocity measurements without additional filtering. The structure of the controller is also able to avoid velocity measurements by using position information instead. The IR is compared to a classical PI controller, both tested in a laboratory prototype. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Kume, Satoshi; Lee, Young-Ho; Nakatsuji, Masatoshi; Teraoka, Yoshiaki; Yamaguchi, Keisuke; Goto, Yuji; Inui, Takashi
2014-03-18
The hydrophobic cavity of lipocalin-type prostaglandin D synthase (L-PGDS) has been suggested to accommodate various lipophilic ligands through hydrophobic effects, but its energetic origin remains unknown. We characterized 18 buffer-independent binding systems between human L-PGDS and lipophilic ligands using isothermal titration calorimetry. Although the classical hydrophobic effect was mostly detected, all complex formations were driven by favorable enthalpic gains. Gibbs energy changes strongly correlated with the number of hydrogen bond acceptors of ligand. Thus, the broad binding capability of L-PGDS for ligands should be viewed as hydrophilic interactions delicately tuned by enthalpy-entropy compensation using combined effects of hydrophilic and hydrophobic interactions. Copyright © 2014 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
Qian, Linyong; Zhang, Dawei; Dai, Bo; Wang, Qi; Huang, Yuanshen; Zhuang, Songlin
2015-07-13
A novel bandwidth-tunable notch filter is proposed based on the guided-mode resonance effect. The notch is created by the superposition of the spectral responses of two guided-mode resonant filters. The compact bandwidth-tuning capability is realized by taking advantage of the polarization sensitivity of the spectral response of a one-dimensional classical guided-mode resonance filter, and by using a liquid crystal polarization rotator for precise and simple polarization control. The operation principle and the design of the device are presented, and we demonstrate the device experimentally. The central wavelength is fixed at 766.4 nm with a relatively symmetric profile. The full width at half maximum bandwidth can be tuned from 8.6 nm to 18.2 nm by controlling the voltage applied to the electrically driven polarization rotator.
Magnetic and electrostatic confinement of plasma with tuning of electrostatic field
Rostoker, Norman (Irvine, CA); Binderbauer, Michl (Irvine, CA); Qerushi, Artan (Irvine, CA); Tahsiri, Hooshang (Irvine, CA)
2008-10-21
A system and method for containing plasma and forming a Field Reversed Configuration (FRC) magnetic topology are described in which plasma ions are contained magnetically in stable, non-adiabatic orbits in the FRC. Further, the electrons are contained electrostatically in a deep energy well, created by tuning an externally applied magnetic field. The simultaneous electrostatic confinement of electrons and magnetic confinement of ions avoids anomalous transport and facilitates classical containment of both electrons and ions. In this configuration, ions and electrons may have adequate density and temperature so that upon collisions they are fused together by nuclear force, thus releasing fusion energy. Moreover, the fusion fuel plasmas that can be used with the present confinement system and method are not limited to neutronic fuels only, but also advantageously include advanced fuels.
Magnetic and electrostatic confinement of plasma with tuning of electrostatic field
Rostoker, Norman; Binderbauer, Michl; Qerushi, Artan; Tahsiri, Hooshang
2006-10-10
A system and method for containing plasma and forming a Field Reversed Configuration (FRC) magnetic topology are described in which plasma ions are contained magnetically in stable, non-adiabatic orbits in the FRC. Further, the electrons are contained electrostatically in a deep energy well, created by tuning an externally applied magnetic field. The simultaneous electrostatic confinement of electrons and magnetic confinement of ions avoids anomalous transport and facilitates classical containment of both electrons and ions. In this configuration, ions and electrons may have adequate density and temperature so that upon collisions they are fused together by nuclear force, thus releasing fusion energy. Moreover, the fusion fuel plasmas that can be used with the present confinement system and method are not limited to neutronic fuels only, but also advantageously include advanced fuels.
Magnetic and electrostatic confinement of plasma with tuning of electrostatic field
Rostoker, Norman; Binderbauer, Michl; Qerushi, Artan; Tahsiri, Hooshang
2006-03-21
A system and method for containing plasma and forming a Field Reversed Configuration (FRC) magnetic topology are described in which plasma ions are contained magnetically in stable, non-adiabatic orbits in the FRC. Further, the electrons are contained electrostatically in a deep energy well, created by tuning an externally applied magnetic field. The simultaneous electrostatic confinement of electrons and magnetic confinement of ions avoids anomalous transport and facilitates classical containment of both electrons and ions. In this configuration, ions and electrons may have adequate density and temperature so that upon collisions they are fused together by nuclear force, thus releasing fusion energy. Moreover, the fusion fuel plasmas that can be used with the present confinement system and method are not limited to neutronic fuels only, but also advantageously include advanced fuels.
Tuning the Photon Statistics of a Strongly Coupled Nanophotonic System
NASA Astrophysics Data System (ADS)
Dory, C.; Fischer, K. A.; Müller, K.; Lagoudakis, K. G.; Sarmiento, T.; Rundquist, A.; Zhang, J. L.; Kelaita, Y.; Sapra, N. V.; Vučković, J.
Strongly coupled quantum-dot-photonic-crystal cavity systems provide a nonlinear ladder of hybridized light-matter states, which are a promising platform for non-classical light generation. The transmission of light through such systems enables light generation with tunable photon counting statistics. By detuning the frequencies of quantum emitter and cavity, we can tune the transmission of light to strongly enhance either single- or two-photon emission processes. However, these nanophotonic systems show a strongly dissipative nature and classical light obscures any quantum character of the emission. In this work, we utilize a self-homodyne interference technique combined with frequency filtering to overcome this obstacle. This allows us to generate emission with a strong two-photon component in the multi-photon regime, where we measure a second-order coherence value of g^(2)[0] = 1.490 ± 0.034. We propose rate equation models that capture the dominant processes of emission both in the single- and multi-photon regimes and support them by quantum-optical simulations that fully capture the frequency filtering of emission from our solid-state system. Finally, we simulate a third-order coherence value of g^(3)[0] = 0.872 ± 0.021. Army Research Office (ARO) (W911NF1310309), National Science Foundation (1503759), Stanford Graduate Fellowship.
Quantum criticality among entangled spin chains
Blanc, N.; Trinh, J.; Dong, L.; ...
2017-12-11
Here, an important challenge in magnetism is the unambiguous identification of a quantum spin liquid, of potential importance for quantum computing. In such a material, the magnetic spins should be fluctuating in the quantum regime, instead of frozen in a classical long-range-ordered state. While this requirement dictates systems wherein classical order is suppressed by a frustrating lattice, an ideal system would allow tuning of quantum fluctuations by an external parameter. Conventional three-dimensional antiferromagnets can be tuned through a quantum critical point (a region of highly fluctuating spins) by an applied magnetic field. Such systems suffer from a weak specific-heat peak at the quantum critical point, with little entropy available for quantum fluctuations. Here we study a different type of antiferromagnet, comprised of weakly coupled antiferromagnetic spin-1/2 chains as realized in the molecular salt K2PbCu(NO2)6. Across the temperature–magnetic field boundary between three-dimensional order and the paramagnetic phase, the specific heat exhibits a large peak whose magnitude approaches a value suggestive of the spinon Sommerfeld coefficient of isolated quantum spin chains. These results demonstrate an alternative approach for producing quantum matter via a magnetic-field-induced shift of entropy from one-dimensional short-range order to a three-dimensional quantum critical point.
Quantum criticality among entangled spin chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanc, N.; Trinh, J.; Dong, L.
Here, an important challenge in magnetism is the unambiguous identification of a quantum spin liquid, of potential importance for quantum computing. In such a material, the magnetic spins should be fluctuating in the quantum regime, instead of frozen in a classical long-range-ordered state. While this requirement dictates systems wherein classical order is suppressed by a frustrating lattice, an ideal system would allow tuning of quantum fluctuations by an external parameter. Conventional three-dimensional antiferromagnets can be tuned through a quantum critical point (a region of highly fluctuating spins) by an applied magnetic field. Such systems suffer from a weak specific-heat peak at the quantum critical point, with little entropy available for quantum fluctuations. Here we study a different type of antiferromagnet, comprised of weakly coupled antiferromagnetic spin-1/2 chains as realized in the molecular salt K2PbCu(NO2)6. Across the temperature–magnetic field boundary between three-dimensional order and the paramagnetic phase, the specific heat exhibits a large peak whose magnitude approaches a value suggestive of the spinon Sommerfeld coefficient of isolated quantum spin chains. These results demonstrate an alternative approach for producing quantum matter via a magnetic-field-induced shift of entropy from one-dimensional short-range order to a three-dimensional quantum critical point.
Quantum criticality among entangled spin chains
NASA Astrophysics Data System (ADS)
Blanc, N.; Trinh, J.; Dong, L.; Bai, X.; Aczel, A. A.; Mourigal, M.; Balents, L.; Siegrist, T.; Ramirez, A. P.
2018-03-01
An important challenge in magnetism is the unambiguous identification of a quantum spin liquid [1,2], of potential importance for quantum computing. In such a material, the magnetic spins should be fluctuating in the quantum regime, instead of frozen in a classical long-range-ordered state. While this requirement dictates systems [3,4] wherein classical order is suppressed by a frustrating lattice [5], an ideal system would allow tuning of quantum fluctuations by an external parameter. Conventional three-dimensional antiferromagnets can be tuned through a quantum critical point (a region of highly fluctuating spins) by an applied magnetic field. Such systems suffer from a weak specific-heat peak at the quantum critical point, with little entropy available for quantum fluctuations [6]. Here we study a different type of antiferromagnet, comprised of weakly coupled antiferromagnetic spin-1/2 chains as realized in the molecular salt K2PbCu(NO2)6. Across the temperature-magnetic field boundary between three-dimensional order and the paramagnetic phase, the specific heat exhibits a large peak whose magnitude approaches a value suggestive of the spinon Sommerfeld coefficient of isolated quantum spin chains. These results demonstrate an alternative approach for producing quantum matter via a magnetic-field-induced shift of entropy from one-dimensional short-range order to a three-dimensional quantum critical point.
NASA Astrophysics Data System (ADS)
Li, Guolong; Xiao, Xiao; Li, Yong; Wang, Xiaoguang
2018-02-01
We propose a multimode optomechanical system to realize tunable optical nonreciprocity, with the prospect of making an optical diode for information technology. The proposed model consists of two subsystems, each of which contains two optical cavities, injected with a classical field and a quantum signal via a 50:50 beam splitter, and a mechanical oscillator coupled to both cavities via optomechanical coupling. Meanwhile, the two cavities and the oscillator in one subsystem are coupled to their corresponding cavities and oscillator in the other subsystem. Our scheme yields nonreciprocal effects at different frequencies with opposite directions, but each effective linear optomechanical coupling can be controlled by an independent classical single-frequency pump. With this setup one is able to apply quantum states with large fluctuations, which extends the scope of applicable quantum states, and to exploit the independence of the paths. Moreover, the optimal frequencies for nonreciprocal effects can be controlled by adjusting the relevant parameters. We also exhibit path switching between the two directions, from a mechanical input to two optical output channels, via tuning of the signal frequency. In experiments, the considered scheme can be tuned to reach small damping rates of the oscillators relative to those of the cavities, which is more practical and requires less power than previous schemes.
Zeinalian, Mehrdad; Eshaghi, Mehdi; Sharbafchi, Mohammad Reza; Naji, Homayoun; Marandi, Sayed Mohammad Masoud; Asgary, Sedigheh
2016-01-01
Cancer is one of the three main causes of mortality in most human communities, and its prevalence is increasing. A significant part of the health budget in all countries has been allocated to treating cancer, which is incurable in many cases. This has shifted the global health focus toward cancer prevention. Many cancer-related risk factors have been identified, for which preventive recommendations have been offered by international organizations such as the World Health Organization. Some of the most important of these risk factors are smoking and alcohol consumption, a hypercaloric and low-fiber diet, obesity, inactivity, environmental and industrial pollution, some viral infections, and hereditary factors. A careful review of Iranian-Islamic traditional medicine (IITM) sources shows that preventive rules, known as the six essential rules (Sitteh-e-Zarurieah), are abundant and cover all identified cancer-related risk factors. These preventive rules are: air (Hava), body movement and repose, sleep and wakefulness, food and drink, evacuation and retention, and mental movement and repose (A'raz-e-Nafsani). The associated risk factors in classic medicine are: smoking and air pollution, sedentary life, sleep disturbance, improper nutrition and alcohol, chronic constipation, and psychoneurotic stresses. Moreover, these rules are comprehensive enough to include many other harmful health-related factors whose roles have been confirmed in the occurrence of diseases other than cancer. Apparently, cancer prevention in Iran would be more successful if the six essential rules of IITM were promoted among the population and health policy makers. PMID:27141280
Audiologist-driven versus patient-driven fine tuning of hearing instruments.
Boymans, Monique; Dreschler, Wouter A
2012-03-01
Two methods of fine-tuning the initial settings of hearing aids were compared: an audiologist-driven approach using real-ear measurements, and a patient-driven fine-tuning approach using feedback from real-life situations. The patient-driven fine tuning was conducted with the Amplifit(®) II system using audio-video clips. The audiologist-driven fine tuning was based on the NAL-NL1 prescription rule. Both settings were compared using the same hearing aids in two 6-week trial periods following a randomized blinded cross-over design. After each trial period, the settings were evaluated by insertion-gain measurements. Performance was evaluated by speech tests in quiet, in noise, and with time-reversed speech, presented at 0° and with spatially separated sound sources. Subjective results were evaluated using extensive questionnaires and audiovisual video clips. A total of 73 participants were included. On average, higher gain values were found for the audiologist-driven settings than for the patient-driven settings, especially at 1000 and 2000 Hz. Better objective performance was obtained for the audiologist-driven settings for speech perception in quiet and with time-reversed speech. This was supported by better scores on a number of subjective judgments and in the subjective ratings of video clips. The perception of loud sounds was rated higher than with the patient-driven settings, but the overall preference was in favor of the audiologist-driven settings for 67% of the participants.
Herguedas, Beatriz; Krieger, James; Greger, Ingo H
2013-01-01
The composition and spatial arrangement of subunits in ion channels are essential for their function. Diverse stoichiometries are possible in a multitude of channels. These depend upon cell type-specific subunit expression, which can be tuned in a developmentally regulated manner and in response to activity, on subunit stability in the endoplasmic reticulum, intersubunit affinities, and potentially subunit diffusion within the ER membrane. In concert, these parameters shape channel biogenesis and ultimately tune cellular response properties. The complexity of this assembly process is particularly well illustrated by the ionotropic glutamate receptors, the main mediators of excitatory neurotransmission. These tetrameric cation channels predominantly assemble into heteromers, which is "obligatory" for some iGluR subfamilies but "preferential" for others. Here, we discuss recent insights into the rules underlying these two pathways, the role of individual domains based on an ever increasing list of crystal structures, and how these assembly parameters tune assembly across diverse receptor oligomers. Copyright © 2013 Elsevier Inc. All rights reserved.
Increasing complexity with quantum physics.
Anders, Janet; Wiesner, Karoline
2011-09-01
We argue that complex systems science and the rules of quantum physics are intricately related. We discuss a range of quantum phenomena, such as cryptography, computation and quantum phases, and the rules responsible for their complexity. We identify correlations as a central concept connecting quantum information and complex systems science. We present two examples for the power of correlations: using quantum resources to simulate the correlations of a stochastic process and to implement a classically impossible computational task.
Plugged in or Tuned out? Students and Educators Reflect on Current Technology in the Classroom
ERIC Educational Resources Information Center
Blankenship, Mark
2010-01-01
Not long ago, television sets in classrooms were a novel idea and most professors could remember the days of slide rules, mimeograph machines, and manual typewriters. Now, even DVD players can seem quaint. But education has been altered by the breathtakingly rapid technological advancements of the last 10 years, and just like the rest of the…
Recurrent network dynamics reconciles visual motion segmentation and integration.
Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S
2017-09-12
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
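Below is a minimal rate-model sketch of the ring architecture described here: direction-tuned units with local excitation and a tunable level of global inhibition, driven by two motion components. All parameters and the input are illustrative placeholders, not fitted to MT data; whether the readout behaves more like vector averaging or winner-take-all depends on the excitation/inhibition balance, which is the knob the paper studies.

```python
import numpy as np

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)         # preferred directions

def simulate(inhibition, n_steps=400, dt=0.05):
    # Local excitation with a von Mises profile, minus uniform (global) inhibition.
    W = np.exp(np.cos(theta[:, None] - theta[None, :]) / 0.3)
    W = 1.5 * W / W.sum(axis=1, keepdims=True) - inhibition / N
    # Input: two motion components 90 degrees apart, one slightly stronger.
    I = 1.0 * np.exp(np.cos(theta - 0.0) / 0.2) + 0.8 * np.exp(np.cos(theta - np.pi / 2) / 0.2)
    r = np.zeros(N)
    for _ in range(n_steps):
        r += dt * (-r + np.maximum(W @ r + I, 0.0))           # rectified rate dynamics
    return r

for g in (0.5, 4.0):                                          # weak vs strong global inhibition
    r = simulate(g)
    readout = np.angle(np.sum(r * np.exp(1j * theta)))        # population-vector direction
    print(f"inhibition={g}: readout = {np.degrees(readout) % 360:.1f} deg")
```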
150-nm DR contact holes die-to-database inspection
NASA Astrophysics Data System (ADS)
Kuo, Shen C.; Wu, Clare; Eran, Yair; Staud, Wolfgang; Hemar, Shirley; Lindman, Ofer
2000-07-01
A failure-analysis-driven yield-enhancement concept, based on optimization of the mask manufacturing process and UV reticle inspection, is studied and shown to improve contact-layer quality. This is achieved by relating the various manufacturing processes to very finely tuned contact defect detection. In this way, selecting an optimized manufacturing process with a fine-tuned inspection setup is achieved in a controlled manner. This paper presents a study, performed on a specially designed test reticle, which simulates production contact layers for design rules of 250 nm, 180 nm, and 150 nm. The paper focuses on the use of advanced UV reticle inspection techniques as part of the process optimization cycle. Current inspection equipment uses traditional and insufficient methods of small contact-hole inspection and review.
Axions, inflation and the anthropic principle
NASA Astrophysics Data System (ADS)
Mack, Katherine J.
2011-07-01
The QCD axion is the leading solution to the strong-CP problem, a dark matter candidate, and a possible result of string theory compactifications. However, for axions produced before inflation, symmetry-breaking scales of f_a ≳ 10^12 GeV (which are favored in string-theoretic axion models) are ruled out by cosmological constraints unless both the axion misalignment angle θ_0 and the inflationary Hubble scale H_I are extremely fine-tuned. We show that attempting to accommodate a high-f_a axion in inflationary cosmology leads to a fine-tuning problem that is worse than the strong-CP problem the axion was originally invented to solve. We also show that this problem remains unresolved by anthropic selection arguments commonly applied to the high-f_a axion scenario.
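The tension described here follows from the standard misalignment estimate of the axion relic density; only the scaling is shown below, since the numerical coefficient varies between calculations:

```latex
\Omega_a h^2 \;\sim\; \theta_0^{\,2}\left(\frac{f_a}{10^{12}\,\mathrm{GeV}}\right)^{7/6},
```

so keeping Ω_a at or below the observed dark-matter density for f_a well above 10^12 GeV forces θ_0 to be tuned small, and isocurvature constraints then push H_I down as well.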
NASA Astrophysics Data System (ADS)
Tiwari, Vivek; Peters, William K.; Jonas, David M.
2017-10-01
Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.
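In the equal-frequency, exchange-symmetric limit described above, the two vectors reduce to symmetric and antisymmetric combinations of the pigments' intramolecular coordinates q_1 and q_2; a schematic form (normalization conventions may differ from the paper's):

```latex
q_{\mathrm{corr}} = \frac{q_1 + q_2}{\sqrt{2}} \quad\text{(correlation: symmetric under pigment exchange)},
\qquad
q_{\mathrm{tune}} = \frac{q_1 - q_2}{\sqrt{2}} \quad\text{(tuning: anti-correlated, drives energy transfer)}.
```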
Tiwari, Vivek; Peters, William K; Jonas, David M
2017-10-21
Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.
Is there sufficient evidence for tuning fork tests in diagnosing fractures? A systematic review.
Mugunthan, Kayalvili; Doust, Jenny; Kurz, Bodo; Glasziou, Paul
2014-08-04
To determine the diagnostic accuracy of tuning fork tests for detecting fractures. Systematic review of primary studies evaluating the diagnostic accuracy of tuning fork tests for the presence of fracture. We searched MEDLINE, CINAHL, AMED, EMBASE, Sports Discus, CAB Abstracts and Web of Science from commencement to November 2012. We manually searched the reference lists of any review papers and any identified relevant studies. Two reviewers independently reviewed the list of potentially eligible studies and rated the studies for quality using the QUADAS-2 tool. Data were extracted to form 2×2 contingency tables. The primary outcome measure was the accuracy of the test as measured by its sensitivity and specificity with 95% CIs. We included six studies (329 patients), with two types of tuning fork tests (pain induction and loss of sound transmission). The studies included patients with an age range of 7-60 years. The prevalence of fracture ranged from 10% to 80%. The sensitivity of the tuning fork tests was high, ranging from 75% to 100%. The specificity of the tests was highly heterogeneous, ranging from 18% to 95%. Based on the studies in this review, tuning fork tests have some value in ruling out fractures, but are not sufficiently reliable or accurate for widespread clinical use. The small sample sizes of the studies and the observed heterogeneity make generalisable conclusions difficult. Published by the BMJ Publishing Group Limited.
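The accuracy measures used in the review come straight from a 2×2 contingency table. A small helper for that computation, using a normal-approximation 95% CI; the counts in the example are invented for illustration and are not data from the included studies:

```python
import math

def diagnostic_accuracy(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% confidence intervals."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn), "specificity": prop_ci(tn, tn + fp)}

# Invented example: 40 fractures (36 positive tests) and 60 non-fractures (33 negative tests).
print(diagnostic_accuracy(tp=36, fn=4, tn=33, fp=27))
```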
Quantum mechanics: The Bayesian theory generalized to the space of Hermitian matrices
NASA Astrophysics Data System (ADS)
Benavoli, Alessio; Facchini, Alessandro; Zaffalon, Marco
2016-10-01
We consider the problem of gambling on a quantum experiment and enforce rational behavior by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalized to the space of Hermitian matrices. This very theory is quantum mechanics: in fact, we derive all its four postulates from the generalized Bayesian theory. This implies that quantum mechanics is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes' rule (measurement), marginalization (partial tracing), independence (tensor product). To say it with a slogan, we obtain that quantum mechanics is the Bayesian theory in the complex numbers.
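The reinterpretation can be summarized by reading the standard textbook formulas as probability rules; schematically (for a projective measurement {Π_k}):

```latex
p(k) = \mathrm{Tr}(\rho\,\Pi_k), \quad \rho \mapsto \frac{\Pi_k\,\rho\,\Pi_k}{p(k)} \ \ \text{(measurement as Bayes' rule)},
\qquad
\rho_A = \mathrm{Tr}_B\,\rho_{AB} \ \ \text{(marginalization)},
\qquad
\rho_{AB} = \rho_A \otimes \rho_B \ \ \text{(independence)}.
```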
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules.
Frémaux, Nicolas; Gerstner, Wulfram
2015-01-01
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide "when" to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
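A minimal sketch of a neo-Hebbian three-factor rule of the class reviewed here: an STDP-like pre/post coincidence term writes into a slowly decaying eligibility trace, and the synaptic weight only changes when a phasic neuromodulatory factor arrives. Time constants and gains are illustrative.

```python
import numpy as np

dt, T = 1.0, 500                      # time step and duration (ms)
tau_e = 200.0                         # eligibility-trace time constant (ms)
eta = 0.01
rng = np.random.default_rng(0)

pre = rng.random(T) < 0.02            # Poisson-like pre- and postsynaptic spike trains
post = rng.random(T) < 0.02
modulator = np.zeros(T)
modulator[300] = 1.0                  # phasic neuromodulatory pulse (e.g., reward/novelty)

w, e = 0.5, 0.0
pre_trace = post_trace = 0.0          # low-pass filtered spike trains for pair-based STDP
for t in range(T):
    pre_trace += dt * (-pre_trace / 20.0) + pre[t]
    post_trace += dt * (-post_trace / 20.0) + post[t]
    hebb = post[t] * pre_trace - pre[t] * post_trace      # STDP-like coincidence term
    e += dt * (-e / tau_e) + hebb                         # eligibility trace (factor 1 & 2)
    w += eta * modulator[t] * e                           # third factor gates the update
print(f"final weight: {w:.4f}")
```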
Random Walk Quantum Clustering Algorithm Based on Space
NASA Astrophysics Data System (ADS)
Xiao, Shufen; Dong, Yumin; Ma, Hongyang
2018-01-01
In the random quantum walk, which is a quantum simulation of the classical walk, data points interact when selecting the appropriate walk strategy by taking advantage of quantum-entanglement features; thus, the results obtained when the quantum walk is used differ from those obtained when the classical walk is adopted. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and is shown to be superior to existing clustering algorithms, namely K-means, PCA + K-means, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed. Specific suggestions are provided.
Study of optical and electronic properties of nickel from reflection electron energy loss spectra
NASA Astrophysics Data System (ADS)
Xu, H.; Yang, L. H.; Da, B.; Tóth, J.; Tőkési, K.; Ding, Z. J.
2017-09-01
We use a classical Monte Carlo transport model of electrons moving near the surface and inside solids to reproduce measured reflection electron energy-loss spectroscopy (REELS) spectra. By combining the classical transport model with Markov chain Monte Carlo (MCMC) sampling of oscillator parameters, the so-called reverse Monte Carlo (RMC) method was developed and is used here to obtain the optical constants of Ni. A systematic study of the electronic and optical properties of Ni has been performed over an energy-loss range of 0-200 eV from REELS spectra measured at primary energies of 1000 eV, 2000 eV and 3000 eV. The reliability of our method was tested by comparing our results with previous data. Moreover, the accuracy of our optical data has been confirmed by applying the oscillator-strength sum rule and the perfect-screening sum rule.
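The two consistency checks mentioned are the standard sum rules on the energy-loss function Im[-1/ε(ω)]; in the forms commonly used for REELS-derived optical data (with ω_p the free-electron plasma frequency for the nominal valence-electron density):

```latex
\int_0^{\infty} \omega\, \mathrm{Im}\!\left[\frac{-1}{\varepsilon(\omega)}\right] d\omega
  = \frac{\pi}{2}\,\omega_p^{2}
  \quad\text{(oscillator-strength, or $f$-, sum rule)},
\qquad
\int_0^{\infty} \frac{1}{\omega}\, \mathrm{Im}\!\left[\frac{-1}{\varepsilon(\omega)}\right] d\omega
  = \frac{\pi}{2}
  \quad\text{(perfect-screening sum rule)}.
```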
Entangled Biphoton Virtual-State Spectroscopy of the A²Σ⁺-X²Π System of OH
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet
2004-01-01
This Letter describes the first application of entanglement-induced virtual-state spectroscopy to a molecular system. Non-classical, non-monotonic behavior in a two-photon absorption cross section of the OH A-X system, induced by an entangled biphoton state, is theoretically demonstrated. A Fourier transform analysis of the biphoton cross section permits access to the energy eigenvalues of intermediate rovibronic states with a fixed excitation photon energy. The dependence of the Fourier spectrum on the tuning range of the entanglement time (T_e) and the relative path delay (τ_e) is discussed. Our analysis reveals that the implementation of molecular virtual-state spectroscopy for the OH A-X system requires the tuning of τ_e over a picosecond range with femtosecond resolution.
Entangled Biphoton Virtual-State Spectroscopy of the A²Σ⁺-X²Π System of OH
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet
2004-01-01
This Letter describes the first application of entanglement-induced virtual-state spectroscopy to a molecular system. Non-classical, non-monotonic behavior in a two-photon absorption cross section of the OH A-X system, induced by an entangled biphoton state, is theoretically demonstrated. A Fourier transform analysis of the biphoton cross section permits access to the energy eigenvalues of intermediate rovibronic states with a fixed excitation photon energy. The dependence of the Fourier spectrum on the tuning range of the entanglement time T_e and the relative path delay τ_e is discussed. Our analysis reveals that the implementation of molecular virtual-state spectroscopy for the OH A-X system requires the tuning of τ_e over a picosecond range with femtosecond resolution.
Soft edges--organizational structure in dental education.
Chambers, D W
1995-03-01
There is no one best organizational structure for dental schools or for their major subunits. The classical alternatives of functional and divisional organization are discussed in light of the rule that form follows function, and the advantages and disadvantages of each are presented. Newer models--decentralization, matrix, and heterarchy--show how features of functional and divisional structure can be blended. Virtual organizations, systems theory, and networks are also considered as new expressions of classical structures. The principle of suboptimization (soft edges) is presented.
S-duality in SU(3) Yang-Mills theory with non-abelian unbroken gauge group
NASA Astrophysics Data System (ADS)
Schroers, B. J.; Bais, F. A.
1998-12-01
It is observed that the magnetic charges of classical monopole solutions in Yang-Mills-Higgs theory with non-abelian unbroken gauge group H are in one-to-one correspondence with coherent states of a dual or magnetic group H̃. In the spirit of the Goddard-Nuyts-Olive conjecture this observation is interpreted as evidence for a hidden magnetic symmetry of Yang-Mills theory. SU(3) Yang-Mills-Higgs theory with unbroken gauge group U(2) is studied in detail. The action of the magnetic group on semi-classical states is given explicitly. Investigations of dyonic excitations show that electric and magnetic symmetry are never manifest at the same time: non-abelian magnetic charge obstructs the realisation of electric symmetry and vice versa. On the basis of this fact the charge sectors in the theory are classified and their fusion rules are discussed. Non-abelian electric-magnetic duality is formulated as a map between charge sectors. Coherent states obey particularly simple fusion rules, and in the set of coherent states S-duality can be formulated as an SL(2, Z) mapping between sectors which leaves the fusion rules invariant.
Vibration of the organ of Corti within the cochlear apex in mice
Gao, Simon S.; Wang, Rosalie; Raphael, Patrick D.; Moayedi, Yalda; Groves, Andrew K.; Zuo, Jian; Applegate, Brian E.
2014-01-01
The tonotopic map of the mammalian cochlea is commonly thought to be determined by the passive mechanical properties of the basilar membrane. The other tissues and cells that make up the organ of Corti also have passive mechanical properties; however, their roles are less well understood. In addition, active forces produced by outer hair cells (OHCs) enhance the vibration of the basilar membrane, termed cochlear amplification. Here, we studied how these biomechanical components interact using optical coherence tomography, which permits vibratory measurements within tissue. We measured not only classical basilar membrane tuning curves, but also vibratory responses from the rest of the organ of Corti within the mouse cochlear apex in vivo. As expected, basilar membrane tuning was sharp in live mice and broad in dead mice. Interestingly, the vibratory response of the region lateral to the OHCs, the “lateral compartment,” demonstrated frequency-dependent phase differences relative to the basilar membrane. This was sharply tuned in both live and dead mice. We then measured basilar membrane and lateral compartment vibration in transgenic mice with targeted alterations in cochlear mechanics. Prestin499/499, Prestin−/−, and TectaC1509G/C1509G mice demonstrated no cochlear amplification but maintained the lateral compartment phase difference. In contrast, SfswapTg/Tg mice maintained cochlear amplification but did not demonstrate the lateral compartment phase difference. These data indicate that the organ of Corti has complex micromechanical vibratory characteristics, with passive, yet sharply tuned, vibratory characteristics associated with the supporting cells. These characteristics may tune OHC force generation to produce the sharp frequency selectivity of mammalian hearing. PMID:24920025
Generating self-organizing collective behavior using separation dynamics from experimental data
NASA Astrophysics Data System (ADS)
Dieck Kattas, Graciano; Xu, Xiao-Ke; Small, Michael
2012-09-01
Mathematical models for systems of interacting agents using simple local rules have been proposed and shown to exhibit emergent swarming behavior. Most of these models are constructed by intuition or manual observations of real phenomena, and later tuned or verified to simulate desired dynamics. In contrast to this approach, we propose using a model that attempts to follow an averaged rule of the essential distance-dependent collective behavior of real pigeon flocks, which was abstracted from experimental data. By using a simple model to follow the behavioral tendencies of real data, we show that our model can exhibit a wide range of emergent self-organizing dynamics such as flocking, pattern formation, and counter-rotating vortices.
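A generic sketch of the modelling approach follows: agents update their headings from a distance-dependent interaction rule (short-range repulsion, intermediate-range alignment, long-range attraction) and the group's polarization is read out. The piecewise rule and all constants below are illustrative placeholders, not the averaged rule extracted from the pigeon data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 50, 500, 0.1
pos = rng.random((N, 2)) * 10.0
vel = rng.normal(size=(N, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)

def interaction(d):
    """Distance-dependent weights on a neighbour's relative position and velocity."""
    if d < 1.0:
        return -1.0, 0.0      # short range: repulsion (separation)
    if d < 4.0:
        return 0.0, 1.0       # intermediate range: alignment
    if d < 8.0:
        return 0.5, 0.0       # long range: weak attraction (cohesion)
    return 0.0, 0.0

for _ in range(steps):
    new_vel = vel.copy()
    for i in range(N):
        acc = np.zeros(2)
        for j in range(N):
            if i == j:
                continue
            rel = pos[j] - pos[i]
            d = np.linalg.norm(rel)
            w_pos, w_vel = interaction(d)
            acc += w_pos * rel / max(d, 1e-9) + w_vel * vel[j]
        new_vel[i] += dt * acc
        new_vel[i] /= np.linalg.norm(new_vel[i])              # constant-speed agents
    vel = new_vel
    pos += dt * vel

print(f"group polarization: {np.linalg.norm(vel.mean(axis=0)):.2f}")   # 1.0 = fully aligned
```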
Generating self-organizing collective behavior using separation dynamics from experimental data.
Dieck Kattas, Graciano; Xu, Xiao-Ke; Small, Michael
2012-09-01
Mathematical models for systems of interacting agents using simple local rules have been proposed and shown to exhibit emergent swarming behavior. Most of these models are constructed by intuition or manual observations of real phenomena, and later tuned or verified to simulate desired dynamics. In contrast to this approach, we propose using a model that attempts to follow an averaged rule of the essential distance-dependent collective behavior of real pigeon flocks, which was abstracted from experimental data. By using a simple model to follow the behavioral tendencies of real data, we show that our model can exhibit a wide range of emergent self-organizing dynamics such as flocking, pattern formation, and counter-rotating vortices.
Recognition of Handwritten Arabic words using a neuro-fuzzy network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boukharouba, Abdelhak; Bennia, Abdelhak
We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. Then the system determines which CCs are sub-words and which are stress marks. The stress marks are then isolated and identified separately, and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprising two phases: a rule-generation phase from data using fuzzy c-means, and a rule-parameter tuning phase using gradient-descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through the significant tests performed on a handwritten Arabic words database.
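The rule-generation phase relies on fuzzy c-means clustering; below is a compact, generic implementation of that step (the feature vectors are random stand-ins for the grapheme descriptors, and the gradient-descent tuning phase is not shown):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Standard fuzzy c-means: returns cluster centres and the (n x c) membership matrix."""
    n = len(X)
    rng = np.random.default_rng(seed)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Toy feature vectors standing in for grapheme descriptors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, size=(30, 4)) for mu in (0.0, 2.0, 4.0)])
centres, U = fuzzy_c_means(X, c=3)
print(np.round(centres, 2))
```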
A blueprint for demonstrating quantum supremacy with superconducting qubits
NASA Astrophysics Data System (ADS)
Neill, C.; Roushan, P.; Kechedzhi, K.; Boixo, S.; Isakov, S. V.; Smelyanskiy, V.; Megrant, A.; Chiaro, B.; Dunsworth, A.; Arya, K.; Barends, R.; Burkett, B.; Chen, Y.; Chen, Z.; Fowler, A.; Foxen, B.; Giustina, M.; Graff, R.; Jeffrey, E.; Huang, T.; Kelly, J.; Klimov, P.; Lucero, E.; Mutus, J.; Neeley, M.; Quintana, C.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T. C.; Neven, H.; Martinis, J. M.
2018-04-01
A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer.
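The "universal distribution" referred to here is the exponential (Porter-Thomas-like) form expected for the output probabilities of states drawn uniformly from Hilbert space. A toy check with random state vectors (no circuit simulation; Haar-like states are generated from normalized complex Gaussians):

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 9
dim = 2 ** n_qubits

def random_state(d):
    """Haar-distributed pure state via a normalized complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Collect scaled output probabilities N*p from many random states.
samples = np.concatenate([dim * np.abs(random_state(dim)) ** 2 for _ in range(50)])

# For the Porter-Thomas form, Pr(N p > x) ~ exp(-x): compare empirical and ideal tails.
for x in (0.5, 1.0, 2.0, 4.0):
    print(f"x={x}: empirical {np.mean(samples > x):.3f} vs exp(-x) {np.exp(-x):.3f}")
```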
Classical topological paramagnetism
NASA Astrophysics Data System (ADS)
Bondesan, R.; Ringel, Z.
2017-05-01
Topological phases of matter are one of the hallmarks of quantum condensed matter physics. One of their striking features is a bulk-boundary correspondence wherein the topological nature of the bulk manifests itself on boundaries via exotic massless phases. In classical wave phenomena, analogous effects may arise; however, these cannot be viewed as equilibrium phases of matter. Here, we identify a set of rules under which robust equilibrium classical topological phenomena exist. We write simple and analytically tractable classical lattice models of spins and rotors in two and three dimensions which, at suitable parameter ranges, are paramagnetic in the bulk but nonetheless exhibit some unusual long-range or critical order on their boundaries. We point out the role of simplicial cohomology as a means of classifying, writing, and analyzing such models. This opens an experimental route for studying strongly interacting topological phases of spins.
Rigorous ILT optimization for advanced patterning and design-process co-optimization
NASA Astrophysics Data System (ADS)
Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming
2018-03-01
Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications, each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction for photoresist top loss and resist-scumming hotspots.
Fuzzy self-learning control for magnetic servo system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.
1994-01-01
It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues in designing such control systems are nonlinearity; unmodeled dynamics, such as secondary effects of copper resistance, stray fields, and saturation; and disturbance rejection for the load effect, which acts directly on the servo system without transmission elements. One typical approach to designing control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into the tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into the design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on a network learning infrastructure; statistical approximation to assign credit; an animal-learning method to update the reinforcement map with a fast learning rate; and a temporal-difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process and forms the rule base of an FLC system from randomly assigned initial control rules.
Thumb rule of visual angle: a new confirmation.
Groot, C; Ortega, F; Beltran, F S
1994-02-01
The classical thumb rule of visual angle was reexamined. To this end, the visual angle was measured as a function of the thumb's width and the distance between eye and thumb. Thumb width at arm's length was measured for 67 second-year psychology students. The visual angle was about 2 degrees, as R. P. O'Shea confirmed in 1991. We also confirmed a linear relationship between the size of a thumb's width at arm's length and the visual angle.
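The rule rests on elementary trigonometry: the visual angle is 2*arctan(width / (2*distance)). A minimal Python check with illustrative values (a 2 cm thumb held at 57 cm, not the study's measurements) reproduces the roughly 2-degree figure:

```python
import math

def visual_angle_deg(width_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by an object of a given width at a given distance."""
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

# Illustrative values only: a ~2 cm thumb held at ~57 cm subtends roughly 2 degrees.
print(round(visual_angle_deg(2.0, 57.0), 2))  # ~2.01
```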
Cerebellum tunes the excitability of the motor system: evidence from peripheral motor axons.
Nodera, Hiroyuki; Manto, Mario
2014-12-01
The cerebellum is highly connected with the contralateral cerebral cortex. So far, the motor deficits observed in acute focal cerebellar lesions in humans have been mainly explained on the basis of a disruption of the cerebello-thalamo-cortical projections. Cerebellar circuits also have numerous anatomical and functional interactions with brainstem nuclei and project directly to the spinal cord. Cerebellar lesions alter the excitability of peripheral motor axons, as demonstrated by peripheral motor threshold-tracking techniques in cerebellar stroke. The biophysical changes are correlated with the functional scores. Nerve excitability measurements represent an attractive tool to extract the rules underlying the tuning of excitability of the motor pathways by the cerebellum and to discover the contributions of each cerebellar nucleus to this key function, contributing to early plasticity and sensorimotor learning.
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
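As a hedged numerical illustration of the sign rule (the numbers below are invented for illustration, not taken from the paper), the linear Fisher information f'^T Sigma^{-1} f' for two identically tuned neurons increases when their noise correlation has the sign opposite to their (positive) signal correlation:

```python
import numpy as np

def linear_fisher_info(f_prime: np.ndarray, cov: np.ndarray) -> float:
    """Linear Fisher information f'^T C^{-1} f' for tuning slopes f' and noise covariance C."""
    return float(f_prime @ np.linalg.solve(cov, f_prime))

f_prime = np.array([1.0, 1.0])           # both neurons increase their rate with the stimulus
independent = np.eye(2)                   # no noise correlation
opposite_sign = np.array([[1.0, -0.3],    # noise correlation opposite in sign to the
                          [-0.3, 1.0]])   # (positive) signal correlation

print(linear_fisher_info(f_prime, independent))    # 2.0
print(linear_fisher_info(f_prime, opposite_sign))  # ~2.86 > 2.0, consistent with the sign rule
```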
Tuning and synthesis of metallic nanostructures by mechanical compression
Fan, Hongyou; Li, Binsong
2015-11-17
The present invention provides a pressure-induced phase transformation process to engineer metal nanoparticle architectures and to fabricate new nanostructured materials. The reversible changes of the nanoparticle unit cell dimension under pressure allow precise control over interparticle separation in 2D or 3D nanoparticle assemblies, offering unique robustness for interrogation of both quantum and classic coupling interactions. Irreversible changes above a threshold pressure of about 8 GPa enable new nanostructures, such as nanorods, nanowires, or nanosheets.
ReactPRED: a tool to predict and analyze biochemical reactions.
Sivakumar, Tadi Venkata; Giri, Varun; Park, Jin Hwan; Kim, Tae Yong; Bhaduri, Anirban
2016-11-15
Biochemical pathways engineering is often used to synthesize or degrade target chemicals. In silico screening of the biochemical transformation space allows predicting feasible reactions constituting these pathways. Current enabling tools are customized to predict reactions based on pre-defined biochemical transformations or reaction rule sets. Reaction rule sets are usually curated manually and tailored to specific applications, and they are not exhaustive. In addition, current systems are incapable of regulating and refining data with an aim to tune specificity and sensitivity. A robust and flexible tool that allows automated reaction rule set creation along with regulated pathway prediction and analysis is needed. ReactPRED aims to address this need. ReactPRED is an open source, flexible and customizable tool enabling users to predict biochemical reactions and pathways. The tool allows automated reaction rule creation from a user-defined reaction set. Additionally, reaction rule degree and rule tolerance features allow refinement of predicted data. It is available as a flexible graphical user interface and a console application. ReactPRED is available at: https://sourceforge.net/projects/reactpred/. Contact: anirban.b@samsung.com or ty76.kim@samsung.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Chen, Jingbo; Wang, Chengyi; Yue, Anzhi; Chen, Jiansheng; He, Dongxu; Zhang, Xiuyan
2017-10-01
The tremendous success of deep learning models such as convolutional neural networks (CNNs) in computer vision provides a method for similar problems in the field of remote sensing. Although research on repurposing pretrained CNNs for remote sensing tasks is emerging, the scarcity of labeled samples and the complexity of remote sensing imagery still pose challenges. We developed a knowledge-guided golf course detection approach using a CNN fine-tuned on temporally augmented data. The proposed approach is a combination of knowledge-driven region proposal, data-driven detection based on CNN, and knowledge-driven postprocessing. To confront data complexity, knowledge-derived co-occurrence, composition, and area-based rules are applied sequentially to propose candidate golf regions. To confront sample scarcity, we employed data augmentation in the temporal domain, which extracts samples from multitemporal images. The augmented samples were then used to fine-tune a pretrained CNN for golf detection. Finally, commission error was further suppressed by postprocessing. Experiments conducted on GF-1 imagery prove the effectiveness of the proposed approach.
Yang, Lei; Yang, Ming; Xu, Zihao; Zhuang, Xiaoqi; Wang, Wei; Zhang, Haibo; Han, Lu; Xu, Liang
2014-10-01
The purpose of this paper is to report the research and design of the control system of the magnetic coupling centrifugal blood pump in our laboratory, and to briefly describe the structure of the magnetic coupling centrifugal blood pump and the principles of the body circulation model. The performance of the blood pump is not only related to materials and structure, but also depends on the control algorithm. We studied a double-loop current control algorithm for the brushless DC motor. In order to allow the algorithm to adjust its parameters in different situations, we used a self-tuning fuzzy PI control algorithm and give the details of how to design the fuzzy rules. We mainly used Matlab Simulink to simulate the motor control system to test the performance of the algorithm, and briefly introduce how to implement these algorithms in the hardware system. Finally, by building the platform and conducting experiments, we proved that the self-tuning fuzzy PI control algorithm could greatly improve both the dynamic and static performance of the blood pump and make the motor speed and the blood pump flow stable and adjustable.
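For orientation, a minimal sketch of one self-tuning fuzzy PI step is given below. The membership functions, rule table and gain increments are assumptions made for illustration; they are not the rules designed in the paper, and the original work was carried out in Matlab Simulink rather than Python.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function centred at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_gain_tuning(e, de, kp0=1.0, ki0=0.5):
    """Return PI gains adjusted by simple fuzzy rules on |error| and |d(error)/dt|."""
    # Membership of |e| and |de| in {small, large} on a normalised universe [0, 1] (assumed).
    e_small, e_large = tri(abs(e), -1, 0, 0.5), tri(abs(e), 0.5, 1, 2)
    de_small, de_large = tri(abs(de), -1, 0, 0.5), tri(abs(de), 0.5, 1, 2)
    # Rule table (assumed): large error -> raise Kp and lower Ki; small error -> the opposite.
    w = np.array([e_small * de_small, e_small * de_large,
                  e_large * de_small, e_large * de_large])
    dkp = np.array([-0.2, 0.0, 0.3, 0.5])
    dki = np.array([0.2, 0.1, -0.1, -0.2])
    w = w / (w.sum() + 1e-12)            # weighted-average (centre of gravity) defuzzification
    return kp0 + w @ dkp, ki0 + w @ dki

print(fuzzy_gain_tuning(e=0.8, de=0.1))  # gains shift toward a larger Kp and a smaller Ki
```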
An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.
Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin
2015-07-01
We propose an integrated mechanism for discarding derogatory features and extracting fuzzy rules based on an interval type-2 neural fuzzy system (NFS); in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variables and a large number of rules not only increase the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for simultaneous extraction of fuzzy rules and reducing the impact of (or eliminating) the inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. The poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.
Automating Rule Strengths in Expert Systems.
1987-05-01
systems were designed in an incremental, iterative way. One of the most easily identifiable phases in this process, sometimes called tuning, consists... attenuators. The designer of the knowledge-based system must determine (synthesize) or adjust (refine, if estimates of the values are given) these... values. We consider two ways in which the designer can learn the values. We call the first model of learning the complete case and the second model the
Novel dynamic tuning of broadband visible metamaterial perfect absorber using graphene
NASA Astrophysics Data System (ADS)
Jia, Xiuli; Wang, Xiaoou; Yuan, Chengxun; Meng, Qingxin; Zhou, Zhongxiang
2016-07-01
We present a novel dynamic tuning of a broadband visible metamaterial absorber consisting of a multilayer-graphene-embedded nano-cross elliptical hole (MGENCEH) structure. It has multiple effects, including excitation of surface plasmon polaritons and extraordinary optical transmission in the first two metal layers. A numerical simulation shows that the MGENCEH structure can realize broadband perfect absorption (BPA) from 5.85 × 1014 to 6.5 × 1014 Hz over a wide incident angle range for transverse magnetic polarized light if the chemical potential of graphene (uc) is tuned to 1.0 eV. Furthermore, it has high broadband absorption (above 96%) from 4.6 × 1014 to 6.6 × 1014 Hz and three areas of narrowband perfect absorption around 4.65 × 1014, 5.1 × 1014, and 5.6 × 1014 Hz. The changes in the absorption spectra as a function of uc can be classically explained by simply considering plasmons as damped harmonic oscillators. This BPA is broader than the result of Zhou et al. [Opt. Express 23, A413-A418 (2015)] and is particularly desirable for various potential applications such as solar energy absorbers.
Zhang, BiTao; Pi, YouGuo; Luo, Ying
2012-09-01
A fractional order sliding mode control (FROSMC) scheme based on parameter auto-tuning for the velocity control of a permanent magnet synchronous motor (PMSM) is proposed in this paper. The control law of the proposed FROSMC scheme is designed according to the Lyapunov stability theorem. Based on the property of transferring energy with an adjustable type in FROSMC, this paper shows that the chattering phenomenon in classic sliding mode control (SMC) is attenuated with the FROSMC system. A fuzzy logic inference scheme (FLIS) is utilized to obtain the gain of the switching control. Simulations and experiments demonstrate that the proposed FROSMC not only achieves better control performance with smaller chattering than integer order sliding mode control, but is also robust to external load disturbance and parameter variations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Local tuning of the order parameter in superconducting weak links: A zero-inductance nanodevice
NASA Astrophysics Data System (ADS)
Winik, Roni; Holzman, Itamar; Dalla Torre, Emanuele G.; Buks, Eyal; Ivry, Yachin
2018-03-01
Controlling both the amplitude and the phase of the superconducting quantum order parameter (ψ) in nanostructures is important for next-generation information and communication technologies. The lack of electric resistance in superconductors, which may be advantageous for some technologies, hinders convenient voltage-bias tuning and hence limits the tunability of ψ at the microscopic scale. Here, we demonstrate the local tunability of the phase and amplitude of ψ, obtained by patterning with a single lithography step a Nb nano-superconducting quantum interference device (nano-SQUID) that is biased at its nanobridges. We accompany our experimental results by a semi-classical linearized model that is valid for generic nano-SQUIDs with multiple ports and helps simplify the modelling of non-linear couplings among the Josephson junctions. Our design helped us reveal unusual electric characteristics with effective zero inductance, which is promising for nanoscale magnetic sensing and quantum technologies.
Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.
van Ginneken, Bram
2017-03-01
Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.
Line mixing effects in isotropic Raman spectra of pure N{sub 2}: A classical trajectory study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Sergey V., E-mail: serg.vict.ivanov@gmail.com; Boulet, Christian; Buzykin, Oleg G.
2014-11-14
Line mixing effects in the Q branch of pure N{sub 2} isotropic Raman scattering are studied at room temperature using a classical trajectory method. It is the first study using an extended modified version of Gordon's classical theory of impact broadening and shift of rovibrational lines. The whole relaxation matrix is calculated using an exact 3D classical trajectory method for binary collisions of rigid N{sub 2} molecules employing the most up-to-date intermolecular potential energy surface (PES). A simple symmetrizing procedure is employed to improve the off-diagonal cross-sections so that they obey exactly the principle of detailed balance. The adequacy of the results is confirmed by the sum rule. The comparison is made with available experimental data as well as with benchmark fully quantum close coupling [F. Thibault, C. Boulet, and Q. Ma, J. Chem. Phys. 140, 044303 (2014)] and refined semi-classical Robert-Bonamy [C. Boulet, Q. Ma, and F. Thibault, J. Chem. Phys. 140, 084310 (2014)] results. All calculations (classical, quantum, and semi-classical) were made using the same PES. The agreement between classical and quantum relaxation matrices is excellent, opening the way to the analysis of more complex molecular systems.
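The symmetrizing step can be illustrated with a small sketch. The prescription below, averaging the forward and backward fluxes and dividing by the equilibrium populations, is one common way to enforce detailed balance on the off-diagonal elements; it is not necessarily the authors' exact procedure, and the matrix and populations are invented for illustration.

```python
import numpy as np

def symmetrize_detailed_balance(W: np.ndarray, rho: np.ndarray) -> np.ndarray:
    """Adjust off-diagonal elements of a relaxation matrix W so that
    rho[k] * W[j, k] == rho[j] * W[k, j] (detailed balance), by averaging
    the two fluxes. One possible prescription, assumed for illustration."""
    W = W.astype(float).copy()
    n = len(rho)
    for j in range(n):
        for k in range(j + 1, n):
            flux = 0.5 * (rho[k] * W[j, k] + rho[j] * W[k, j])
            W[j, k] = flux / rho[k]
            W[k, j] = flux / rho[j]
    return W

rho = np.array([0.5, 0.3, 0.2])          # illustrative equilibrium (Boltzmann) populations
W = np.array([[-1.0, 0.4, 0.2],
              [0.5, -1.2, 0.3],
              [0.3, 0.25, -0.9]])
Ws = symmetrize_detailed_balance(W, rho)
print(np.allclose(Ws * rho, (Ws * rho).T))  # True: detailed balance now holds exactly
```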
An information theory account of late frontoparietal ERP positivities in cognitive control.
Barceló, Francisco; Cooper, Patrick S
2018-03-01
ERP research on task switching has revealed distinct transient and sustained positive waveforms (latency circa 300-900 ms) while shifting task rules or stimulus-response (S-R) mappings. However, it remains unclear whether such switch-related positivities show similar scalp topography and index context-updating mechanisms akin to those posed for domain-general (i.e., classic P300) positivities in many task domains. To examine this question, ERPs were recorded from 31 young adults (18-30 years) while they were intermittently cued to switch or repeat their perceptual categorization of Gabor gratings varying in color and thickness (switch task), or else they performed two visually identical control tasks (go/no-go and oddball). Our task cueing paradigm examined two temporarily distinct stages of proactive rule updating and reactive rule execution. A simple information theory model helped us gauge cognitive demands under distinct temporal and task contexts in terms of low-level S-R pathways and higher-order rule updating operations. Task demands modulated domain-general (indexed by classic oddball P3) and switch positivities-indexed by both a cue-locked late positive complex and a sustained positivity ensuing task transitions. Topographic scalp analyses confirmed subtle yet significant split-second changes in the configuration of neural sources for both domain-general P3s and switch positivities as a function of both the temporal and task context. These findings partly meet predictions from information estimates, and are compatible with a family of P3-like potentials indexing functionally distinct neural operations within a common frontoparietal "multiple demand" system during the preparation and execution of simple task rules. © 2016 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Cabello, Adán
2018-03-01
We consider an ideal experiment in which unlimited nonprojective quantum measurements are sequentially performed on a system that is initially entangled with a distant one. At each step of the sequence, the measurements are randomly chosen between two. However, regardless of which measurement is chosen or which outcome is obtained, the quantum state of the pair always remains entangled. We show that the classical simulation of the reduced state of the distant system requires not only unlimited rounds of communication, but also that the distant system has infinite memory. Otherwise, a thermodynamical argument predicts heating at a distance. Our proposal can be used for experimentally ruling out nonlocal finite-memory classical models of quantum theory.
NASA Astrophysics Data System (ADS)
Jeknić-Dugić, Jasmina; Petrović, Igor; Arsenijević, Momir; Dugić, Miroljub
2018-05-01
We investigate the dynamical stability of a single propeller-like shaped molecular cogwheel modelled as a fixed-axis rigid rotator. In realistic situations, rotation of the finite-size cogwheel is subject to the environmentally-induced Brownian-motion effect, which we describe by utilizing the quantum Caldeira-Leggett master equation. Assuming initially narrow (classical-like) standard deviations for the angle and the angular momentum of the rotator, we investigate the dynamics of the first and second moments depending on the size, i.e. on the number of blades, of both the free rotator and the rotator in an external harmonic field. The larger the standard deviations, the less stable (i.e. less predictable) the rotation. We detect the absence of simple and straightforward rules for the rotator's stability. Instead, a number of size-related criteria appear whose combinations may provide the optimal rules for the rotator's dynamical stability and possibly its control. In realistic situations, the quantum-mechanical corrections, albeit individually small, may effectively prove non-negligible, also revealing the subtlety of the transition from the quantum to the classical dynamics of the rotator. As to the latter, we detect a strong size-dependence of the transition to classical dynamics beyond the quantum decoherence process.
Barellini, A; Bogi, L; Licitra, G; Silvi, A M; Zari, A
2009-12-01
Air traffic control (ATC) primary radars are 'classical' radars that use echoes of radiofrequency (RF) pulses from aircraft to determine their position. High-power RF pulses radiated from radar antennas may produce high electromagnetic field levels in the surrounding area. The measurement of electromagnetic fields produced by RF-pulsed radar by means of a swept-tuned spectrum analyser is investigated here. Measurements have been carried out both in the laboratory and in situ on signals generated by an ATC primary radar.
NASA Astrophysics Data System (ADS)
Taleb, M.; Cherkaoui, M.; Hbib, M.
2018-05-01
Recently, renewable energy sources have been seriously impacting the power quality of grids in terms of frequency and voltage stability, due to their intermittency and limited forecasting accuracy. Among these sources, wind energy conversion systems (WECS) have received great interest, especially the configuration with a Doubly Fed Induction Generator. However, WECS are strongly nonlinear, which makes their control difficult with classical approaches such as a PI controller. In this paper, we deepen the study of the PI controller used in the active and reactive power control of this kind of WECS. Particle Swarm Optimization (PSO) is suggested to improve its dynamic performance and its robustness against parameter variations. This work highlights the performance of PSO-optimized PI control against a classical PI tuned with a pole compensation strategy. Simulations are carried out in the MATLAB-SIMULINK software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haman, R.L.; Kerry, T.G.; Jarc, C.A.
1996-12-31
A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions for capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX{reg_sign} Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NO{sub x} emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NO{sub x} from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning first will be used as a least-cost option to drive NO{sub x} to its lowest level with operating adjustment. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.
The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.
St-Yves, Ghislain; Naselaris, Thomas
2017-06-20
We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
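A minimal sketch of the fwRF forward pass is given below, assuming a shared isotropic Gaussian pooling field ("where" parameters) and one scalar weight per feature map ("what" parameters). The shapes, random feature maps and parameter values are placeholders, not the fitted models from the paper.

```python
import numpy as np

def gaussian_pool(size, mu_x, mu_y, sigma):
    """'Where' parameters: an isotropic 2-D Gaussian pooling field over visual space."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - mu_x) ** 2 + (y - mu_y) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def fwrf_predict(feature_maps, pool, weights):
    """Predicted voxel activity: pool each feature map over space with the same
    Gaussian field, then take a weighted sum across maps ('what' parameters)."""
    pooled = np.array([np.sum(fm * pool) for fm in feature_maps])
    return float(weights @ pooled)

rng = np.random.default_rng(0)
feature_maps = rng.random((8, 32, 32))        # 8 feature maps on a 32x32 grid (illustrative)
pool = gaussian_pool(32, mu_x=16, mu_y=16, sigma=4.0)
weights = rng.normal(size=8)                  # one 'what' weight per feature map
print(fwrf_predict(feature_maps, pool, weights))
```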
PDF-based heterogeneous multiscale filtration model.
Gong, Jian; Rutland, Christopher J
2015-04-21
Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
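The central structural idea, weighting single-collector efficiencies by a pore-size probability density, can be sketched as follows. Both the log-normal pore-size distribution and the toy single-collector efficiency law below are assumptions for illustration, not the model's calibrated forms.

```python
import numpy as np

def hmf_efficiency(d, pdf, unit_eff):
    """Overall filtration efficiency as a pore-size-PDF-weighted sum of the
    efficiencies of individual collectors, mirroring the structure described
    above. unit_eff(d) is a placeholder single-collector efficiency model."""
    dd = d[1] - d[0]
    w = pdf / (pdf.sum() * dd)               # normalise the pore-size PDF
    return float(np.sum(w * unit_eff(d)) * dd)

# Illustrative log-normal pore-size distribution (micrometres) and a toy capture law.
d = np.linspace(1.0, 60.0, 400)
pdf = np.exp(-(np.log(d) - np.log(15.0)) ** 2 / (2 * 0.35 ** 2)) / d
unit_eff = lambda x: 1.0 - np.exp(-120.0 / x)  # smaller collectors capture more (assumed)
print(round(hmf_efficiency(d, pdf, unit_eff), 3))
```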
Axions, inflation and the anthropic principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mack, Katherine J., E-mail: mack@ast.cam.ac.uk
2011-07-01
The QCD axion is the leading solution to the strong-CP problem, a dark matter candidate, and a possible result of string theory compactifications. However, for axions produced before inflation, symmetry-breaking scales of f{sub a}∼>10{sup 12} GeV (which are favored in string-theoretic axion models) are ruled out by cosmological constraints unless both the axion misalignment angle θ{sub 0} and the inflationary Hubble scale H{sub I} are extremely fine-tuned. We show that attempting to accommodate a high-f{sub a} axion in inflationary cosmology leads to a fine-tuning problem that is worse than the strong-CP problem the axion was originally invented to solve. We also show that this problem remains unresolved by anthropic selection arguments commonly applied to the high-f{sub a} axion scenario.
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
Conversation on African Music.
ERIC Educational Resources Information Center
Saunders, Leslie R.
1985-01-01
A voice and music education teacher at the University of Lagos, Nigeria, talks about African music in this interview. Topics discussed include differences between African and Western music, African melody, rules for composing African music, the theory of counterpoint, and the popularity of classical composers in Nigeria. (RM)
Finite-size effects in simulations of electrolyte solutions under periodic boundary conditions
NASA Astrophysics Data System (ADS)
Thompson, Jeffrey; Sanchez, Isaac
The equilibrium properties of charged systems with periodic boundary conditions may exhibit pronounced system-size dependence due to the long range of the Coulomb force. As shown by others, the leading-order finite-size correction to the Coulomb energy of a charged fluid confined to a periodic box of volume V may be derived from sum rules satisfied by the charge-charge correlations in the thermodynamic limit V -> ∞. In classical systems, the relevant sum rule is the Stillinger-Lovett second-moment (or perfect screening) condition. This constraint implies that for large V, periodicity induces a negative bias of -k_B T/(2V) in the total Coulomb energy density of a homogeneous classical charged fluid of given density and temperature. We present a careful study of the impact of such finite-size effects on the calculation of solute chemical potentials from explicit-solvent molecular simulations of aqueous electrolyte solutions. National Science Foundation Graduate Research Fellowship Program, Grant No. DGE-1610403.
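A quick numerical illustration of the quoted leading-order bias (the temperature and box size below are arbitrary choices, not values from the study):

```python
# Bias of -k_B*T/(2*V) in the Coulomb energy density corresponds to a total-energy
# offset of -k_B*T/2 per simulation box, independent of box size.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # K (assumed)
L = 4e-9                    # box edge in metres (assumed)
V = L ** 3
bias_density = -k_B * T / (2 * V)    # J/m^3
print(f"energy-density bias: {bias_density:.3e} J/m^3")
print("total-energy bias:   -0.5 k_B*T per simulation box")
```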
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules
Frémaux, Nicolas; Gerstner, Wulfram
2016-01-01
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide “when” to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators. PMID:26834568
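The class of neo-Hebbian three-factor rules discussed here is commonly summarised with an eligibility trace gated by a neuromodulatory factor; a generic form (a sketch with assumed symbols, not a quotation from the review) is:

```latex
% Eligibility trace e_{ij} accumulates a Hebbian (e.g. STDP-like) coincidence term
% H(\mathrm{pre}_j, \mathrm{post}_i) and decays with time constant \tau_e; the weight
% changes only when a phasic neuromodulatory factor M(t) (novelty/reward) is present.
\begin{align}
  \frac{d e_{ij}}{dt} &= -\frac{e_{ij}}{\tau_e} + H\!\left(\mathrm{pre}_j,\ \mathrm{post}_i\right), \\
  \frac{d w_{ij}}{dt} &= \eta \, M(t) \, e_{ij}(t).
\end{align}
```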
NASA Technical Reports Server (NTRS)
Ramamoorthy, P. A.; Huang, Song; Govind, Girish
1991-01-01
In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule base. This significantly simplifies the translation process from conventional rule-based systems to neural network expert systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
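The basic translation idea, mapping a crisp rule onto a neuron with fixed weights and a threshold, can be illustrated with a minimal sketch; the AND/OR constructions below are a generic textbook mapping, not necessarily the exact translation rules of the paper.

```python
def threshold_unit(inputs, weights, threshold):
    """A fixed-weight neuron: fires (1) when the weighted input sum exceeds the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

# Rule "IF A AND B THEN C": both antecedents must be true -> threshold between 1 and 2.
def rule_and(a, b):
    return threshold_unit([a, b], weights=[1.0, 1.0], threshold=1.5)

# Rule "IF A OR B THEN C": any antecedent suffices -> threshold between 0 and 1.
def rule_or(a, b):
    return threshold_unit([a, b], weights=[1.0, 1.0], threshold=0.5)

assert rule_and(1, 1) == 1 and rule_and(1, 0) == 0
assert rule_or(1, 0) == 1 and rule_or(0, 0) == 0
```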
Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.
Juang, C F; Lin, J Y; Lin, C T
2000-01-01
An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as the consumed CPU time, are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules) are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. For the TSK-type fuzzy rule in particular, the proposed learning algorithm selects only the significant input variables to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. The proposed SEFC has been verified to be efficient and superior on these control problems, and in comparisons with some traditional GA-based fuzzy systems.
On Some Assumptions of the Null Hypothesis Statistical Testing
ERIC Educational Resources Information Center
Patriota, Alexandre Galvão
2017-01-01
Bayesian and classical statistical approaches are based on different types of logical principles. In order to avoid mistaken inferences and misguided interpretations, the practitioner must respect the inference rules embedded into each statistical method. Ignoring these principles leads to the paradoxical conclusions that the hypothesis…
Krieger, J R; Ogle, M E; McFaline-Figueroa, J; Segar, C E; Temenoff, J S; Botchwey, E A
2016-01-01
Tissue repair processes are characterized by the biphasic recruitment of distinct subpopulations of blood monocytes, including classical ("inflammatory") monocytes (IMs, Ly6C(hi)Gr1(+)CX3CR1(lo)) and non-classical anti-inflammatory monocytes (AMs, Ly6C(lo)Gr1(-)CX3CR1(hi)). Drug-eluting biomaterial implants can be used to tune the endogenous repair process by the preferential recruitment of pro-regenerative cells. To enhance recruitment of AMs during inflammatory injury, a novel N-desulfated heparin-containing poly(ethylene glycol) diacrylate (PEG-DA) hydrogel was engineered to deliver exogenous stromal derived factor-1α (SDF-1α), utilizing the natural capacity of heparin to sequester and release growth factors. SDF-1α released from the hydrogels maintained its bioactivity and stimulated chemotaxis of bone marrow cells in vitro. Intravital microscopy and flow cytometry demonstrated that SDF-1α hydrogels implanted in a murine dorsal skinfold window chamber promoted spatially-localized recruitment of AMs relative to unloaded internal control hydrogels. SDF-1α delivery stimulated arteriolar remodeling that was correlated with AM enrichment in the injury niche. SDF-1α, but not unloaded control hydrogels, supported sustained arteriogenesis and microvascular network growth through 7 days. The recruitment of AMs correlated with parameters of vascular remodeling suggesting that tuning the innate immune response by biomaterial SDF-1α release is a promising strategy for promoting vascular remodeling in a spatially controlled manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
Purmann, Sascha; Pollmann, Stefan
2015-01-01
To process information selectively and to continuously fine-tune the selectivity of information processing are important abilities for successful goal-directed behavior. One phenomenon thought to represent this fine-tuning is the conflict adaptation effect in interference tasks, i.e., the reduction of interference after an incompatible trial and when incompatible trials are frequent. The neurocognitive mechanisms of these effects are currently only partly understood, and results from brain imaging studies so far are mixed. In our study we validate and extend recent findings by examining adaptation to recent conflict in the classical Stroop task using functional magnetic resonance imaging. Consistent with previous research, we found increased activity in a fronto-parietal network comprising the medial prefrontal cortex, ventro-lateral prefrontal cortex, and posterior parietal cortex when contrasting incompatible with compatible trials. These areas have been associated with attentional processes and might reflect increased cognitive conflict and its resolution during incompatible trials. While carefully controlling for non-attentional sequential effects, we found smaller Stroop interference after an incompatible trial (conflict adaptation effect). These behavioral conflict adaptation effects were accompanied by changes in activity in visual color-selective areas (V4, V4α), while there was no modulation by previous-trial compatibility in a visual word-selective area (VWFA). Our results provide further evidence for the notion that adaptation to recent conflict seems to be based mainly on enhancement of the processing of task-relevant information.
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth; Ohsaka, Kenichi
2003-01-01
Although the high pressure multicomponent fluid conservation equations have already been derived and approximately validated for binary mixtures by this PI, the validation of the multicomponent theory is hampered by the lack of existing mixing rules for property calculations. Classical gas dynamics theory can provide property mixing-rules at low pressures exclusively. While thermal conductivity and viscosity high-pressure mixing rules have been documented in the literature, there is no such equivalent for the diffusion coefficients and the thermal diffusion factors. The primary goal of this investigation is to extend the low pressure mixing rule theory to high pressures and validate the new theory with experimental data from levitated single drops. The two properties that will be addressed are the diffusion coefficients and the thermal diffusion factors. To validate/determine the property calculations, ground-based experiments from levitated drops are being conducted.
Design issues for a reinforcement-based self-learning fuzzy controller
NASA Technical Reports Server (NTRS)
Yen, John; Wang, Haojin; Dauherity, Walter
1993-01-01
Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control: easy implementation, accommodation of natural language, the ability to cover a wider range of operating conditions, and others. One major obstacle that hinders their broader application is the lack of a systematic way to develop and modify the rules, and as a result the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is self-learning fuzzy logic controllers (SFLC) that use reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of an SFLC for application to chemical processes are discussed, and its performance is compared with that of PID and self-tuning fuzzy logic controllers.
Adaptive Critic-based Neurofuzzy Controller for the Steam Generator Water Level
NASA Astrophysics Data System (ADS)
Fakhrazari, Amin; Boroushaki, Mehrdad
2008-06-01
In this paper, an adaptive critic-based neurofuzzy controller is presented for water level regulation of nuclear steam generators. The problem has been of great concern for many years, as the steam generator is a highly nonlinear system showing inverse response dynamics, especially at low operating power levels. Fuzzy critic-based learning is a reinforcement learning method based on dynamic programming. The only information available to the critic agent is the system feedback, which is interpreted as the last action the controller has performed in the previous state. The signal produced by the critic agent is used alongside the backpropagation-of-error algorithm to tune online the conclusion parts of the fuzzy inference rules. The critic agent here has a proportional-derivative structure and the fuzzy rule base has nine rules. The proposed controller shows satisfactory transient responses, disturbance rejection and robustness to model uncertainty. Its simple design procedure and structure nominate it as one of the suitable controller designs for steam generator water level control in the nuclear power plant industry.
Using GO-WAR for mining cross-ontology weighted association rules.
Agapito, Giuseppe; Cannataro, Mario; Guzzi, Pietro Hiram; Milano, Marianna
2015-07-01
The Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products. The process of association is referred to as annotation. The relevance and the specificity of both GO terms and annotations are evaluated by a measure defined as information content (IC). The analysis of annotated data is thus an important challenge for bioinformatics. Different approaches to such analysis exist; among them, association rules (AR) may provide useful knowledge, and they have been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source nor the importance of an annotation, which leads to the generation of candidate rules with low IC. This paper presents GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules. GO-WAR can extract association rules with a high level of IC without loss of support and confidence from a dataset of annotated data. A case study using GO-WAR on publicly available GO annotation datasets demonstrates that our method outperforms current state-of-the-art approaches. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
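One simple way to fold IC into rule mining, weighting an itemset's support by the mean IC of its GO terms, is sketched below; the exact weighting scheme used by GO-WAR may differ, and the transactions and IC values are invented for illustration.

```python
def support_count(items, transactions):
    """Number of transactions (annotated gene products) containing the whole itemset."""
    return sum(1 for t in transactions if items <= t)

def weighted_support(items, transactions, ic):
    """Support weighted by the mean information content (IC) of the itemset."""
    mean_ic = sum(ic[i] for i in items) / len(items)
    return mean_ic * support_count(items, transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Plain confidence of the rule antecedent -> consequent."""
    return support_count(antecedent | consequent, transactions) / support_count(antecedent, transactions)

# Toy data: each transaction is the set of GO terms annotating one gene product.
transactions = [{"GO:A", "GO:B"}, {"GO:A", "GO:B", "GO:C"}, {"GO:A"}, {"GO:B", "GO:C"}]
ic = {"GO:A": 0.4, "GO:B": 1.1, "GO:C": 2.3}   # illustrative IC values
print(weighted_support({"GO:A", "GO:B"}, transactions, ic))   # 0.375
print(confidence({"GO:A"}, {"GO:B"}, transactions))           # ~0.67
```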
Born’s rule as signature of a superclassical current algebra
NASA Astrophysics Data System (ADS)
Fussy, S.; Mesa Pascasio, J.; Schwabl, H.; Grössing, G.
2014-04-01
We present a new tool for calculating the interference patterns and particle trajectories of a double-, three- and N-slit system on the basis of an emergent sub-quantum theory developed by our group throughout the last years. The quantum itself is considered as an emergent system representing an off-equilibrium steady state oscillation maintained by a constant throughput of energy provided by a classical zero-point energy field. We introduce the concept of a “relational causality” which allows for evaluating structural interdependences of different systems levels, i.e. in our case of the relations between partial and total probability density currents, respectively. Combined with the application of 21st century classical physics like, e.g., modern nonequilibrium thermodynamics, we thus arrive at a “superclassical” theory. Within this framework, the proposed current algebra directly leads to a new formulation of the guiding equation which is equivalent to the original one of the de Broglie-Bohm theory. By proving the absence of third order interferences in three-path systems it is shown that Born’s rule is a natural consequence of our theory. Considering the series of one-, double-, or, generally, of N-slit systems, with the first appearance of an interference term in the double slit case, we can explain the violation of Sorkin’s first order sum rule, just as the validity of all higher order sum rules. Moreover, the Talbot patterns and Talbot distance for an arbitrary N-slit device can be reproduced exactly by our model without any quantum physics tool.
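For orientation, the interference hierarchy referred to here is usually written as follows (standard definitions rather than the paper's notation); Born's rule makes the two-path term nonzero while forcing the three-path (Sorkin) term to vanish.

```latex
% Two-path interference term: nonzero whenever interference is present.
I_{AB}  = P_{AB} - P_{A} - P_{B},
% Three-path (Sorkin) term: vanishes when Born's rule holds.
I_{ABC} = P_{ABC} - P_{AB} - P_{AC} - P_{BC} + P_{A} + P_{B} + P_{C} = 0.
```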
Nanotechnology for the forest products industry
Theodore Wegner; Philip Jones
2005-01-01
Nanotechnology is defined as the manipulation of materials measuring 100 nanometers or less in at least one dimension. In addition, nanomaterials must display unique properties and characteristics that are different than their bulk properties. At the 1-nanometer (nm) level, quantum mechanics rules, and at dimensions above 100 nm, classical continuum mechanics, physics...
ERIC Educational Resources Information Center
Rommel-Esham, Katie; Constable, Susan D.
2006-01-01
In this article, the authors discuss a literature-based activity that helps students discover the importance of making detailed observations. In an inspiring children's classic book, "Everybody Needs a Rock" by Byrd Baylor (1974), the author invites readers to go "rock finding," laying out 10 rules for finding a "perfect" rock. In this way, the…
Summerhill School. A New View of Childhood.
ERIC Educational Resources Information Center
Neill, A. S.; Lamb, Albert, Ed.
This revised and expanded version of the 1960 classic "Summerhill," edited by Albert Lamb, portrays Summerhill School throughout its development. The book reveals A. S. Neill's fundamental belief in the self-regulated school in which children make their own rules and determine for themselves how much they will study. Neill's commitment…
War Coverage: The Case of the Falklands.
ERIC Educational Resources Information Center
Bellando, Edourado
The Falkland-Malvinas conflict is a classic example of how a government can manage news in wartime. The rules of the game as evinced by the British government and Ministry of Defense were simple and effective. They controlled access to the fighting, controlled all communications facilities, excluded all neutral correspondents and carefully…
ERIC Educational Resources Information Center
2002
This document contains three papers from a symposium on issues of human resource development (HRD). "The Complex Roots of Human Resource Development" (Monica Lee) discusses the roots of HRD within the framework of the following views of management: (1) classic (the view that managers must be able to create appropriate rules and…
A blueprint for demonstrating quantum supremacy with superconducting qubits.
Neill, C; Roushan, P; Kechedzhi, K; Boixo, S; Isakov, S V; Smelyanskiy, V; Megrant, A; Chiaro, B; Dunsworth, A; Arya, K; Barends, R; Burkett, B; Chen, Y; Chen, Z; Fowler, A; Foxen, B; Giustina, M; Graff, R; Jeffrey, E; Huang, T; Kelly, J; Klimov, P; Lucero, E; Mutus, J; Neeley, M; Quintana, C; Sank, D; Vainsencher, A; Wenner, J; White, T C; Neven, H; Martinis, J M
2018-04-13
A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Tuning quantum measurements to control chaos.
Eastman, Jessica K; Hope, Joseph J; Carvalho, André R R
2017-03-20
Environment-induced decoherence has long been recognised as being of crucial importance in the study of chaos in quantum systems. In particular, the exact form and strength of the system-environment interaction play a major role in the quantum-to-classical transition of chaotic systems. In this work we focus on the effect of varying monitoring strategies, i.e. for a given decoherence model and a fixed environmental coupling, there is still freedom on how to monitor a quantum system. We show here that there is a region between the deep quantum regime and the classical limit where the choice of the monitoring parameter allows one to control the complex behaviour of the system, leading to either the emergence or suppression of chaos. Our work shows that this is a result from the interplay between quantum interference effects induced by the nonlinear dynamics and the effectiveness of the decoherence for different measurement schemes.
Plasmonics of 2D Nanomaterials: Properties and Applications
Li, Yu; Li, Ziwei; Chi, Cheng; Shan, Hangyong; Zheng, Liheng
2017-01-01
Plasmonics has developed for decades in the field of condensed matter physics and optics. Based on the classical Maxwell theory, collective excitations exhibit profound light‐matter interaction properties beyond classical physics in lots of material systems. With the development of nanofabrication and characterization technology, ultra‐thin two‐dimensional (2D) nanomaterials attract tremendous interest and show exceptional plasmonic properties. Here, we elaborate the advanced optical properties of 2D materials especially graphene and monolayer molybdenum disulfide (MoS2), review the plasmonic properties of graphene, and discuss the coupling effect in hybrid 2D nanomaterials. Then, the plasmonic tuning methods of 2D nanomaterials are presented from theoretical models to experimental investigations. Furthermore, we reveal the potential applications in photocatalysis, photovoltaics and photodetections, based on the development of 2D nanomaterials, we make a prospect for the future theoretical physics and practical applications. PMID:28852608
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, traditionally known as an edge detection operator, can be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising is simplified to reducing each pixel's value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with some classic algorithms, such as Wiener and Total Variation based filters, for Gaussian noise. The method is also compared with the state-of-the-art BM3D method on some images. The algorithm appears to be easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
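A minimal sketch of the iterative update described above is given below. The local-variance-based noise weight and the sign convention of the Laplacian kernel are assumptions of this sketch rather than the paper's exact estimator.

```python
import numpy as np

# Laplacian kernel with a positive centre, so that subtracting the response
# "reduces the pixel value by its Laplacian", as described above.
LAPLACIAN_3x3 = np.array([[0., -1., 0.],
                          [-1., 4., -1.],
                          [0., -1., 0.]])
BOX_3x3 = np.ones((3, 3)) / 9.0

def convolve3x3(img, kernel):
    """Naive 3x3 convolution with edge replication (enough for a sketch)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def laplacian_denoise(img, iterations=20, lam=0.2):
    """Iteratively subtract the Laplacian weighted by a local noise estimate
    (here a local-variance weight in [0, 1), which is an assumption)."""
    img = img.astype(float).copy()
    for _ in range(iterations):
        lap = convolve3x3(img, LAPLACIAN_3x3)
        mean = convolve3x3(img, BOX_3x3)
        var = np.maximum(convolve3x3(img ** 2, BOX_3x3) - mean ** 2, 0.0)
        weight = var / (var + var.mean() + 1e-12)
        img -= lam * weight * lap
    return img

rng = np.random.default_rng(1)
noisy = 128.0 + 20.0 * rng.standard_normal((64, 64))
print(noisy.std(), laplacian_denoise(noisy).std())  # the iterations lower the noise level
```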
More on Weinberg's no-go theorem in quantum gravity
NASA Astrophysics Data System (ADS)
Nagahama, Munehiro; Oda, Ichiro
2018-05-01
We complement Weinberg's no-go theorem on the cosmological constant problem in quantum gravity by generalizing it to the case of a scale-invariant theory. Our analysis makes use of the effective action and the BRST symmetry in a manifestly covariant quantum gravity instead of the classical Lagrangian density and the G L (4 ) symmetry in classical gravity. In this sense, our proof is very general since it does not depend on details of quantum gravity and holds true for general gravitational theories which are invariant under diffeomorphisms. As an application of our theorem, we comment on an idea that in the asymptotic safety scenario the functional renormalization flow drives a cosmological constant to zero, solving the cosmological constant problem without reference to fine tuning of parameters. Finally, we also comment on the possibility of extending the Weinberg theorem in quantum gravity to the case where the translational invariance is spontaneously broken.
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs able to generate fractal structures in a two-dimensional phase space, and using them to modify the current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared with alternative classical mutation operators.
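A sketch of the mechanism under stated assumptions: a generic contractive 2D IFS (here Sierpinski-triangle affine maps, not the paper's IFSs) is iterated with the chaos game, and the resulting fractal points replace Gaussian or Cauchy random numbers as mutation offsets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder contractive IFS (Sierpinski triangle); the paper's IFSs may differ.
IFS_MAPS = [
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.00, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.50, 0.0])),
    (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5])),
]

def ifs_points(n_points, burn_in=20):
    """Generate points on the IFS attractor via the chaos game."""
    x, pts = np.zeros(2), []
    for i in range(n_points + burn_in):
        A, b = IFS_MAPS[rng.integers(len(IFS_MAPS))]
        x = A @ x + b
        if i >= burn_in:
            pts.append(x.copy())
    return np.array(pts)

def ifs_mutate(individual, scale=0.1):
    """Perturb the genes with centred fractal offsets instead of Gaussian noise."""
    offsets = ifs_points((len(individual) + 1) // 2)
    offsets -= offsets.mean(axis=0)                  # centre the attractor (assumed choice)
    return individual + scale * offsets.ravel()[: len(individual)]

parent = rng.normal(size=10)
child = ifs_mutate(parent)
```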
Quantum Oscillations Can Prevent the Big Bang Singularity in an Einstein-Dirac Cosmology
NASA Astrophysics Data System (ADS)
Finster, Felix; Hainzl, Christian
2010-01-01
We consider a spatially homogeneous and isotropic system of Dirac particles coupled to classical gravity. The dust and radiation dominated closed Friedmann-Robertson-Walker space-times are recovered as limiting cases. We find a mechanism where quantum oscillations of the Dirac wave functions can prevent the formation of the big bang or big crunch singularity. Thus before the big crunch, the collapse of the universe is stopped by quantum effects and reversed to an expansion, so that the universe opens up entering a new era of classical behavior. Numerical examples of such space-times are given, and the dependence on various parameters is discussed. Generically, one has a collapse after a finite number of cycles. By fine-tuning the parameters we construct an example of a space-time which satisfies the dominant energy condition and is time-periodic, thus running through an infinite number of contraction and expansion cycles.
Gravitational effective action at second order in curvature and gravitational waves
NASA Astrophysics Data System (ADS)
Calmet, Xavier; Capozziello, Salvatore; Pryer, Daniel
2017-09-01
We consider the full effective theory for quantum gravity at second order in curvature including non-local terms. We show that the theory contains two new degrees of freedom beyond the massless graviton: namely a massive spin-2 ghost and a massive scalar field. Furthermore, we show that it is impossible to fine-tune the parameters of the effective action to eliminate completely the classical spin-2 ghost because of the non-local terms in the effective action. Being a classical field, it is not clear anyway that this ghost is problematic. It simply implies a repulsive contribution to Newton's potential. We then consider how to extract the parameters of the effective action and show that it is possible to measure, at least in principle, the parameters of the local terms independently of each other using a combination of observations of gravitational waves and measurements performed by pendulum type experiments searching for deviations of Newton's potential.
Attention operates uniformly throughout the classical receptive field and the surround.
Verhoef, Bram-Ernst; Maunsell, John Hr
2016-08-22
Shifting attention among visual stimuli at different locations modulates neuronal responses in heterogeneous ways, depending on where those stimuli lie within the receptive fields of neurons. Yet how attention interacts with the receptive-field structure of cortical neurons remains unclear. We measured neuronal responses in area V4 while monkeys shifted their attention among stimuli placed in different locations within and around neuronal receptive fields. We found that attention interacts uniformly with the spatially-varying excitation and suppression associated with the receptive field. This interaction explained the large variability in attention modulation across neurons, and a non-additive relationship among stimulus selectivity, stimulus-induced suppression and attention modulation that has not been previously described. A spatially-tuned normalization model precisely accounted for all observed attention modulations and for the spatial summation properties of neurons. These results provide a unified account of spatial summation and attention-related modulation across both the classical receptive field and the surround.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maharaj, Akash V.; Rosenberg, Elliott W.; Hristov, Alexander T.
Here, the paradigmatic example of a continuous quantum phase transition is the transverse field Ising ferromagnet. In contrast to classical critical systems, whose properties depend only on symmetry and the dimension of space, the nature of a quantum phase transition also depends on the dynamics. In the transverse field Ising model, the order parameter is not conserved, and increasing the transverse field enhances quantum fluctuations until they become strong enough to restore the symmetry of the ground state. Ising pseudospins can represent the order parameter of any system with a twofold degenerate broken-symmetry phase, including electronic nematic order associated with spontaneous point-group symmetry breaking. Here, we show for the representative example of orbital-nematic ordering of a non-Kramers doublet that an orthogonal strain or a perpendicular magnetic field plays the role of the transverse field, thereby providing a practical route for tuning appropriate materials to a quantum critical point. While the transverse fields are conjugate to seemingly unrelated order parameters, their nontrivial commutation relations with the nematic order parameter, which can be represented by a Berry-phase term in an effective field theory, intrinsically intertwine the different order parameters.
A cryostatic, fast scanning, wideband NQR spectrometer for the VHF range
NASA Astrophysics Data System (ADS)
Scharfetter, Hermann; Bödenler, Markus; Narnhofer, Dominik
2018-01-01
In the search for a novel MRI contrast agent which relies on T1 shortening due to quadrupolar interaction between Bi nuclei and protons, a fast scanning wideband system for zero-field nuclear quadrupole resonance (NQR) spectroscopy is required. Established NQR probeheads with motor-driven tune/match stages are usually bulky and slow, which can be prohibitive when it comes to Bi compounds with low SNR (excessive averaging) and long quadrupolar T1 times. Moreover, many experiments yield better results at low temperatures such as 77 K (liquid nitrogen, LN), thus requiring easy-to-use cryo-probeheads. In this paper we present electronically tuned wideband probeheads for bands in the frequency range 20-120 MHz which can be immersed in LN and which enable very fast explorative scans over the whole range. To this end we apply an interleaved subspectrum sampling strategy (ISS) which relies on the electronic tuning capability. The superiority of the new concept is demonstrated with an experimental scan of triphenylbismuth from 24 to 116 MHz, both at room temperature and in LN. Especially for the first transition, which exhibits extremely long T1 times (64 ms) and low signal, the new approach allows an acceleration by a factor of more than 100 when compared to classical methods.
Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals
NASA Technical Reports Server (NTRS)
Thompson, David R.; Mandrake, Lukas; Green, Robert O.
2013-01-01
Mapping localized spectral features in large images demands sensitive and robust detection algorithms. Two aspects of large images that can harm matched-filter detection performance are addressed simultaneously. First, multimodal backgrounds may thwart the typical Gaussian model. Second, outlier features can trigger false detections from large projections onto the target vector. Two state-of-the-art approaches are combined that independently address outlier false positives and multimodal backgrounds. The background clustering models multimodal backgrounds, and the mixture tuned matched filter (MT-MF) addresses outliers. Combining the two methods captures significant additional performance benefits. The resulting mixture tuned clutter matched filter (MT-CMF) shows effective performance on simulated and airborne datasets. The classical MNF transform was applied, followed by k-means clustering. Then, each cluster's mean, covariance, and the corresponding eigenvalues were estimated. This yields a cluster-specific matched filter estimate as well as a cluster-specific feasibility score to flag outlier false positives. The technology described is a proof of concept that may be employed in future target detection and mapping applications for remote imaging spectrometers. It is of most direct relevance to JPL proposals for airborne and orbital hyperspectral instruments. Applications include subpixel target detection in hyperspectral scenes for military surveillance. Earth science applications include mineralogical mapping, species discrimination for ecosystem health monitoring, and land use classification.
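A compact sketch of the cluster-wise filtering idea, assuming the spectra have already been MNF-transformed upstream; the residual-based feasibility score here is a stand-in for the full mixture-tuned metric, not the published formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def mt_cmf_scores(pixels, target, n_clusters=5, reg=1e-6):
    """Per-pixel matched-filter abundance and an outlier-flagging score.

    pixels: (N, B) spectra; target: (B,) target signature.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    mf = np.zeros(len(pixels))
    feas = np.zeros(len(pixels))
    for c in range(n_clusters):
        idx = labels == c
        X = pixels[idx]
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + reg * np.eye(X.shape[1])
        cov_inv = np.linalg.inv(cov)
        d = target - mu
        alpha = (X - mu) @ cov_inv @ d / (d @ cov_inv @ d)   # cluster-specific matched filter
        resid = (X - mu) - np.outer(alpha, d)                # part not explained by the target
        feas[idx] = np.einsum('ij,jk,ik->i', resid, cov_inv, resid)
        mf[idx] = alpha
    return mf, feas

pixels = np.random.rand(500, 8)        # toy scene: 500 pixels, 8 bands
target = np.random.rand(8)             # toy target signature
mf, feas = mt_cmf_scores(pixels, target)
```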
The generation of arbitrary order, non-classical, Gauss-type quadrature for transport applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, Peter J., E-mail: peter.spence@awe.co.uk
A method is presented, based upon the Stieltjes method (1884), for the determination of non-classical Gauss-type quadrature rules, and the associated sets of abscissae and weights. The method is then used to generate a number of quadrature sets, to arbitrary order, which are primarily aimed at deterministic transport calculations. The quadrature rules and sets detailed include arbitrary order reproductions of those presented by Abu-Shumays in [4,8] (known as the QR sets, but labelled QRA here), in addition to a number of new rules and associated sets; these are generated in a similar way, and we label them the QRS quadrature sets. The method presented here shifts the inherent difficulty (encountered by Abu-Shumays) associated with solving the non-linear moment equations, particular to the required quadrature rule, to one of the determination of non-classical weight functions and the subsequent calculation of various associated inner products. Once a quadrature rule has been written in a standard form, with an associated weight function having been identified, the calculation of the required inner products is achieved using specific variable transformations, in addition to the use of rapid, highly accurate quadrature suited to this purpose. The associated non-classical Gauss quadrature sets can then be determined, and this can be done to any order very rapidly. In this paper, instead of listing weights and abscissae for the different quadrature sets detailed (of which there are a number), the MATLAB code written to generate them is included as Appendix D. The accuracy and efficacy (in a transport setting) of the quadrature sets presented are not tested in this paper (although the accuracy of the QRA quadrature sets has been studied in [12,13]), but comparisons to tabulated results listed in [8] are made. When comparisons are made with one of the azimuthal QRA sets detailed in [8], the inherent difficulty in the method of generation, used there, becomes apparent, with the highest order tabulated sets showing unexpected anomalies. Although not in an actual transport setting, the accuracy of the sets presented here is assessed to some extent, by using them to approximate integrals (over an octant of the unit sphere) of various high order spherical harmonics. When this is done, errors in the tabulated QRA sets present themselves at the highest tabulated orders, whilst combinations of the new QRS quadrature sets offer some improvements in accuracy over the original QRA sets. Finally, in order to offer a quick, visual understanding of the various quadrature sets presented, when combined to give product sets for the purposes of integrating functions confined to the surface of a sphere, three-dimensional representations of points located on an octant of the unit sphere (as in [8,12]) are shown.
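A sketch of the Stieltjes procedure in outline: recurrence coefficients of the monic orthogonal polynomials are built from inner products against the weight function (here evaluated with a dense Gauss-Legendre reference rule rather than the paper's tailored variable transformations), and the nodes and weights then follow from the Jacobi matrix (Golub-Welsch). The example weight function is illustrative only.

```python
import numpy as np

def stieltjes_gauss(weight, a, b, n, n_ref=4000):
    """n-point Gauss rule for an arbitrary positive weight function on [a, b]."""
    # Dense reference rule used only to evaluate the inner products <f, g>_w
    xr, wr = np.polynomial.legendre.leggauss(n_ref)
    xr = 0.5 * (b - a) * xr + 0.5 * (b + a)
    wr = 0.5 * (b - a) * wr * weight(xr)

    alpha, beta = np.zeros(n), np.zeros(n)
    p_prev, p = np.zeros_like(xr), np.ones_like(xr)   # monic pi_{-1}, pi_0
    norm_prev = wr.sum()
    for k in range(n):
        norm_k = (wr * p * p).sum()
        alpha[k] = (wr * xr * p * p).sum() / norm_k
        if k > 0:
            beta[k] = norm_k / norm_prev
        norm_prev = norm_k
        p_prev, p = p, (xr - alpha[k]) * p - beta[k] * p_prev   # three-term recurrence

    # Golub-Welsch: eigen-decompose the symmetric tridiagonal Jacobi matrix
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = wr.sum() * vecs[0, :] ** 2              # beta_0 = integral of the weight
    return nodes, weights

# Illustrative non-classical weight: w(x) = sqrt(x) * exp(-x) on [0, 4]
x, w = stieltjes_gauss(lambda t: np.sqrt(t) * np.exp(-t), 0.0, 4.0, 6)
print(np.sum(w * x**2))   # approximates the weighted integral of x^2
```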
A Novel Modulation Classification Approach Using Gabor Filter Network
Ghauri, Sajjad Ahmed; Qureshi, Ijaz Mansoor; Cheema, Tanveer Ahmed; Malik, Aqdas Naveed
2014-01-01
A Gabor filter network based approach is used for feature extraction and classification of digital modulated signals by adaptively tuning the parameters of the Gabor filter network. Modulation classification of digitally modulated signals is done under the influence of additive white Gaussian noise (AWGN). The modulations considered for the classification purpose are PSK 2 to 64, FSK 2 to 64, and QAM 4 to 64. The Gabor filter network uses a two-layer structure: the first (input) layer constitutes the adaptive feature-extraction part and the second layer constitutes the signal-classification part. The Gabor atom parameters are tuned using the delta rule, and the weights of the Gabor filter are updated using the least mean square (LMS) algorithm. The simulation results show that the proposed modulation classification algorithm has high classification accuracy at low signal-to-noise ratio (SNR) on an AWGN channel. PMID:25126603
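A sketch of the two-layer idea under assumptions: a small bank of real Gabor atoms supplies the first-layer features, and the second-layer weights are trained with plain LMS. The delta-rule tuning of the atom parameters themselves, and the actual PSK/FSK/QAM feature setup, are omitted; the toy signals only illustrate the mechanics.

```python
import numpy as np

def gabor_atom(t, u, s, f):
    """Real Gabor atom: Gaussian envelope centred at u with width s, frequency f."""
    return np.exp(-np.pi * ((t - u) / s) ** 2) * np.cos(2 * np.pi * f * (t - u))

def gabor_features(signal, t, atoms):
    """First (feature-extraction) layer: inner products with each Gabor atom."""
    return np.array([signal @ gabor_atom(t, *p) for p in atoms])

def lms_train(X, y, mu=0.05, epochs=100):
    """Second (classification) layer trained with the least-mean-square rule."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            e = yi - w @ xi          # instantaneous error
            w += mu * e * xi         # LMS update
    return w

# Toy two-class problem: which of two carrier frequencies dominates a noisy signal
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
atoms = [(0.5, 0.3, f) for f in (5, 10, 20, 40)]   # assumed (centre, width, frequency) triples
signals = [np.cos(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(512)
           for f0 in (5, 40) for _ in range(20)]
labels = np.array([+1] * 20 + [-1] * 20)
X = np.array([gabor_features(s, t, atoms) for s in signals])
X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12   # keep the plain LMS step stable
w = lms_train(X, labels)
```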
Bi-directional ROADM with one pair of NxN cyclic-AWGs for over N wavelength channels configuration
NASA Astrophysics Data System (ADS)
Tsai, Cheng-Mu
2018-01-01
This paper presents a bidirectional optical add-drop multiplexer (BROADM) that permits white spectral channel input in a bidirectional configuration. The filter routing rule of the arrayed waveguide grating (AWG) is applied to the wavelength channels (WCs) that need to be added and dropped by using the corresponding tunable fiber Bragg gratings (FBGs). The other WCs pass through to the output by tuning the FBG filter spectra away from those WCs. The bandwidth between two adjacent WCs of each pair of ports in the AWG is wider than one channel spacing, so the FBG filter spectrum can be tuned into the free spectral range (FSR) region to realize the wavelength routing function without interfering with other WCs. The WCs can be flexibly handled by installing the corresponding tunable FBGs. Therefore, the proposed BROADM is more flexible and has higher transmission capacity in the optical network.
Origin of the fundamental plane of elliptical galaxies in the Coma cluster without fine-tuning
NASA Astrophysics Data System (ADS)
Chiu, Mu-Chen; Ko, Chung-Ming; Shu, Chenggang
2017-03-01
Thirty years after the discovery of the fundamental plane, explanations of the tilt of the fundamental plane with respect to the virial plane are still in need of fine-tuning. In this paper, we try to explore the origin of this tilt from the perspective of modified Newtonian dynamics (MOND) by applying the 16 Coma galaxies available in J. Thomas et al. [Mon. Not. R. Astron. Soc. 415, 545 (2011), 10.1111/j.1365-2966.2011.18725.x]. Based on the mass models that can reproduce de Vaucouleurs' law closely, we find that the tilt of the traditional fundamental plane is naturally explained by the simple form of the MONDian interpolating function, if we assume a well motivated choice of anisotropic velocity distribution, and adopt the Kroupa or Salpeter stellar mass-to-light ratio. Our analysis does not necessarily rule out a varying stellar mass-to-light ratio.
Cornering natural SUSY at LHC Run II and beyond
NASA Astrophysics Data System (ADS)
Buckley, Matthew R.; Feld, David; Macaluso, Sebastian; Monteux, Angelo; Shih, David
2017-08-01
We derive the latest constraints on various simplified models of natural SUSY with light higgsinos, stops and gluinos, using a detailed and comprehensive reinterpretation of the most recent 13 TeV ATLAS and CMS searches with ˜ 15 fb-1 of data. We discuss the implications of these constraints for fine-tuning of the electroweak scale. While the most "vanilla" version of SUSY (the MSSM with R-parity and flavor-degenerate sfermions) with 10% fine-tuning is ruled out by the current constraints, models with decoupled valence squarks or reduced missing energy can still be fully natural. However, in all of these models, the mediation scale must be extremely low ( <100 TeV). We conclude by considering the prospects for the high-luminosity LHC era, where we expect the current limits on particle masses to improve by up to ˜ 1 TeV, and discuss further model-building directions for natural SUSY that are motivated by this work.
Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.
Naso, David; Turchiano, Biagio
2005-04-01
In many manufacturing environments, automated guided vehicles are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real time with simple dispatching rules. This paper proposes an automated guided vehicle dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate in the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel on a longer time horizon. Moreover, we adopt a genetic algorithm to tune the weights associated with each decision criterion in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules, and with other recently proposed multicriteria dispatching strategies also based on computational intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestions, faults) confirms its effectiveness.
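A sketch of one dispatching decision under assumptions: each candidate vehicle/load pair is scored by a weighted sum of fuzzified criteria (empty-vehicle travel, load waiting time, and a simple look-ahead term), with the weight vector standing in for the GA-tuned weights. The criteria, membership functions, and numbers are all illustrative.

```python
import numpy as np

def ramp_up(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def ramp_down(x, a, b):
    """Membership falling linearly from 1 at a to 0 at b."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def dispatch_score(candidate, weights):
    """Weighted fuzzy multicriteria score for one vehicle/load pair."""
    mu_short_empty = ramp_down(candidate["empty_travel_m"], 0.0, 200.0)   # little empty travel
    mu_urgent      = ramp_up(candidate["waiting_time_s"], 0.0, 600.0)     # long-waiting load
    mu_lookahead   = ramp_up(candidate["future_demand"], 0.0, 1.0)        # demand near drop-off
    return float(np.dot(weights, [mu_short_empty, mu_urgent, mu_lookahead]))

# Weights would be tuned off-line by a genetic algorithm; these are placeholders.
weights = np.array([0.4, 0.4, 0.2])
candidates = [
    {"vehicle": 1, "empty_travel_m": 50.0,  "waiting_time_s": 120.0, "future_demand": 0.2},
    {"vehicle": 2, "empty_travel_m": 180.0, "waiting_time_s": 540.0, "future_demand": 0.7},
]
best = max(candidates, key=lambda c: dispatch_score(c, weights))
```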
Optimizing Reservoir Operation to Adapt to the Climate Change
NASA Astrophysics Data System (ADS)
Madadgar, S.; Jung, I.; Moradkhani, H.
2010-12-01
Climate change and upcoming variation in flood timing necessitate the adaptation of current rule curves developed for the operation of water reservoirs so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model, combining the standard operation policy with a global optimization method, tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance assessment using statistical indices including reliability, resilience, vulnerability and sustainability is made.
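A sketch of the simulation half of such a framework, assuming a monthly rule curve, a standard operation policy, and an objective that counts demand shortfalls and flood-limit violations; the storage units, limits, and rule-curve parameterisation are placeholders that a global optimizer would wrap.

```python
import numpy as np

def simulate_sop(inflow, demand, rule_curve, capacity, flood_limit, s0):
    """Standard operation policy against a 12-value monthly rule curve.

    Returns the number of demand shortfalls and flood-limit violations.
    """
    storage, shortfalls, floods = s0, 0, 0
    for t, q in enumerate(inflow):
        target = rule_curve[t % 12]                     # target storage for this month
        water = storage + q
        release = demand[t] + max(water - demand[t] - target, 0.0)   # draw down toward the rule curve
        release = min(release, water)                   # cannot release more than is available
        shortfalls += release < demand[t]
        floods += release > flood_limit
        storage = min(water - release, capacity)        # excess above capacity spills (simplified)
    return shortfalls, floods

def failure_count(rule_curve, inflow, demand):
    """Objective a global optimizer (e.g. over the 12 rule-curve values) would minimise."""
    s, f = simulate_sop(inflow, demand, rule_curve, capacity=200.0, flood_limit=80.0, s0=100.0)
    return s + f

rng = np.random.default_rng(0)
inflow = rng.gamma(shape=2.0, scale=20.0, size=120)     # 10 synthetic years of monthly inflow
demand = np.full(120, 30.0)
print(failure_count(np.full(12, 120.0), inflow, demand))
```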
Deficits in Category Learning in Older Adults: Rule-Based Versus Clustering Accounts
2017-01-01
Memory research has long been one of the key areas of investigation for cognitive aging researchers but only in the last decade or so has categorization been used to understand age differences in cognition. Categorization tasks focus more heavily on the grouping and organization of items in memory, and often on the process of learning relationships through trial and error. Categorization studies allow researchers to more accurately characterize age differences in cognition: whether older adults show declines in the way in which they represent categories with simple rules or declines in representing categories by similarity to past examples. In the current study, young and older adults participated in a set of classic category learning problems, which allowed us to distinguish between three hypotheses: (a) rule-complexity: categories were represented exclusively with rules and older adults had differential difficulty when more complex rules were required, (b) rule-specific: categories could be represented either by rules or by similarity, and there were age deficits in using rules, and (c) clustering: similarity was mainly used and older adults constructed a less-detailed representation by lumping more items into fewer clusters. The ordinal levels of performance across different conditions argued against rule-complexity, as older adults showed greater deficits on less complex categories. The data also provided evidence against rule-specificity, as single-dimensional rules could not explain age declines. Instead, computational modeling of the data indicated that older adults utilized fewer conceptual clusters of items in memory than did young adults. PMID:28816474
A least-squares finite element method for incompressible Navier-Stokes problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1992-01-01
A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than to a saddle-point problem, as in the classic mixed method, and can thus accommodate equal-order interpolations. The method has no parameter to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds numbers up to 10,000 and the backward-facing step flow at Reynolds numbers up to 900 are presented.
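For orientation, the first-order system and the least-squares functional behind this formulation can be written schematically as follows (a sketch consistent with the abstract, not a transcription of the paper's exact weighting):

```latex
% Velocity-pressure-vorticity form of the steady incompressible Navier-Stokes
% equations and the least-squares functional minimized over equal-order spaces:
\begin{aligned}
&\boldsymbol{u}\cdot\nabla\boldsymbol{u} + \nabla p + \nu\,\nabla\times\boldsymbol{\omega} = \boldsymbol{f},
\qquad \nabla\cdot\boldsymbol{u} = 0,
\qquad \boldsymbol{\omega} - \nabla\times\boldsymbol{u} = \boldsymbol{0},\\[4pt]
&J(\boldsymbol{u},p,\boldsymbol{\omega})
 = \left\|\boldsymbol{u}\cdot\nabla\boldsymbol{u} + \nabla p + \nu\,\nabla\times\boldsymbol{\omega} - \boldsymbol{f}\right\|_{0}^{2}
 + \left\|\nabla\cdot\boldsymbol{u}\right\|_{0}^{2}
 + \left\|\boldsymbol{\omega} - \nabla\times\boldsymbol{u}\right\|_{0}^{2}.
\end{aligned}
```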
High-frequency sum rules for the quasi-one-dimensional quantum plasma dielectric tensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genga, R.O.
A high-frequency sum-rule expansion is derived for all elements of the spinless quasi-one-dimensional quantum plasma response tensor at T = 0 K. As in the magnetized classical plasmas, we find that Ω₄¹³ is the only coefficient of ω⁻⁴ that has no correlational term. Further, we find that the correlations either enhance or reduce the negative quantum dispersion, depending on the direction of propagation. It is also noted that the quantum effect does not exist for the ordinary and the extraordinary modes for perpendicular and parallel propagation, respectively.
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for the explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: a random variation that serves as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure. This would mean that the random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations that form yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
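A sketch of the classical van der Waals mixing rules with a binary interaction parameter, using the standard Peng-Robinson pure-component relations; the example mixture and the kij value are placeholders (predicting kij is precisely what the correlation developed in the paper addresses).

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_pure_ab(Tc, Pc, omega, T):
    """Pure-component Peng-Robinson parameters a_i(T) and b_i."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mixing(x, a, b, kij):
    """Classical van der Waals one-fluid mixing rules with interaction parameters kij."""
    x, a, b = map(np.asarray, (x, a, b))
    a_mix = np.sum(np.outer(x, x) * np.sqrt(np.outer(a, a)) * (1.0 - kij))
    b_mix = float(x @ b)
    return a_mix, b_mix

# Illustrative binary (methane + n-butane at 300 K); the kij value is a placeholder.
T = 300.0
a1, b1 = pr_pure_ab(Tc=190.6, Pc=4.599e6, omega=0.011, T=T)
a2, b2 = pr_pure_ab(Tc=425.1, Pc=3.796e6, omega=0.200, T=T)
kij = np.array([[0.0, 0.02], [0.02, 0.0]])
a_mix, b_mix = vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2], kij)
```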
Modeling NDT piezoelectric ultrasonic transmitters.
San Emeterio, J L; Ramos, A; Sanz, P T; Ruíz, A; Azbaid, A
2004-04-01
Ultrasonic NDT applications are frequently based on the spike excitation of piezoelectric transducers by means of efficient pulsers which usually include a power switching device (e.g. SCR or MOS-FET) and some rectifier components. In this paper we present an approximate frequency domain electro-acoustic model for pulsed piezoelectric ultrasonic transmitters which, by integrating partial models of the different stages (driving electronics, tuning/matching networks and broadband piezoelectric transducer), allows the computation of the emission transfer function and output force temporal waveform. An approximate frequency domain model is used for the evaluation of the electrical driving pulse from the spike generator. Tuning circuits, interconnecting cable and mechanical impedance matching layers are modeled by means of transmission lines and the classical quadripole approach. The KLM model is used for the piezoelectric transducer. In addition, a PSPICE scheme is used for an alternative simulation of the broadband driving spike, including the accurate evaluation of non-linear driving effects. Several examples illustrate the capabilities of the specifically developed software.
Levels of integration in cognitive control and sequence processing in the prefrontal cortex.
Bahlmann, Jörg; Korb, Franziska M; Gratton, Caterina; Friederici, Angela D
2012-01-01
Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex.
FAF-Drugs2: free ADME/tox filtering tool to assist drug discovery and chemical biology projects.
Lagorce, David; Sperandio, Olivier; Galons, Hervé; Miteva, Maria A; Villoutreix, Bruno O
2008-09-24
Drug discovery and chemical biology are exceedingly complex and demanding enterprises. In recent years there has been increasing awareness about the importance of predicting/optimizing the absorption, distribution, metabolism, excretion and toxicity (ADMET) properties of small chemical compounds along the search process rather than at the final stages. Fast methods for evaluating ADMET properties of small molecules often involve applying a set of simple empirical rules (educated guesses), and as such, compound collections' property profiling can be performed in silico. Clearly, these rules cannot assess the full complexity of the human body but can provide valuable information and assist decision-making. This paper presents FAF-Drugs2, a free adaptable tool for ADMET filtering of electronic compound collections. FAF-Drugs2 is a command line utility program (e.g., written in Python) based on the open source chemistry toolkit OpenBabel, which performs various physicochemical calculations and identifies key functional groups, some toxic and unstable molecules/functional groups. In addition to filtered collections, FAF-Drugs2 can provide, via Gnuplot, several distribution diagrams of major physicochemical properties of the screened compound libraries. We have developed FAF-Drugs2 to facilitate compound collection preparation, prior to (or after) experimental screening or virtual screening computations. Users can select to apply various filtering thresholds and add rules as needed for a given project. As it stands, FAF-Drugs2 implements numerous filtering rules (23 physicochemical rules and 204 substructure searching rules) that can be easily tuned.
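A minimal sketch of threshold-based rule filtering over precomputed properties. The property names and Lipinski-style cut-offs below are illustrative placeholders; FAF-Drugs2 itself computes properties via OpenBabel and applies a much larger, user-tunable set of physicochemical and substructure rules.

```python
# Illustrative rule set; not the actual FAF-Drugs2 rules.
RULES = {
    "mw":        lambda v: v <= 500,      # molecular weight
    "logp":      lambda v: v <= 5,        # lipophilicity
    "hbd":       lambda v: v <= 5,        # H-bond donors
    "hba":       lambda v: v <= 10,       # H-bond acceptors
    "rot_bonds": lambda v: v <= 10,       # flexibility
}

def filter_library(compounds, rules=RULES, max_violations=1):
    """Keep compounds violating at most `max_violations` rules."""
    kept, rejected = [], []
    for name, props in compounds.items():
        violations = [r for r, ok in rules.items() if not ok(props[r])]
        (kept if len(violations) <= max_violations else rejected).append((name, violations))
    return kept, rejected

library = {
    "cpd_A": {"mw": 342.4, "logp": 2.1, "hbd": 2, "hba": 5, "rot_bonds": 4},
    "cpd_B": {"mw": 687.9, "logp": 6.3, "hbd": 6, "hba": 12, "rot_bonds": 14},
}
kept, rejected = filter_library(library)
```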
Quantum formalism for classical statistics
NASA Astrophysics Data System (ADS)
Wetterich, C.
2018-06-01
In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore the superposition principle for the possible solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Also other features characteristic of quantum mechanics, such as complex structure, change of basis or symmetry transformations, can be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is not a matter of principle, but rather of practical simplicity.
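A toy illustration of the boundary-to-bulk transport described above, using the one-dimensional Ising chain: the classical "wave function" at a site is obtained by evolving a boundary vector with the transfer matrix, and local expectation values follow from an operator insertion. Couplings and boundary vectors are placeholders.

```python
import numpy as np

beta, J, h = 1.0, 0.7, 0.1          # illustrative couplings
s = np.array([+1.0, -1.0])          # spin basis

# Transfer matrix between neighbouring sites and the local spin operator
T = np.exp(beta * (J * np.outer(s, s) + 0.5 * h * (s[:, None] + s[None, :])))
S = np.diag(s)

def local_magnetisation(N, t, left=np.ones(2), right=np.ones(2)):
    """<s_t> for an open chain of N sites with boundary vectors left/right.

    The row vector left @ T^(t-1) plays the role of the classical 'wave
    function' carrying the boundary information to site t.
    """
    psi_left = left @ np.linalg.matrix_power(T, t - 1)
    psi_right = np.linalg.matrix_power(T, N - t) @ right
    Z = psi_left @ psi_right                           # partition function
    return (psi_left @ S @ psi_right) / Z

print(local_magnetisation(N=20, t=10))
```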
2008-03-01
in subject areas that rely mostly on intuition, like marketing, sales, and customer relationship management (Berry and Linoff, 2004). Commonly... closely related to this study might be Amazon or iTunes' use of market basket analysis. Today, most e-commerce consumers are accustomed to receiving... sales is to minimize the costs and hassle of warranty-related repairs and replacements. Of course, the best way to minimize those liabilities is to
Quantum-Classical Connection for Hydrogen Atom-Like Systems
ERIC Educational Resources Information Center
Syam, Debapriyo; Roy, Arup
2011-01-01
The Bohr-Sommerfeld quantum theory specifies the rules of quantization for circular and elliptical orbits for a one-electron hydrogen atom-like system. This article illustrates how a formula connecting the principal quantum number "n" and the length of the major axis of an elliptical orbit may be arrived at starting from the quantum…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-10
..., 2009) (SR-NYSEArca-2009-83) (order approving listing of Grail American Beacon International Equity ETF... appreciation above international benchmarks, such as the BNY Mellon Classic ADR Index and the MSCI EAFE Index... process include demographics, global commerce, outsourcing, the growing global middle class and the...
Understanding Federal regulations as guidelines for classical biological control programs
Michael E. Montgomery
2011-01-01
This chapter reviews the legislation and rules that provide the foundation for federal regulation of the introduction of natural enemies of insects as biological control agents. It also outlines the steps for complying with regulatory requirements, using biological control of Adelges tsugae Annand, the hemlock woolly adelgid (HWA), as an example. The...
Capitalism in Six Westerns by John Ford
ERIC Educational Resources Information Center
Braun, Carlos Rodriguez
2011-01-01
The economic and institutional analysis of capitalism can be illustrated through John Ford's Westerns. This article focuses on six classics by Ford that show the move toward modern order, the creation of a new society, and the rule of law. Economic features are pervading, from property rights and contracts to markets, money, and trade. Ford has…
Intervention Strategies for the Child with Prenatal Drug Exposure.
ERIC Educational Resources Information Center
Cole, Jean Gardner
The behavior of the infant with prenatal drug exposure (PDE) is different from a nonexposed infant, and it is a difference that changes the rules of interaction for the caregiver. Infants exposed to opiates such as heroin or methadone demonstrate very specific signs of neurobehavioral dysfunction as they go through classic withdrawal symptoms.…
High-order harmonic generation from highly excited states in acetylene
NASA Astrophysics Data System (ADS)
Mulholland, Peter; Dundas, Daniel
2018-04-01
High-order harmonic generation (HHG) from aligned acetylene molecules interacting with mid infra-red (IR), linearly polarized laser pulses is studied theoretically using a mixed quantum-classical approach in which the electrons are described using time-dependent density-functional theory while the ions are treated classically. We find that for molecules aligned perpendicular to the laser polarization axis, HHG arises from the highest-occupied molecular orbital (HOMO), while for molecules aligned along the laser polarization axis, HHG is dominated by the HOMO-1. In the parallel orientation we observe a double plateau with an inner plateau that is produced by ionization from and recombination back to an autoionizing state. Two pieces of evidence support this idea. First, by choosing a suitably tuned vacuum ultraviolet pump pulse that directly excites the autoionizing state we observe a dramatic enhancement of all harmonics in the inner plateau. Second, in certain circumstances, the position of the inner plateau cutoff does not agree with the classical three-step model. We show that this discrepancy can be understood in terms of a minimum in the dipole recombination matrix element from the continuum to the autoionizing state.
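The classical three-step estimate referred to in the abstract can be checked in a few lines; the ponderomotive-energy formula is standard, while the laser intensity and the ionisation potential below are only illustrative numbers.

```python
def harmonic_cutoff(intensity_wcm2, wavelength_nm, ip_ev):
    """Classical three-step-model cutoff (in eV and in harmonic order).

    Cutoff energy = Ip + 3.17 * Up, with Up the ponderomotive energy.
    Inputs: peak intensity in W/cm^2, wavelength in nm, ionisation potential in eV.
    """
    up_ev = 9.33e-14 * intensity_wcm2 * (wavelength_nm * 1e-3) ** 2   # Up in eV (lambda in um)
    cutoff_ev = ip_ev + 3.17 * up_ev
    photon_ev = 1239.84 / wavelength_nm
    return cutoff_ev, cutoff_ev / photon_ev

# Illustrative mid-IR parameters; acetylene Ip is roughly 11.4 eV
print(harmonic_cutoff(intensity_wcm2=1e14, wavelength_nm=1600, ip_ev=11.4))
```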
Klok, C Jaco; Harrison, Jon F
2013-10-01
Temperature is a key factor that affects the rates of growth and development in animals, which ultimately determine body size. Although not universal, a widely documented and poorly understood pattern is the inverse relationship between the temperature at which an ectothermic animal is reared and its body size (temperature size rule [TSR]). The proximate and ultimate mechanisms for the TSR remain unclear. To explore possible explanations for the TSR, we tested for correlations between the magnitude/direction of the TSR and latitude, temperature, elevation, habitat, availability of oxygen, capacity for flight, and taxonomic grouping in 98 species/populations of arthropods. The magnitude and direction of the TSR were not correlated with any of the macro-environmental variables we examined, supporting the generality of the TSR. However, body size affected the magnitude and direction of the TSR, with smaller arthropods more likely to demonstrate a classic TSR. Considerable variation among species exists in the TSR, suggesting either strong interactions with nutrition, or selection based on microclimatic or seasonal variation not captured in classic macro-environmental variables.
Toropova, Alla P; Toropov, Andrey A
2013-11-01
The increasing use of nanomaterials incorporated into consumer products leads to the need to develop approaches for establishing "quantitative structure-activity relationships" (QSARs) for various nanomaterials. However, the molecular structure, as a rule, is not available for nanomaterials, at least in its classic meaning. A possible alternative to classic QSAR (based on the molecular structure) is the use of data on the physicochemical features of TiO(2) nanoparticles. The damage to cellular membranes (units L(-1)) caused by various TiO(2) nanoparticles is examined as the endpoint. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rules of competitive stimulus selection in a cholinergic isthmic nucleus of the owl midbrain
Asadollahi, Ali; Mysore, Shreesh P.; Knudsen, Eric I.
2011-01-01
In a natural scene, multiple stimuli compete for the control of gaze direction and attention. The nucleus isthmi pars parvocellularis (Ipc) is a cholinergic midbrain nucleus that is reciprocally interconnected with the optic tectum, a structure known to be involved in the control of gaze and attention. Previous research has shown that the responses of many Ipc units to a visual stimulus presented inside the classical receptive field (RF) can be powerfully inhibited when a distant, competing stimulus becomes the stronger of the two. This study investigated further the nature of competitive interactions in the Ipc of owls by employing two complementary protocols: in the first protocol, we measured the effects of a distant stimulus on responses to an RF stimulus located at different positions inside the RF; in the second protocol, we measured the effects of a distant stimulus on responses to RF stimuli of different strengths. The first protocol demonstrated that the effect of a competing stimulus is purely divisive: the competitor caused a proportional reduction in responses to the RF stimulus that did not alter either the location or sharpness of spatial tuning. The second protocol demonstrated that, for most units, the strength of this divisive inhibition is regulated powerfully by the relative strengths of the competing stimuli: inhibition was strong when the competitor was the stronger stimulus and weak when the competitor was the weaker stimulus. The data indicate that competitive interactions in the Ipc depend on feedback and a globally divisive inhibitory network. PMID:21508234
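A sketch of what a purely divisive competitor effect means in practice: the spatial tuning curve to the RF stimulus is multiplied by a single factor set by the relative strengths of the two stimuli, so its peak location and width are untouched. The Gaussian tuning curve and the logistic dependence on relative strength are assumptions, not the fitted model of the paper.

```python
import numpy as np

def rf_tuning(x, pref=0.0, width=10.0, gain=50.0):
    """Spatial tuning (spikes/s) to a stimulus at position x inside the RF."""
    return gain * np.exp(-0.5 * ((x - pref) / width) ** 2)

def divisive_factor(strength_rf, strength_comp, slope=0.2):
    """Multiplicative suppression in (0, 1): near 0 when the competitor is much
    stronger, near 1 when it is much weaker (assumed logistic dependence)."""
    return 1.0 / (1.0 + np.exp(slope * (strength_comp - strength_rf)))

x = np.linspace(-30.0, 30.0, 61)
for comp_strength in (2.0, 8.0, 14.0):
    response = rf_tuning(x) * divisive_factor(strength_rf=8.0, strength_comp=comp_strength)
    # Division rescales the whole curve; tuning peak and sharpness are unchanged.
```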
Electrokinetic mechanism of wettability alteration at oil-water-rock interface
NASA Astrophysics Data System (ADS)
Tian, Huanhuan; Wang, Moran
2017-12-01
Design of ions for injection water may change the wettability of the oil-brine-rock (OBR) system, which has very important applications in enhanced oil recovery. Though ion-tuned wettability has been verified by various experiments, the mechanism is still not clear. In this review paper, we first present a comprehensive summary of possible wettability alteration mechanisms, including fines migration or dissolution, multicomponent ion exchange (MIE), electrical double layer (EDL) interaction between rock and oil, and repulsive hydration force. To clarify the key mechanism, we introduce a complete framework of theories to calculate the contribution of EDL repulsion to wettability alteration by assuming constant binding forces (no MIE) and rigid smooth surfaces (no fines migration or dissolution). The framework consists of three parts: the classical Gouy-Chapman model coupled with interface charging mechanisms to describe the EDL in oil-brine-rock systems, three methods with different boundary assumptions to evaluate the EDL interaction energy, and the modified Young-Dupré equation to link the EDL interaction energy with the contact angle. The quantitative analysis for two typical oil-brine-rock systems provides two physical maps that show how the EDL interaction influences the contact angle at different ionic compositions. The results indicate that the contribution of the EDL interaction to ion-tuned wettability for the studied systems is not very significant. The classical and advanced experimental work using microfabrication is reviewed briefly with regard to the contribution of EDL repulsion to wettability alteration and compared with the theoretical results. It is indicated that the roughness of real rock surfaces may enhance the EDL interaction. Finally, we discuss some pending questions, perspectives and promising applications based on the mechanism.
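As a sketch of the Gouy-Chapman ingredients in this framework, the snippet below computes the Debye length and the weak-overlap EDL interaction energy per unit area between two flat constant-potential surfaces (oil and rock) across a thin brine film. The surface potentials, film thickness, and salinity are illustrative, and the modified Young-Dupré link to the contact angle is only indicated in a comment because the review's exact form is not reproduced here.

```python
import numpy as np

# Physical constants (SI)
kB, e, NA, eps0 = 1.380649e-23, 1.602177e-19, 6.02214e23, 8.854188e-12

def debye_length(c_mol_per_L, z=1, T=298.15, eps_r=78.5):
    """Debye screening length for a symmetric z:z electrolyte."""
    n0 = c_mol_per_L * 1e3 * NA                      # number density, m^-3
    return np.sqrt(eps_r * eps0 * kB * T / (2 * z**2 * e**2 * n0))

def edl_energy(D, psi1, psi2, c_mol_per_L, z=1, T=298.15):
    """Weak-overlap EDL interaction energy per unit area (J/m^2) between two
    flat constant-potential surfaces separated by a water film of thickness D."""
    lam = debye_length(c_mol_per_L, z, T)
    n0 = c_mol_per_L * 1e3 * NA
    g1 = np.tanh(z * e * psi1 / (4 * kB * T))
    g2 = np.tanh(z * e * psi2 / (4 * kB * T))
    return 64 * n0 * kB * T * lam * g1 * g2 * np.exp(-D / lam)

# Example: oil (-30 mV) and rock (-20 mV) surfaces across a 5 nm film of 0.1 M brine.
W = edl_energy(D=5e-9, psi1=-0.030, psi2=-0.020, c_mol_per_L=0.1)
# A modified Young-Dupre relation (as used in the review) would then shift cos(theta)
# by a term proportional to this interaction energy over the oil-brine interfacial
# tension; that exact linkage is not reproduced here.
```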
Induced and evoked neural correlates of orientation selectivity in human visual cortex.
Koelewijn, Loes; Dumont, Julie R; Muthukumaraswamy, Suresh D; Rich, Anina N; Singh, Krish D
2011-02-14
Orientation discrimination is much better for patterns oriented along the horizontal or vertical (cardinal) axes than for patterns oriented obliquely, but the neural basis for this is not known. Previous animal neurophysiology and human neuroimaging studies have demonstrated only a moderate bias for cardinal versus oblique orientations, with fMRI showing a larger response to cardinals in primary visual cortex (V1) and EEG demonstrating both increased magnitudes and reduced latencies of transient evoked responses. Here, using MEG, we localised and characterised induced gamma and transient evoked responses to stationary circular grating patches of three orientations (0, 45, and 90° from vertical). Surprisingly, we found that the sustained gamma response was larger for oblique, compared to cardinal, stimuli. This "inverse oblique effect" was also observed in the earliest (80 ms) evoked response, whereas later responses (120 ms) showed a trend towards the reverse, "classic", oblique response. Source localisation demonstrated that the sustained gamma and early evoked responses were localised to medial visual cortex, whilst the later evoked responses came from both this early visual area and a source in a more inferolateral extrastriate region. These results suggest that (1) the early evoked and sustained gamma responses manifest the initial tuning of V1 neurons, with the stronger response to oblique stimuli possibly reflecting increased tuning widths for these orientations, and (2) the classic behavioural oblique effect is mediated by an extrastriate cortical area and may also implicate feedback from extrastriate to primary visual cortex. Copyright © 2010 Elsevier Inc. All rights reserved.
Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA
Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.
2017-01-01
The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problem in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings) so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rule implementation is non-deterministic. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for to quote Richard Feynman 'there's plenty of room at the bottom'. This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
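An in-silico illustration of the nondeterministic branching such a machine explores in parallel: every position where either side of a bidirectional Thue rule matches spawns a new string, and the computation tree is enumerated breadth-first. The rule set is a toy example, not the paper's DNA encoding.

```python
from collections import deque

def thue_step(word, rules):
    """All words reachable from `word` by one application of any rule (l <-> r),
    at any position -- the nondeterministic branching a NUTM explores in parallel."""
    out = set()
    for l, r in rules:
        for a, b in ((l, r), (r, l)):                 # Thue rules are bidirectional
            start = word.find(a)
            while start != -1:
                out.add(word[:start] + b + word[start + len(a):])
                start = word.find(a, start + 1)
    return out

def reachable(word, rules, max_steps):
    """Breadth-first enumeration of the computation tree up to max_steps rewrites."""
    seen, frontier = {word}, deque([(word, 0)])
    while frontier:
        w, d = frontier.popleft()
        if d == max_steps:
            continue
        for nxt in thue_step(w, rules):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

# Toy rule set (illustrative, not the paper's encoding)
rules = [("ab", "ba"), ("aa", "b")]
print(sorted(reachable("aab", rules, max_steps=3)))
```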
A films based approach to intensity imbalance correction for 65nm node c:PSM
NASA Astrophysics Data System (ADS)
Cottle, Rand; Sixt, Pierre; Lassiter, Matt; Cangemi, Marc; Martin, Patrick; Progler, Chris
2005-11-01
Intensity imbalance between the 0 and π phase features of c:PSM causes gate CD control and edge placement problems. Strategies such as undercut, selective biasing, and combinations of undercut and bias are currently used in production to mitigate these problems. However, there are drawbacks to these strategies, such as space CD delta through pitch, gate CD control through defocus, design rule restrictions, and reticle manufacturability. This paper investigates the application of an innovative films-based approach to intensity balancing known as the Transparent Etch Stop Layer (TESL). TESL, in addition to providing a host of reticle quality and manufacturability benefits, can also be tuned to significantly reduce imbalance. Rigorous 3D vector simulations and experimental data compare the through-pitch and defocus performance of TESL and conventional c:PSM for 65nm design rules.
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just the global state and the fuzzy error.
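A minimal sketch of the adaptation idea under stated assumptions: a scalar fuzzy error is shared among the rules in proportion to their firing strengths, and each rule shifts the centre of its conclusion accordingly (antecedent tuning and the paper's precise error definition are omitted).

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership value of x for the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyRule:
    def __init__(self, antecedent, consequent_centre):
        self.ant = tuple(antecedent)         # (a, b, c) of the input fuzzy set
        self.con = consequent_centre         # centre of the output fuzzy set

def control_and_adapt(rules, state, fuzzy_error, eta=0.05):
    """One control step with fuzzy-error-driven tuning of the rule base.

    fuzzy_error > 0 means the control action should have been larger.
    Each rule takes a share of the error proportional to its firing strength
    and shifts its consequent centre (a simplified version of the scheme)."""
    strengths = np.array([tri(state, *r.ant) for r in rules])
    total = strengths.sum() + 1e-12
    action = float(strengths @ np.array([r.con for r in rules]) / total)  # weighted-average defuzzification
    for r, s in zip(rules, strengths):
        r.con += eta * (s / total) * fuzzy_error       # rule-wise share of the fuzzy error
    return action

rules = [FuzzyRule((-2, -1, 0), -1.0), FuzzyRule((-1, 0, 1), 0.0), FuzzyRule((0, 1, 2), 1.0)]
u = control_and_adapt(rules, state=0.3, fuzzy_error=0.4)
```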
Optical Selection Rule of Excitons in Gapped Chiral Fermion Systems
NASA Astrophysics Data System (ADS)
Zhang, Xiaoou; Shan, Wen-Yu; Xiao, Di
2018-02-01
We show that the exciton optical selection rule in gapped chiral fermion systems is governed by their winding number w, a topological quantity of the Bloch bands. Specifically, in a C_N-invariant chiral fermion system, the angular momentum of bright exciton states is given by w ± 1 + nN with n being an integer. We demonstrate our theory by proposing two chiral fermion systems capable of hosting dark s-like excitons: gapped surface states of a topological crystalline insulator with C4 rotational symmetry and biased 3R-stacked MoS2 bilayers. In the latter case, we show that gating can be used to tune the s-like excitons from bright to dark by changing the winding number. Our theory thus provides a pathway to electrical control of optical transitions in two-dimensional materials.
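The stated rule can be enumerated directly, since angular momentum is only defined modulo N under C_N symmetry; the helper below lists the bright channels for a given winding number (the w = 2, N = 4 numbers are just an example).

```python
def bright_exciton_channels(w, N):
    """Angular momenta (mod N) of optically bright exciton states for a
    C_N-symmetric gapped chiral fermion system with winding number w,
    following the rule l = w +/- 1 + n*N quoted above."""
    return sorted({(w + 1) % N, (w - 1) % N})

print(bright_exciton_channels(w=2, N=4))   # -> [1, 3]
```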
Band-selective filter in a zigzag graphene nanoribbon.
Nakabayashi, Jun; Yamamoto, Daisuke; Kurihara, Susumu
2009-02-13
Electric transport of a zigzag graphene nanoribbon through a steplike potential and a barrier potential is investigated by using the recursive Green's function method. In the case of the steplike potential, we demonstrate numerically that scattering processes obey a selection rule for the band indices when the number of zigzag chains is even; the electrons belonging to the "even" ("odd") bands are scattered only into the even (odd) bands so that the parity of the wave functions is preserved. In the case of the barrier potential, by tuning the barrier height to be an appropriate value, we show that it can work as the "band-selective filter", which transmits electrons selectively with respect to the indices of the bands to which the incident electrons belong. Finally, we suggest that this selection rule can be observed in the conductance by applying two barrier potentials.
Tuning and predicting the wetting of nanoengineered material surface
NASA Astrophysics Data System (ADS)
Ramiasa-MacGregor, M.; Mierczynska, A.; Sedev, R.; Vasilev, K.
2016-02-01
The wetting of a material can be tuned by changing the roughness on its surface. Recent advances in the field of nanotechnology open exciting opportunities to control macroscopic wetting behaviour. Yet, the benchmark theories used to describe the wettability of macroscopically rough surfaces fail to fully describe the wetting behaviour of systems with topographical features at the nanoscale. To shed light on the events occurring at the nanoscale we have utilised model gradient substrata where surface nanotopography was tailored in a controlled and robust manner. The intrinsic wettability of the coatings was varied from hydrophilic to hydrophobic. The measured water contact angle could not be described by the classical theories. We developed an empirical model that effectively captures the experimental data, and further enables us to predict the wetting of surfaces with nanoscale roughness by considering the physical and chemical properties of the material. The fundamental insights presented here are important for the rational design of advanced materials having tailored surface nanotopography with predictable wettability. Electronic supplementary information (ESI) available: Detailed characterization of the nanorough substrates and model derivation. See DOI: 10.1039/c5nr08329j
Quantum probabilities from quantum entanglement: experimentally unpacking the Born rule
Harris, Jérémie; Bouchard, Frédéric; Santamato, Enrico; ...
2016-05-11
The Born rule, a foundational axiom used to deduce probabilities of events from wavefunctions, is indispensable in the everyday practice of quantum physics. It is also key in the quest to reconcile the ostensibly inconsistent laws of the quantum and classical realms, as it confers physical significance to reduced density matrices, the essential tools of decoherence theory. Following Bohr's Copenhagen interpretation, textbooks postulate the Born rule outright. But, recent attempts to derive it from other quantum principles have been successful, holding promise for simplifying and clarifying the quantum foundational bedrock. Moreover, a major family of derivations is based on envariance, a recently discovered symmetry of entangled quantum states. Here, we identify and experimentally test three premises central to these envariance-based derivations, thus demonstrating, in the microworld, the symmetries from which the Born rule is derived. Furthermore, we demonstrate envariance in a purely local quantum system, showing its independence from relativistic causality.
Learning and transfer of category knowledge in an indirect categorization task.
Helie, Sebastien; Ashby, F Gregory
2012-05-01
Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm to study category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked of the participants is about stimulus category membership rather than stimulus identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.
NASA Technical Reports Server (NTRS)
Yen, John; Wang, Haojin; Daugherity, Walter C.
1992-01-01
Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control, including easier implementation, accommodation to natural language, and the ability to cover a wider range of operating conditions. One major obstacle that hinders the broader application of fuzzy logic controllers is the lack of a systematic way to develop and modify their rules; as a result the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is a self-learning fuzzy logic controller (SFLC) that uses reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of its fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of a SFLC for application to a petrochemical process are discussed, and its performance is compared with that of a PID and a self-tuning fuzzy logic controller.
NASA Astrophysics Data System (ADS)
Sardet, Laure; Patilea, Valentin
When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture modeling, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of the quantiles with simulated nonnegative data and the quantiles of the individual claims distribution in a non-life insurance application.
Proposal for founding mistrustful quantum cryptography on coin tossing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent, Adrian; Hewlett-Packard Laboratories, Filton Road, Stoke Gifford, Bristol BS34 8QZ
2003-07-01
A significant branch of classical cryptography deals with the problems which arise when mistrustful parties need to generate, process, or exchange information. As Kilian showed a while ago, mistrustful classical cryptography can be founded on a single protocol, oblivious transfer, from which general secure multiparty computations can be built. The scope of mistrustful quantum cryptography is limited by no-go theorems, which rule out, inter alia, unconditionally secure quantum protocols for oblivious transfer or general secure two-party computations. These theorems apply even to protocols which take relativistic signaling constraints into account. The best that can be hoped for, in general, are quantum protocols which are computationally secure against quantum attack. Here a method is described for building a classically certified bit commitment, and hence every other mistrustful cryptographic task, from a secure coin-tossing protocol. No security proof is attempted, but reasons are sketched why these protocols might resist quantum computational attack.
Quantization of spinor fields. III. Fermions on coherent (Bose) domains
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr
1983-02-01
A formulation of the c-number classics-quanta correspondence rule for spinor systems requires all elements of the quantum field algebra to be expanded into power series with respect to the generators of the canonical commutation relation (CCR) algebra. On the other hand, the asymptotic completeness demand would result in the (Haag) expansions with respect to the canonical anticommutation relation (CAR) generators. We establish the conditions under which the above correspondence rule can be reconciled with the existence of Haag expansions in terms of asymptotic free Fermi fields. Then, the CAR become represented on the state space of the Bose (CCR) system.
Path integrals, the ABL rule and the three-box paradox
NASA Astrophysics Data System (ADS)
Sokolovski, D.; Puerto Giménez, I.; Sala Mayato, R.
2008-10-01
The three-box problem is analysed in terms of virtual pathways, interference between which is destroyed by a number of intermediate measurements. The Aharonov-Bergmann-Lebowitz (ABL) rule is shown to be a particular case of Feynman's recipe for assigning probabilities to exclusive alternatives. The ‘paradoxical’ features of the three box case arise in an attempt to attribute, in contradiction to the uncertainty principle, properties pertaining to different ensembles produced by different intermediate measurements to the same particle. The effect can be mimicked by a classical system, provided an observation is made to perturb the system in a non-local manner.
Dynamic Cross Domain Information Sharing - A Concept Paper on Flexible Adaptive Policy Management
2010-10-01
The "no read-up, no write-down" rule of the classical Bell-La Padula [1] model is becoming untenable because of the increasing need to seamlessly handle … [1] D. Elliott Bell, "Looking Back at the Bell-La Padula Model," Washington, DC, USA, 2005. [2] DISA NCES Website, Jan. 2009. [Online]. http://www.disa.mil
The Proper Sequence for Correcting Correlation Coefficients for Range Restriction and Unreliability.
ERIC Educational Resources Information Center
Stauffer, Joseph M.; Mendoza, Jorge L.
2001-01-01
Uses classical test theory to show that it is the nature of the range restriction, rather than the nature of the available reliability coefficient, that determines the sequence for applying corrections for range restriction and unreliability. Shows how the common rule of thumb for choosing the sequence is tenable only when the correction does not…
Does order matter? Investigating the effect of sequence on glance duration during on-road driving
Roberts, Shannon C.; Reimer, Bryan; Mehler, Bruce
2017-01-01
Previous literature has shown that vehicle crash risk increases as drivers' off-road glance duration increases. Many factors influence drivers' glance duration, such as individual differences, driving environment, or task characteristics. Theories and past studies suggest that glance duration increases as the task progresses, but the exact relationship between glance sequence and glance duration is not fully understood. The purpose of this study was to examine the effect of glance sequence on glance duration among drivers completing a visual-manual radio tuning task and an auditory-vocal based multi-modal navigation entry task. Eighty participants drove a vehicle on urban highways while completing radio tuning and navigation entry tasks. Forty participants drove under an experimental protocol that required three button presses followed by rotation of a tuning knob to complete the radio tuning task, while the other forty participants completed the task with one less button press. Multiple statistical analyses were conducted to measure the effect of glance sequence on glance duration. Results showed that across both tasks and a variety of statistical tests, glance sequence had inconsistent effects on glance duration; the effects varied according to the number of glances, task type, and data set being evaluated. Results suggest that other aspects of the task as well as interface design affect glance duration and should be considered in the context of examining driver attention or lack thereof. All in all, interface design and task characteristics have a more influential impact on glance duration than glance sequence, suggesting that classical design considerations impacting driver attention, such as the size and location of buttons, remain fundamental in designing in-vehicle interfaces. PMID:28158301
Automatic information extraction from unstructured mammography reports using distributed semantics.
Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L
2018-02-01
To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based and, therefore, require substantial manual effort to build. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.
Building gene expression profile classifiers with a simple and efficient rejection option in R.
Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco; Savino, Alessandro; Hafeezurrehman, Hafeez
2011-01-01
The collection of gene expression profiles from DNA microarrays and their analysis with pattern recognition algorithms is a powerful technology applied to several biological problems. Common pattern recognition systems classify samples by assigning them to a set of known classes. However, in a clinical diagnostics setup, novel and unknown classes (new pathologies) may appear, and one must be able to reject those samples that do not fit the trained model. The problem of implementing a rejection option in a multi-class classifier has not been widely addressed in the statistical literature. Gene expression profiles represent a critical case study since they suffer from the curse of dimensionality, which negatively affects the reliability of both traditional rejection models and more recent approaches such as one-class classifiers. This paper presents a set of empirical decision rules that can be used to implement a rejection option in a set of multi-class classifiers widely used for the analysis of gene expression profiles. In particular, we focus on the classifiers implemented in the R Language and Environment for Statistical Computing (R for short in the remainder of this paper). The main contribution of the proposed rules is their simplicity, which enables an easy integration with available data analysis environments. Since tuning the parameters involved in defining a rejection model is often a complex and delicate task, we exploit an evolutionary strategy to automate this process. This allows the final user to maximize the rejection accuracy with minimum manual intervention. This paper shows how simple decision rules can help bring complex machine learning algorithms into real experimental setups. The proposed approach is almost completely automated and is therefore a good candidate for integration into data analysis flows in labs where the machine learning expertise required to tune traditional classifiers might not be available.
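As a rough illustration of what a rejection option looks like in practice, the sketch below thresholds the posterior probability of a generic probabilistic classifier and tunes the threshold on a validation set. It is a minimal stand-in written in Python with scikit-learn, not the authors' R decision rules or their evolutionary strategy; the classifier, threshold grid, and scoring trade-off are all assumptions.

```python
# Minimal sketch of a rejection option for a multi-class classifier.
# Assumption: a probabilistic classifier is available (scikit-learn used here);
# the threshold rule below is a generic stand-in for the paper's R-based rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_with_rejection(clf, X, threshold=0.7, reject_label=-1):
    """Return predicted labels, replacing low-confidence predictions with reject_label."""
    proba = clf.predict_proba(X)
    labels = clf.classes_[np.argmax(proba, axis=1)]
    confident = proba.max(axis=1) >= threshold
    return np.where(confident, labels, reject_label)

def tune_threshold(clf, X_val, y_val, candidates=np.linspace(0.5, 0.95, 10)):
    """Pick the threshold that maximizes accuracy on accepted samples, penalized by
    the rejection fraction (a crude stand-in for the evolutionary tuning above)."""
    best_t, best_score = candidates[0], -np.inf
    for t in candidates:
        pred = classify_with_rejection(clf, X_val, threshold=t)
        accepted = pred != -1
        if accepted.sum() == 0:
            continue
        acc = np.mean(pred[accepted] == y_val[accepted])
        score = acc - 0.5 * (1 - accepted.mean())  # trade accuracy against coverage
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Example usage with synthetic "expression profiles"
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 3, size=300)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:200], y[:200])
print("chosen threshold:", tune_threshold(clf, X[200:], y[200:]))
```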
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. As such, an adaptive fuzzy logic controller was developed in which neither a model structure nor parameter constraints are required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed requiring no system model or model structure. The rule-base is defined to approximate a state-feedback controller while a second fuzzy logic algorithm varies, on-line, the parameters of the defining controller. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for application of other, more classical control techniques.
Generation of Nonclassical Biphoton States through Cascaded Quantum Walks on a Nonlinear Chip
NASA Astrophysics Data System (ADS)
Solntsev, Alexander S.; Setzpfandt, Frank; Clark, Alex S.; Wu, Che Wen; Collins, Matthew J.; Xiong, Chunle; Schreiber, Andreas; Katzschmann, Fabian; Eilenberger, Falk; Schiek, Roland; Sohler, Wolfgang; Mitchell, Arnan; Silberhorn, Christine; Eggleton, Benjamin J.; Pertsch, Thomas; Sukhorukov, Andrey A.; Neshev, Dragomir N.; Kivshar, Yuri S.
2014-07-01
We demonstrate a nonlinear optical chip that generates photons with reconfigurable nonclassical spatial correlations. We employ a quadratic nonlinear waveguide array, where photon pairs are generated through spontaneous parametric down-conversion and simultaneously spread through quantum walks between the waveguides. Because of the quantum interference of these cascaded quantum walks, the emerging photons can become entangled over multiple waveguide positions. We experimentally observe highly nonclassical photon-pair correlations, confirming the high fidelity of on-chip quantum interference. Furthermore, we demonstrate biphoton-state tunability by spatial shaping and frequency tuning of the classical pump beam.
Masking effects of speech and music: does the masker's hierarchical structure matter?
Shi, Lu-Feng; Law, Yvonne
2010-04-01
Speech and music are time-varying signals organized by parallel hierarchical rules. Through a series of four experiments, this study compared the masking effects of single-talker speech and instrumental music on speech perception while manipulating the complexity of hierarchical and temporal structures of the maskers. Listeners' word recognition was found to be similar between hierarchically intact and disrupted speech or classical music maskers (Experiment 1). When sentences served as the signal, significantly greater masking effects were observed with disrupted than intact speech or classical music maskers (Experiment 2), although not with jazz or serial music maskers, which differed from the classical music masker in their hierarchical structures (Experiment 3). Removing the classical music masker's temporal dynamics or partially restoring it affected listeners' sentence recognition; yet, differences in performance between intact and disrupted maskers remained robust (Experiment 4). Hence, the effect of structural expectancy was largely present across maskers when comparing them before and after their hierarchical structure was purposefully disrupted. This effect seemed to lend support to the auditory stream segregation theory.
Hyperresonance Unifying Theory and the resulting Law
NASA Astrophysics Data System (ADS)
Omerbashich, Mensur
2012-07-01
Hyperresonance Unifying Theory (HUT) is herein conceived based on theoretical and experimental geophysics, as that absolute extension of both Multiverse and String Theories, in which all universes (the Hyperverse) - of non-prescribed energies and scales - mutually orbit as well as oscillate in tune. The motivation for this is to explain oddities of "attraction at a distance" and physical unit(s) attached to the Newtonian gravitational constant G. In order to make sure HUT holds absolutely, we operate over non-temporal, unitless quantities and quantities with derived units only. A HUT's harmonic geophysical localization (here for the Earth-Moon system; the Georesonator) is indeed achieved for mechanist and quantum scales, in the form of the Moon's Equation of Levitation (of Anti-gravity). HUT holds true for our Solar system the same as its localized equation holds down to the precision of terrestrial G-experiments, regardless of the scale: to 10^-11 and 10^-39 for mechanist and quantum scales, respectively. Due to its absolute accuracy (within NIST experimental limits), the derived equation is regarded as a law. HUT can indeed be demonstrated for our entire Solar system in various albeit empirical ways. In summary, HUT shows: (i) how classical gravity can be expressed in terms of scale and the speed of light; (ii) the tuning-forks principle is universal; (iii) the body's fundamental oscillation note is not a random number as previously believed; (iv) earthquakes of about M6 and stronger arise mainly due to Earth's alignments longer than three days to two celestial objects in our Solar system, whereas M7+ earthquakes occur mostly during two simultaneous such alignments; etc. HUT indicates: (v) quantum physics is objectocentric, i.e. trivial in absolute terms so it cannot be generalized beyond classical mass-bodies; (vi) geophysics is largely due to the magnification of mass resonance; etc. HUT can be extended to multiverse (10^17) and string scales (10^-67) too, providing a constraint to String Theory. HUT is the unifying theory as it demotes classical forces to states of stringdom. The String Theory's paradigm on vibrational rather than particlegenic reality has thus been confirmed.
The current role of high-resolution mass spectrometry in food analysis.
Kaufmann, Anton
2012-05-01
High-resolution mass spectrometry (HRMS), which is used for residue analysis in food, has gained wider acceptance in the last few years. This development is due to the availability of more rugged, sensitive, and selective instrumentation. The benefits provided by HRMS over classical unit-mass-resolution tandem mass spectrometry are considerable. These benefits include the collection of full-scan spectra, which provides greater insight into the composition of a sample. Consequently, the analyst has the freedom to measure compounds without previous compound-specific tuning, the possibility of retrospective data analysis, and the capability of performing structural elucidations of unknown or suspected compounds. HRMS strongly competes with classical tandem mass spectrometry in the field of quantitative multiresidue methods (e.g., pesticides and veterinary drugs). It is one of the most promising tools when moving towards nontargeted approaches. Certain hardware and software issues still have to be addressed by the instrument manufacturers for it to dislodge tandem mass spectrometry from its position as the standard trace analysis tool.
Attention operates uniformly throughout the classical receptive field and the surround
Verhoef, Bram-Ernst; Maunsell, John HR
2016-01-01
Shifting attention among visual stimuli at different locations modulates neuronal responses in heterogeneous ways, depending on where those stimuli lie within the receptive fields of neurons. Yet how attention interacts with the receptive-field structure of cortical neurons remains unclear. We measured neuronal responses in area V4 while monkeys shifted their attention among stimuli placed in different locations within and around neuronal receptive fields. We found that attention interacts uniformly with the spatially-varying excitation and suppression associated with the receptive field. This interaction explained the large variability in attention modulation across neurons, and a non-additive relationship among stimulus selectivity, stimulus-induced suppression and attention modulation that has not been previously described. A spatially-tuned normalization model precisely accounted for all observed attention modulations and for the spatial summation properties of neurons. These results provide a unified account of spatial summation and attention-related modulation across both the classical receptive field and the surround. DOI: http://dx.doi.org/10.7554/eLife.17256.001 PMID:27547989
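The spatially tuned normalization model mentioned above can be illustrated with a toy calculation. The sketch below follows a generic divisive-normalization form (attention-weighted excitatory drive divided by attention-weighted suppressive drive plus a semisaturation constant); it is not the authors' fitted V4 model, and every gain and spatial profile in it is an illustrative assumption.

```python
# Toy sketch of a spatially tuned normalization model of attention.
# Generic divisive-normalization form, not the authors' fitted V4 model;
# all gains and spatial profiles below are illustrative assumptions.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10, 10, 201)              # spatial positions (deg)
excitation = gaussian(x, 0.0, 2.0)          # classical receptive field (narrow)
suppression = gaussian(x, 0.0, 6.0)         # suppressive surround (broad)

def response(stim_positions, attended=None, attn_gain=3.0, sigma_norm=0.1):
    """Response = attention-weighted excitatory drive /
    (attention-weighted suppressive drive + semisaturation constant)."""
    attn = np.ones_like(x)
    if attended is not None:
        attn += (attn_gain - 1.0) * gaussian(x, attended, 2.0)
    stim = sum(gaussian(x, p, 0.5) for p in stim_positions)
    drive = np.sum(attn * excitation * stim)
    suppress = np.sum(attn * suppression * stim)
    return drive / (suppress + sigma_norm)

print("attend in RF:     %.2f" % response([0.0, 6.0], attended=0.0))
print("attend surround:  %.2f" % response([0.0, 6.0], attended=6.0))
```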
Design of high-strength refractory complex solid-solution alloys
Singh, Prashant; Sharma, Aayush; Smirnov, A. V.; ...
2018-03-28
Nickel-based superalloys and near-equiatomic high-entropy alloys containing molybdenum are known for higher temperature strength and corrosion resistance. Yet, complex solid-solution alloys offer a huge design space to tune for optimal properties at slightly reduced entropy. For refractory Mo-W-Ta-Ti-Zr, we showcase KKR electronic structure methods via the coherent-potential approximation to identify alloys over the five-dimensional design space with improved mechanical properties and the necessary global (formation enthalpy) and local (short-range order) stability. Deformation is modeled with classical molecular dynamics simulations, validated from our first-principles data. We predict complex solid-solution alloys of improved stability with greatly enhanced modulus of elasticity (3× at 300 K) over near-equiatomic cases, as validated experimentally, and with higher moduli above 500 K over commercial alloys (2.3× at 2000 K). We also show that optimal complex solid-solution alloys are not described well by classical potentials due to critical electronic effects.
Quantitative Reappraisal of the Helmholtz-Guyton Resonance Theory of Frequency Tuning in the Cochlea
Babbs, Charles F.
2011-01-01
To explore the fundamental biomechanics of sound frequency transduction in the cochlea, a two-dimensional analytical model of the basilar membrane was constructed from first principles. Quantitative analysis showed that axial forces along the membrane are negligible, condensing the problem to a set of ordered one-dimensional models in the radial dimension, for which all parameters can be specified from experimental data. Solutions of the radial models for asymmetrical boundary conditions produce realistic deformation patterns. The resulting second-order differential equations, based on the original concepts of Helmholtz and Guyton, and including viscoelastic restoring forces, predict a frequency map and amplitudes of deflections that are consistent with classical observations. They also predict the effects of an observation hole drilled in the surrounding bone, the effects of curvature of the cochlear spiral, as well as apparent traveling waves under a variety of experimental conditions. A quantitative rendition of the classical Helmholtz-Guyton model captures the essence of cochlear mechanics and unifies the competing resonance and traveling wave theories. PMID:22028708
Konias, Sokratis; Chouvarda, Ioanna; Vlahavas, Ioannis; Maglaveras, Nicos
2005-09-01
Current approaches for mining association rules usually assume that the mining is performed in a static database, where the problem of missing attribute values does not practically exist. However, these assumptions are not preserved in some medical databases, like in a home care system. In this paper, a novel uncertainty rule algorithm is illustrated, namely URG-2 (Uncertainty Rule Generator), which addresses the problem of mining dynamic databases containing missing values. This algorithm requires only one pass over the initial dataset in order to generate the item set, while new metrics corresponding to the notions of Support and Confidence are used. URG-2 was evaluated over two medical databases, randomly introducing multiple missing values for each record's attributes (rates of 5-20% in 5% increments) in the initial dataset. Compared with the classical approach (records with missing values are ignored), the proposed algorithm was more robust in mining rules from datasets containing missing values. In all cases, the difference in preserving the initial rules ranged between 30% and 60% in favour of URG-2. Moreover, due to its incremental nature, URG-2 saved over 90% of the time required for thorough re-mining. Thus, the proposed algorithm can offer a preferable solution for mining in dynamic relational databases.
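For readers unfamiliar with the Support and Confidence metrics referred to above, the sketch below computes them on a toy set of records with missing values, using one simple convention for handling the gaps. It is not the URG-2 algorithm or its incremental machinery; the records, attribute names, and missing-value convention are invented for illustration.

```python
# Sketch of association-rule support/confidence on records with missing values.
# This is NOT the URG-2 algorithm; it only illustrates the classical metrics and
# one simple convention (ignore a record for an itemset only when a needed
# attribute is missing), as a point of comparison.

# Each record maps attribute -> value; None marks a missing value.
records = [
    {"bp": "high", "hr": "normal", "med": "yes"},
    {"bp": "high", "hr": None,     "med": "yes"},
    {"bp": "low",  "hr": "normal", "med": "no"},
    {"bp": "high", "hr": "high",   "med": "yes"},
]

def support(itemset, records):
    """Fraction of records containing the itemset, among records where all
    attributes of the itemset are observed."""
    usable = [r for r in records if all(r.get(a) is not None for a, _ in itemset)]
    if not usable:
        return 0.0
    hits = sum(all(r.get(a) == v for a, v in itemset) for r in usable)
    return hits / len(usable)

def confidence(antecedent, consequent, records):
    s_ante = support(antecedent, records)
    return 0.0 if s_ante == 0 else support(antecedent + consequent, records) / s_ante

rule_lhs = [("bp", "high")]
rule_rhs = [("med", "yes")]
print("support(bp=high)         =", support(rule_lhs, records))
print("conf(bp=high -> med=yes) =", confidence(rule_lhs, rule_rhs, records))
```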
NASA Astrophysics Data System (ADS)
TayyebTaher, M.; Esmaeilzadeh, S. Majid
2017-07-01
This article presents an application of a Model Predictive Controller (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model has been used for the geostationary satellite, derived using the Lagrange equations. Flexibility is also included in the modelling equations. The state-space equations are expressed in order to simplify the controller. Naturally, there is no specific tuning rule for finding the MPC parameters that best fit the desired controller behaviour. As an intelligent optimization method, a genetic algorithm has been used to optimize the performance of the MPC controller by tuning the controller parameters for minimum rise time, settling time, and overshoot of the target point of the flexible structure and of its mode-shape amplitudes, so that large attitude maneuvers become possible. The model includes the geosynchronous orbit environment and geostationary satellite parameters. The simulation results for the flexible satellite with an attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.
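A minimal sketch of the genetic-algorithm tuning loop described above is given below. To keep it self-contained it tunes a PD controller on a toy double-integrator plant against a step-response cost (overshoot plus settling time) instead of the paper's MPC and flexible-satellite model; the plant, cost weights, and GA settings are all illustrative assumptions.

```python
# Minimal sketch of GA-based tuning of controller gains against a step-response
# cost (overshoot + settling time). A PD controller on a toy double-integrator
# stands in for the paper's MPC and flexible-satellite model; all constants
# below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.02, 20.0

def step_response(kp, kd):
    """Simulate x'' = u with u = kp*(1 - x) - kd*x', return the trajectory of x."""
    n = int(T / dt)
    x, v = 0.0, 0.0
    traj = np.empty(n)
    for i in range(n):
        u = kp * (1.0 - x) - kd * v
        v += u * dt
        x += v * dt
        traj[i] = x
    return traj

def cost(gains):
    kp, kd = np.abs(gains)
    y = step_response(kp, kd)
    overshoot = max(0.0, y.max() - 1.0)
    settled = np.abs(y - 1.0) < 0.02
    settle_idx = len(y) if not settled[-1] else np.argmax(settled[::-1].cumprod()[::-1])
    return 5.0 * overshoot + settle_idx * dt

def ga(pop_size=24, generations=30):
    pop = rng.uniform(0.1, 10.0, size=(pop_size, 2))      # [kp, kd] candidates
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)    # uniform crossover
            child = child + rng.normal(0, 0.2, size=2)     # Gaussian mutation
            children.append(np.clip(child, 0.01, 20.0))
        pop = np.vstack([parents, children])
    return pop[np.argmin([cost(ind) for ind in pop])]

print("tuned [kp, kd]:", ga())
```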
Cornering natural SUSY at LHC Run II and beyond
Buckley, Matthew R.; Feld, David; Macaluso, Sebastian; ...
2017-08-25
We derive the latest constraints on various simplified models of natural SUSY with light higgsinos, stops and gluinos, using a detailed and comprehensive reinterpretation of the most recent 13 TeV ATLAS and CMS searches with ~15 fb^-1 of data. We discuss the implications of these constraints for fine-tuning of the electroweak scale. While the most “vanilla” version of SUSY (the MSSM with R-parity and flavor-degenerate sfermions) with 10% fine-tuning is ruled out by the current constraints, models with decoupled valence squarks or reduced missing energy can still be fully natural. However, in all of these models, the mediation scale must be extremely low (<100 TeV). We conclude by considering the prospects for the high-luminosity LHC era, where we expect the current limits on particle masses to improve by up to ~1 TeV, and discuss further model-building directions for natural SUSY that are motivated by this work.
The Interferometric Measurement of Phase Mismatch in Potential Second Harmonic Generators.
NASA Astrophysics Data System (ADS)
Sinofsky, Edward Lawrence
This dissertation combines aspects of lasers, nonlinear optics, and interferometry to measure the linear optical properties involved in phase-matched second harmonic generation (SHG). A new measuring technique has been developed to rapidly analyze the phase-matching performance of potential SHGs. The data take the form of interferograms produced by the self-referencing nonlinear Fizeau interferometer (NLF) and correctly predict when phase-matched SHG will occur in the sample wedge. Data extracted from the interferograms produced by the NLF allow us to predict both phase-matching temperatures for noncritically phase-matchable crystals and crystal orientation for angle-tuned crystals. Phase-matching measurements can be made for both Type I and Type II configurations. Phase-mismatch measurements were made at the fundamental wavelength of 1.32 μm for calcite, lithium niobate, and gadolinium molybdate (GMO). Similar measurements were made at 1.06 μm for calcite. Phase-matched SHG was demonstrated in calcite, lithium niobate, and KTP, while phase matching by temperature tuning is ruled out for GMO.
Extended theory of harmonic maps connects general relativity to chaos and quantum mechanism
Ren, Gang; Duan, Yi-Shi
2017-07-20
General relativity and quantum mechanics are two separate sets of rules of modern physics explaining how nature works. Both theories are accurate, but the direct connection between the two has not yet been clarified. Recently, researchers have blurred the line between classical and quantum physics by connecting chaos and entanglement. Here, we show that Duan's extended harmonic map (HM) theory, which admits the solutions of general relativity, can also admit the solutions of the classic chaos equations and even the solution of the Schrödinger equation in quantum physics, suggesting that the extended theory of harmonic maps may act as a universal theory of physics.
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
NASA Astrophysics Data System (ADS)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger
2017-04-01
The classical treatment of plasmonics is insufficient at the nanometer-scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions—the Feibelman d parameters—in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; particularly, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.
Three-dimensional shape perception from chromatic orientation flows
Zaidi, Qasim; Li, Andrea
2010-01-01
The role of chromatic information in 3-D shape perception is controversial. We resolve this controversy by showing that chromatic orientation flows are sufficient for accurate perception of 3-D shape. Chromatic flows required less cone contrast to convey shape than did achromatic flows, thus ruling out luminance artifacts as a problem. Luminance artifacts were also ruled out by a protanope’s inability to see 3-D shape from chromatic flows. Since chromatic orientation flows can only be extracted from retinal images by neurons that are responsive to color modulations and selective for orientation, the psychophysical results also resolve the controversy over the existence of such neurons. In addition, we show that identification of 3-D shapes from chromatic flows can be masked by luminance modulations, indicating that it is subserved by orientation-tuned neurons sensitive to both chromatic and luminance modulations. PMID:16961963
Synchrony detection and amplification by silicon neurons with STDP synapses.
Bofill-i-petit, Adria; Murray, Alan F
2004-09-01
Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process and still generates binary weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
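A software sketch of a pair-based STDP rule with a tunable degree of weight dependence is shown below. The chip implements the rule in analog VLSI; the time constants, learning rates, and the weight-dependence exponent used here are illustrative assumptions rather than the circuit's parameters.

```python
# Software sketch of pair-based STDP with a tunable degree of weight dependence.
# The chip implements this in analog VLSI; the constants below are illustrative
# assumptions, not the circuit's parameters.
import numpy as np

TAU_PLUS, TAU_MINUS = 20e-3, 20e-3   # STDP time constants (s)
A_PLUS, A_MINUS = 0.01, 0.012        # learning rates
W_MAX = 1.0

def stdp_update(w, dt_spike, mu=0.5):
    """Update weight w for a pre/post spike pair with dt_spike = t_post - t_pre.
    mu = 0 gives weight-independent (additive) STDP, mu = 1 fully multiplicative;
    intermediate mu gives the moderate weight dependence discussed above."""
    if dt_spike > 0:      # pre before post -> potentiation
        dw = A_PLUS * (1.0 - w / W_MAX) ** mu * np.exp(-dt_spike / TAU_PLUS)
    else:                 # post before pre -> depression
        dw = -A_MINUS * (w / W_MAX) ** mu * np.exp(dt_spike / TAU_MINUS)
    return float(np.clip(w + dw, 0.0, W_MAX))

# Drive one synapse with random pre/post timing differences and watch the weight.
rng = np.random.default_rng(0)
w = 0.5
for _ in range(2000):
    dt_spike = rng.normal(5e-3, 10e-3)   # mostly causal (pre slightly before post)
    w = stdp_update(w, dt_spike, mu=0.3)
print("final weight:", round(w, 3))
```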
Generalized rules for the optimization of elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Eyal, Eran; Bahar, Ivet
2009-03-01
Elastic network models (ENMs) are widely employed for approximating the coarse-grained equilibrium dynamics of proteins using only a few parameters. An area of current focus is improving the predictive accuracy of ENMs by fine-tuning their force constants to fit specific systems. Here we introduce a set of general rules for assigning ENM force constants to residue pairs. Using a novel method, we construct ENMs that optimally reproduce experimental residue covariances from NMR models of 68 proteins. We analyze the optimal interactions in terms of amino acid types, pair distances and local protein structures to identify key factors in determining the effective spring constants. When applied to several unrelated globular proteins, our method shows an improved correlation with experiment over a standard ENM. We discuss the physical interpretation of our findings as well as its implications in the fields of protein folding and dynamics.
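For context, the sketch below builds a standard Gaussian network model with a single uniform spring constant and obtains the predicted residue covariance from the pseudo-inverse of the Kirchhoff matrix. The paper's contribution, optimizing residue-pair-specific force constants against NMR covariances, is not reproduced; the toy coordinates and cutoff are assumptions.

```python
# Minimal sketch of a standard elastic (Gaussian) network model with a uniform
# spring constant: covariances come from the pseudo-inverse of the Kirchhoff
# matrix. Pair-specific optimized force constants are not reproduced here;
# the coordinates and cutoff below are illustrative.
import numpy as np

def kirchhoff(coords, cutoff=7.5, gamma=1.0):
    """Build the GNM Kirchhoff (connectivity) matrix from C-alpha coordinates."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = -gamma * (d < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))   # diagonal = number of contacts * gamma
    return K

def residue_covariance(K, kT=1.0):
    """Predicted residue-fluctuation covariance ~ kT * pseudo-inverse of K."""
    return kT * np.linalg.pinv(K)

# Toy 'protein': 30 residues on a noisy helix.
rng = np.random.default_rng(2)
t = np.arange(30) * 100.0 * np.pi / 180.0
coords = np.stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * np.arange(30)], axis=1)
coords += rng.normal(0, 0.2, coords.shape)

C = residue_covariance(kirchhoff(coords))
print("predicted mean-square fluctuations (first 5):", np.round(np.diag(C)[:5], 3))
```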
Workplace etiquette for the medical practice employee.
Hills, Laura
2010-01-01
Medical practice workplace etiquette is slowly being modified and fine-tuned. New workplace etiquette rules have become necessary because of advances in communications technology, shifting norms, and expectations of what constitutes good manners. Today's medical practice employees must concern themselves with traditional workplace manners but also the manners that come into play when they make or receive cell phone calls, text messages, and e-mails, and when they use social networking media outside of work. This article offers 25 rules for good manners in the medical practice that relate to the ways employees interact with people today, whether face-to-face or when using electronic communications technologies. It offers practical guidelines for making introductions both inside and outside the medical practice. This article also provides a self-quiz to help medical practice employees assess their workplace etiquette intelligence and 12 tips for good workplace table manners.
An integrated theory of the mind.
Anderson, John R; Bothell, Daniel; Byrne, Michael D; Douglass, Scott; Lebiere, Christian; Qin, Yulin
2004-10-01
Adaptive control of thought-rational (ACT-R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT-R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert. © 2004 APA.
Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Castellano, Timothy
1991-01-01
The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy Logic Control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets allow only full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
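As a concrete illustration of partial membership and rule evaluation, the sketch below defines triangular membership functions and a three-rule controller with singleton consequents. The membership functions and rules are invented for illustration and are not taken from the controllers discussed in this work.

```python
# Minimal illustration of fuzzy (partial) membership and a tiny rule evaluation.
# The membership functions and three-rule base are invented for illustration;
# they are not the rule base of the controllers discussed above.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(error):
    # Fuzzify: degree to which the error is 'negative', 'zero', 'positive'.
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # Rule base (singleton consequents): IF error is negative THEN u = -1, etc.
    rules = [(mu_neg, -1.0), (mu_zero, 0.0), (mu_pos, +1.0)]
    # Defuzzify with a membership-weighted average of the consequents.
    total = sum(mu for mu, _ in rules)
    return sum(mu * u for mu, u in rules) / total if total > 0 else 0.0

for e in (-1.5, -0.3, 0.0, 0.7):
    print(f"error={e:+.1f}  ->  control u={controller(e):+.2f}")
```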
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) traditional method using Miner's rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
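The two damage estimates described above can be sketched as follows, assuming an S-N curve of the form N(S) = A*S^(-b) and a narrowband Gaussian stress whose cycle amplitudes follow a Rayleigh distribution with scale equal to the stress rms. The function names and example values are assumptions for illustration, not FATIG's actual interface or defaults.

```python
# Sketch of the two probabilistic fatigue-damage estimates described above,
# assuming an S-N curve N(S) = A * S**(-b) and a narrowband Gaussian stress whose
# cycle amplitudes follow a Rayleigh distribution with scale sigma (stress rms).
# Variable names and example values are assumptions, not FATIG's interface.
import numpy as np
from math import gamma

def damage_rayleigh_sum(sigma, n_cycles, A, b, s_max_factor=3.0, nbins=300):
    """(a) Discrete Miner's-rule sum over Rayleigh-distributed amplitudes
    truncated at s_max_factor * sigma."""
    s = np.linspace(1e-6, s_max_factor * sigma, nbins)
    ds = s[1] - s[0]
    pdf = (s / sigma**2) * np.exp(-s**2 / (2.0 * sigma**2))   # Rayleigh pdf
    cycles_per_bin = n_cycles * pdf * ds
    cycles_to_failure = A * s**(-b)
    return float(np.sum(cycles_per_bin / cycles_to_failure))

def damage_closed_form(sigma, n_cycles, A, b):
    """(b) Integral form of Miner's rule over all amplitudes:
    D = (n_cycles / A) * (sqrt(2) * sigma)**b * Gamma(1 + b/2)."""
    return (n_cycles / A) * (np.sqrt(2.0) * sigma)**b * gamma(1.0 + b / 2.0)

sigma, n_cycles, A, b = 30.0, 1e7, 1e14, 4.0   # stress rms, cycles, S-N constants
print("Miner sum (<= 3 sigma):", damage_rayleigh_sum(sigma, n_cycles, A, b))
print("Closed form (all S):   ", damage_closed_form(sigma, n_cycles, A, b))
print("fatigue life factor ~ 1/D:", 1.0 / damage_closed_form(sigma, n_cycles, A, b))
```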
A symbolic/subsymbolic interface protocol for cognitive modeling
Simen, Patrick; Polk, Thad
2009-01-01
Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
Finite-width Laplace sum rules for 0-+ pseudoscalar glueball in the instanton vacuum model
NASA Astrophysics Data System (ADS)
Wang, Feng; Chen, Junlong; Liu, Jueping
2015-10-01
The correlation function of the 0-+ pseudoscalar glueball current is calculated based on the semiclassical expansion for quantum chromodynamics (QCD) in the instanton liquid background. Besides taking the pure classical contribution from instantons and the perturbative one into account, we calculate the contribution arising from the interaction (or the interference) between instantons and the quantum gluon fields, which is infrared free and more important than the pure perturbative one. Instead of the usual zero-width approximation for the resonances, the Breit-Wigner form with a correct threshold behavior for the spectral function of the finite-width resonance is adopted. The properties of the 0-+ pseudoscalar glueball are investigated via a family of the QCD Laplacian sum rules. A consistency between the subtracted and unsubtracted sum rules is very well justified. The values of the mass, decay width, and coupling constants for the 0-+ resonance in which the glueball fraction is dominant are obtained.
A cybernetic theory of morality and moral autonomy.
Chambers, J
2001-04-01
Human morality may be thought of as a negative feedback control system in which moral rules are reference values, and moral disapproval, blame, and punishment are forms of negative feedback given for violations of the moral rules. In such a system, if moral agents held each other accountable, moral norms would be enforced effectively. However, even a properly functioning social negative feedback system could not explain acts in which individual agents uphold moral rules in the face of contrary social pressure. Dr. Frances Kelsey, who withheld FDA approval for thalidomide against intense social pressure, is an example of the degree of individual moral autonomy possible in a hostile environment. Such extreme moral autonomy is possible only if there is internal, psychological negative feedback, in addition to external, social feedback. Such a cybernetic model of morality and moral autonomy is consistent with certain aspects of classical ethical theories.
A networked voting rule for democratic representation
NASA Astrophysics Data System (ADS)
Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir
2018-03-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process.
White Pre-Service Teachers and "De-Privileged Spaces"
ERIC Educational Resources Information Center
Adair, Jennifer
2008-01-01
In their classic article, "Culture as Disability," McDermott and Varenne (1995) retell the fable of the seeing man who, upon finding himself in the "country of the blind" thought he could easily rule it. His efforts were fruitless because he could not make sense of their world. Daily life was set up for the blind to be successful. The seeing man…
Manipulating Public Expectations; Pre- and Postprimary Statements in the '76 Campaign.
ERIC Educational Resources Information Center
Freshley, Dwight L.
Predicting the outcome of a primary election gives a candidate more exposure to the press, gives him or her a chance to predict modestly and then look better than the prediction, and helps create interest in the election and thereby increase voter turnout. During the 1976 Presidential primaries, most candidates adhered to the classic rule to make…
Analysis and Synthesis of Adaptive Neural Elements and Assembles
1993-09-30
A model of an Aplysia sensory neuron was developed that reflects the subcellular processes underlying activity-dependent neuromodulation. This single-… neuromodulation learning rule could simulate some higher-order features of classical conditioning, such as second-order conditioning and blocking. During the … reporting period, simulations were used to test the hypothesis that activity-dependent neuromodulation could also support operant conditioning. …
NASA Astrophysics Data System (ADS)
Plastun, A. T.; Tikhonova, O. V.; Malygin, I. V.
2018-02-01
The paper presents methods of producing a periodically varying different-pole magnetic field in low-power electrical machines. The authors consider classical designs of electrical machines and machines with ring windings in the armature, as well as structural features and calculated parameters of the magnetic circuit for these machines.
Dense matter theory: A simple classical approach
NASA Astrophysics Data System (ADS)
Savić, P.; Čelebonović, V.
1994-07-01
In the sixties, the first author (P. Savić) and R. Kašanin started developing a mean-field theory of dense matter. It is based on the Coulomb interaction, supplemented by a microscopic selection rule and a set of experimentally founded postulates. Applications of the theory range from the calculation of models of planetary internal structure to DAC experiments.
NASA Astrophysics Data System (ADS)
Lee, Myeong H.; Dunietz, Barry D.; Geva, Eitan
2014-03-01
We present a methodology to obtain the photo-induced electron transfer rate constant in organic photovoltaic (OPV) materials within the framework of Fermi's golden rule, using inputs obtained from first-principles electronic structure calculations. Within this approach, the nuclear vibrational modes are treated quantum-mechanically and a short-time approximation is avoided, in contrast to the classical Marcus theory where these modes are treated classically within the high-temperature and short-time limits. We demonstrate our methodology on the boron-subphthalocyanine-chloride/C60 OPV system to determine the rate constants of the electron transfer and electron recombination processes upon photo-excitation. We consider two representative donor/acceptor interface configurations to investigate the effect of interface configuration on the charge transfer characteristics of OPV materials. In addition, we determine the time scale of the excited-state populations by employing a master equation after obtaining the rate constants for all accessible electronic transitions. This work is pursued as part of the Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the US Department of Energy Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0000957.
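For reference, the classical Marcus expression that the Fermi's-golden-rule treatment is compared against can be evaluated directly, as in the sketch below. The coupling, reorganization energy, and driving force used are placeholders, not the computed values for the boron-subphthalocyanine-chloride/C60 interface.

```python
# Sketch of the classical Marcus rate the golden-rule treatment is compared with:
# k = (2*pi/hbar) * |H_DA|^2 / sqrt(4*pi*lambda*kB*T) * exp(-(dG+lambda)^2 / (4*lambda*kB*T)).
# The coupling, reorganization energy, and driving force below are placeholders,
# not the computed values for the subphthalocyanine-chloride/C60 interface.
import numpy as np

HBAR = 6.582119569e-16   # eV*s
KB = 8.617333262e-5      # eV/K

def marcus_rate(h_da, lam, dg, temperature=300.0):
    """Classical Marcus electron-transfer rate (s^-1); all energies in eV."""
    kbt = KB * temperature
    prefactor = (2.0 * np.pi / HBAR) * h_da**2 / np.sqrt(4.0 * np.pi * lam * kbt)
    return prefactor * np.exp(-(dg + lam) ** 2 / (4.0 * lam * kbt))

# Placeholder parameters (eV): coupling, reorganization energy, driving force.
print("normal region:   %.3e s^-1" % marcus_rate(h_da=0.01, lam=0.3, dg=-0.2))
print("inverted region: %.3e s^-1" % marcus_rate(h_da=0.01, lam=0.3, dg=-0.9))
```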
Quantum algorithm for association rules mining
NASA Astrophysics Data System (ADS)
Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan
2016-10-01
Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database that has a large number of transactions and items, the task of ARM is to acquire the consumption habits of customers by discovering the relationships between itemsets (sets of items). In this paper, we address ARM in the quantum setting and propose a quantum algorithm for the key part of ARM: finding the frequent itemsets among the candidate itemsets and acquiring their supports. Specifically, for the case in which there are M_f^(k) frequent k-itemsets among the M_c^(k) candidate k-itemsets (M_f^(k) ≤ M_c^(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification with complexity O(k√(M_c^(k) M_f^(k))/ε), where ε is the error for estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm, whose complexity is O(k M_c^(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and M_c^(k) in the best case, when M_f^(k) ≪ M_c^(k), and on ε alone in the worst case, when M_f^(k) ≈ M_c^(k).
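The classical sampling-based baseline mentioned above is easy to state concretely: draw transactions uniformly at random and take the sample frequency of the itemset, with roughly O(1/ε²) samples needed for additive error ε. The sketch below does exactly that on an invented toy database; only the estimator itself is meant to reflect the baseline.

```python
# Classical sampling-based estimate of itemset support, the baseline the quantum
# algorithm is compared against. The transactions here are a toy synthetic
# database; only the estimator itself is the point.
import numpy as np

rng = np.random.default_rng(3)
N_ITEMS, N_TRANSACTIONS = 20, 100_000
# Each transaction: a random subset of items (item i present with probability p_i).
p_item = rng.uniform(0.05, 0.6, size=N_ITEMS)
database = rng.random((N_TRANSACTIONS, N_ITEMS)) < p_item

def sampled_support(database, itemset, eps=0.01, delta=0.05):
    """Estimate the support of `itemset` to additive error eps with probability
    >= 1 - delta, using m = O(log(1/delta)/eps^2) sampled transactions (Hoeffding)."""
    m = int(np.ceil(np.log(2.0 / delta) / (2.0 * eps**2)))
    idx = rng.integers(0, len(database), size=m)
    sample = database[idx][:, itemset]
    return sample.all(axis=1).mean(), m

itemset = [0, 3, 7]
est, m = sampled_support(database, itemset, eps=0.01)
exact = database[:, itemset].all(axis=1).mean()
print(f"samples used: {m},  estimated support: {est:.3f},  exact support: {exact:.3f}")
```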
NASA Astrophysics Data System (ADS)
Botyánszki, János; Kasen, Daniel; Plewa, Tomasz
2018-01-01
The classic single-degenerate model for the progenitors of Type Ia supernovae (SNe Ia) predicts that the supernova ejecta should be enriched with solar-like abundance material stripped from the companion star. Spectroscopic observations of normal SNe Ia at late times, however, have not resulted in definite detection of hydrogen. In this Letter, we study line formation in SNe Ia at nebular times using non-LTE spectral modeling. We present, for the first time, multidimensional radiative transfer calculations of SNe Ia with stripped material mixed in the ejecta core, based on hydrodynamical simulations of ejecta-companion interaction. We find that interaction models with main-sequence companions produce significant Hα emission at late times, ruling out these types of binaries as viable progenitors of SNe Ia. We also predict significant He I line emission at optical and near-infrared wavelengths for both hydrogen-rich and helium-rich material, providing an additional observational probe of stripped ejecta. We produce models with reduced stripped masses and find a more stringent mass limit of M_st ≲ 1 × 10^-4 M_⊙ of stripped companion material for SN 2011fe.
New fundamental evidence of non-classical structure in the combination of natural concepts.
Aerts, D; Sozzo, S; Veloz, T
2016-01-13
We recently performed cognitive experiments on conjunctions and negations of two concepts with the aim of investigating the combination problem of concepts. Our experiments confirmed the deviations (conceptual vagueness, underextension, overextension etc.) from the rules of classical (fuzzy) logic and probability theory observed by several scholars in concept theory, while our data were successfully modelled in a quantum-theoretic framework developed by ourselves. In this paper, we isolate a new, very stable and systematic pattern of violation of classicality that occurs in concept combinations. In addition, the strength and regularity of this non-classical effect leads us to believe that it occurs at a more fundamental level than the deviations observed up to now. It is our opinion that we have identified a deep non-classical mechanism determining not only how concepts are combined but, rather, how they are formed. We show that this effect can be faithfully modelled in a two-sector Fock space structure, and that it can be exactly explained by assuming that human thought is the superposition of two processes, a 'logical reasoning', guided by 'logic', and a 'conceptual reasoning', guided by 'emergence', and that the latter generally prevails over the former. All these findings provide new fundamental support to our quantum-theoretic approach to human cognition. © 2015 The Author(s).
Passive control of discrete-frequency tones generated by coupled detuned cascades
NASA Astrophysics Data System (ADS)
Sawyer, S.; Fleeter, S.
2003-07-01
Discrete-frequency tones generated by rotor-stator interactions are of particular concern in the design of fans and compressors. Classical theory considers an isolated flat-plate cascade of identical uniformly spaced airfoils. The current analysis extends this tuned isolated cascade theory to consider coupled aerodynamically detuned cascades where aerodynamic detuning is accomplished by changing the chord of alternate rotor blades and stator vanes. In a coupled cascade analysis, the configuration of the rotor influences the downstream acoustic response of the stator, and the stator configuration influences the upstream acoustic response of the rotor. This coupled detuned cascade unsteady aerodynamic model is first applied to a baseline tuned stage. This baseline stage is then aerodynamically detuned by replacing alternate rotor blades and stator vanes with decreased chord airfoils. The nominal aerodynamically detuned stage configuration is then optimized, with the stage acoustic response decreased 13 dB upstream and 1 dB downstream at the design operating condition. A reduction in the acoustic response of the optimized aerodynamically detuned stage is then demonstrated over a range of operating conditions.
Comtet, Jean; Niguès, Antoine; Kaiser, Vojtech; Coasne, Benoit; Bocquet, Lydéric; Siria, Alessandro
2017-01-01
Room-temperature ionic liquids (RTILs) are new materials with fundamental importance for energy storage and active lubrication. They are unusual liquids, which challenge the classical frameworks of electrolytes, and whose behavior at electrified interfaces remains elusive, with exotic responses relevant to their electrochemical activity. By means of tuning-fork-based AFM nanorheological measurements, we explore here the properties of confined RTILs, unveiling a dramatic change of the RTIL towards a solid-like phase below a threshold thickness, pointing to capillary freezing in confinement. This threshold is related to the metallic nature of the confining materials, with more metallic surfaces facilitating freezing. This is interpreted in terms of the shift of the freezing transition, taking into account the influence of the electronic screening on RTIL wetting of the confining surfaces. Our findings provide fresh views on the properties of confined RTILs, with implications for their properties inside nanoporous metallic structures, and suggest applications to tune nanoscale lubrication with phase-changing RTILs, by varying the nature and patterning of the substrate, and the application of active polarisation. PMID:28346432
Electronic and thermoelectric properties of atomically thin C3Si3/C and C3Ge3/C superlattices.
Ali, Muhammad; Pi, Xiaodong; Liu, Yong; Yang, Deren
2017-12-01
The nanostructuring of graphene into superlattices offers the possibility of tuning both the electronic and thermal properties of graphene. Using classical and quantum mechanical calculations, we have investigated the electronic and thermoelectric properties of the atomically thin superlattice of C3Si3/C (C3Ge3/C) formed by the incorporation of Si (Ge) atoms into graphene. The bandgap and phonon thermal conductivity of C3Si3/C (C3Ge3/C) are 0.54 (0.51) eV and 15.48 (12.64) W m^-1 K^-1, respectively, while the carrier mobility of C3Si3/C (C3Ge3/C) is 1.285 × 10^5 (1.311 × 10^5) cm^2 V^-1 s^-1 at 300 K. The thermoelectric figure of merit for C3Si3/C (C3Ge3/C) can be optimized via tuning of the carrier concentration to obtain a prominent ZT value of 1.95 (2.72). © 2017 IOP Publishing Ltd.
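For orientation, the figure of merit quoted above combines the transport parameters through the standard textbook definition below (a general relation, not a result of this paper), with S the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ_e, κ_l the electronic and lattice thermal conductivities:

```latex
% Standard dimensionless thermoelectric figure of merit (general definition,
% not taken from the abstract above).
ZT = \frac{S^{2}\sigma T}{\kappa_{e} + \kappa_{l}}
```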
NASA Astrophysics Data System (ADS)
Comtet, Jean; Niguès, Antoine; Kaiser, Vojtech; Coasne, Benoit; Bocquet, Lydéric; Siria, Alessandro
2017-06-01
Room-temperature ionic liquids (RTILs) are new materials with fundamental importance for energy storage and active lubrication. They are unusual liquids, which challenge the classical frameworks of electrolytes, whose behaviour at electrified interfaces remains elusive, with exotic responses relevant to their electrochemical activity. Using tuning-fork-based atomic force microscope nanorheological measurements, we explore here the properties of confined RTILs, unveiling a dramatic change of the RTIL towards a solid-like phase below a threshold thickness, pointing to capillary freezing in confinement. This threshold is related to the metallic nature of the confining materials, with more metallic surfaces facilitating freezing. This behaviour is interpreted in terms of the shift of the freezing transition, taking into account the influence of the electronic screening on RTIL wetting of the confining surfaces. Our findings provide fresh views on the properties of confined RTIL with implications for their properties inside nanoporous metallic structures, and suggests applications to tune nanoscale lubrication with phase-changing RTILs, by varying the nature and patterning of the substrate, and application of active polarization.
Pagano, Justin K.; Dorhout, Jacquelyn M.; Czerwinski, Kenneth R.; ...
2016-03-18
This work demonstrates that the oxidation state and chemistry of uranium hydrides can be tuned with temperature and the stoichiometry of phenylsilane. The trivalent uranium hydride [(C5Me5)2U–H]x (5) was found to comprise an equilibrium mixture of U(III) hydrides in solution at ambient temperature. A single U(III) species can be selectively prepared by treating (C5Me5)2UMe2 (4) with 2 equiv of phenylsilane at 50 °C. The U(III) system is a potent reducing agent and displayed chemistry distinct from the U(IV) system [(C5Me5)2U(H)(μ-H)]2 (2), which was harnessed to prepare a variety of organometallic complexes, including (C5Me5)2U(dmpe)(H) (6) and the novel uranium(IV) metallacyclopentadiene complex (C5Me5)2U(C4Me4) (11).
Bacterial hybrid histidine kinases in plant-bacteria interactions.
Borland, Stéphanie; Prigent-Combaret, Claire; Wisniewski-Dyé, Florence
2016-10-01
Two-component signal transduction systems are essential for many bacteria to maintain homeostasis and adapt to environmental changes. They typically involve a membrane-bound histidine kinase that senses stimuli, autophosphorylates in the transmitter region and then transfers the phosphoryl group to the receiver domain of a cytoplasmic response regulator that mediates appropriate changes in bacterial physiology. Although usually found on distinct proteins, the transmitter and receiver modules are sometimes fused into a so-called hybrid histidine kinase (HyHK). Such a structure results in multiple phosphate transfers that are believed to provide extra fine-tuning mechanisms and more regulatory checkpoints than classical phosphotransfers. HyHK-based regulation may be crucial for finely tuning gene expression in a heterogeneous environment such as the rhizosphere, where intricate plant-bacteria interactions occur. In this review, we focus on the roles fulfilled by bacterial HyHKs in plant-associated bacteria, providing recent findings on the mechanisms underlying their signalling properties. Recent insights into the additive regulatory properties conferred by the tethered receiver domain of HyHKs are also addressed.
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is universally used in almost every implementation. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950s [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace it with an equivalent instance I' with |I'| < |I|, then that would imply P = NP in classical complexity.
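As a concrete illustration of such a reduction rule (a standard textbook example, not taken from the abstract above), the sketch below applies Buss' kernelization for k-Vertex Cover: any vertex of degree greater than k must belong to every cover of size at most k, and once no such vertex remains, a yes-instance can have at most k² edges.

```python
def buss_kernel(edges, k):
    """Illustrative kernelization (Buss' rule) for k-Vertex Cover.

    Any vertex of degree > k must be in every vertex cover of size <= k, so it
    can be removed while decreasing k; afterwards, a yes-instance has at most
    k^2 edges.  Returns (reduced_edges, reduced_k, forced_vertices), or None
    if the instance is already recognizably a no-instance.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:                      # v must be in the cover
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:                 # too many edges left: no-instance
        return None
    return edges, k, forced

# toy usage on a hypothetical instance
print(buss_kernel([(1, 2), (1, 3), (1, 4), (5, 6)], k=2))
```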
NASA Astrophysics Data System (ADS)
Kuiper, K.; Condon, D.; Hilgen, F.; Laskar, J.; Mezger, K.; Pälike, H.; Quidelleur, X.; Schaltegger, U.; Sprovieri, M.; Storey, M.; Wijbrans, J. R.
2009-12-01
The principal scientific objective of the Marie Curie Initial Trainings Network GTSnext is to establish the next generation standard Geological Time Scale with unprecedented accuracy, precision and resolution through integration and intercalibration of state-of-the-art numerical dating techniques. Such time scales underlie all fields in the Earth Sciences and their application will significantly contribute to a much enhanced understanding of Earth System evolution. During the last decade deep marine successions were successfully employed to establish an astronomical tuning for the entire Neogene, as incorporated in the standard Geological Time Scale (ATNTS2004). In GTSnext we aim to fine-tune this Neogene time scale, before it can reliably be used to accurately determine phase relations between astronomical forcing and climate response in the Neogene and possibly also the Oligocene. Radio-isotopic dating of late Neogene ash layers offers excellent opportunities for gaining insight into isotope systematics via their independent dating by astronomical tuning. An example of this synergy is the development of astronomically calibrated standards for 40Ar/39Ar geochronology. The cross-calibration between the different methods might also yield information on the fundamental problem of potential residence times in U/Pb dating. Extension of the astronomical time scale into the Paleogene seems limited to ~40 Ma due to the accuracy of the current astronomical solution. However, the 405 kyr eccentricity component is very stable permitting its use in time scale calibrations back to 250 Ma using only this frequency. This cycle is strong and well developed in Oligocene and even Eocene records. Phase relations between cyclic paleo-climate records and the 405 kyr eccentricity cycle are typically straightforward and unambiguous. Therefore, a first-order tuning to ~405 kyr eccentricity can only be revised by shifting the tuning with (multiples of) ~405 kyr. Isotopic age constraints of both U/Pb and 40Ar/39Ar will be used to anchor floating astronomical tunings, but absolute uncertainties in isotopic ages should be less than ± 200 kyr. The Cretaceous is famous for its remarkable cyclic successions of marine pelagic sediments which bear the unmistakable imprint of astronomical climate forcing. As a consequence floating astrochronologies which are based on number of cycles have been developed for significant portions of the Cretaceous, covering a number of geological stages. Unfortunately, such floating time scales provide us only with the duration of stages but not with their age. However, due to significant improvements in numerical astronomical solutions for the Solar System and in the accuracy of radio-isotopic dating we will try to establish a tuned time scale for the Late Cretaceous. Classical cyclic sections in Europe (e.g. Sopelana, Spain) will be used for the tuning, but lack ash beds. Therefore, radio-isotopic age constraints necessary for the tuning will come from ash beds in the Western Interior Basin in North America. Here we will present the first results of the GTSnext project.
On the Construction and Dynamics of Knotted Fields
NASA Astrophysics Data System (ADS)
Kedia, Hridesh
Representing a physical field in terms of its field lines has often enabled a deeper understanding of complex physical phenomena, from Faraday's law of magnetic induction, to the Helmholtz laws of vortex motion, to the free energy density of liquid crystals in terms of the distortions of the lines of the director field. At the same time, the application of ideas from topology--the study of properties that are invariant under continuous deformations--has led to robust insights into the nature of complex physical systems from defects in crystal structures, to the Earth's magnetic field, to topological conservation laws. The study of knotted fields, physical fields in which the field lines encode knots, emerges naturally from the application of topological ideas to the investigation of the physical phenomena best understood in terms of the lines of a field. A knot--a closed loop tangled with itself which cannot be untangled without cutting the loop--is the simplest topologically non-trivial object constructed from a line. Remarkably, knots in the vortex (magnetic field) lines of a dissipationless fluid (plasma) persist forever as they are transported by the flow, stretching and rotating as they evolve. Moreover, deeply entwined with the topology-preserving dynamics of dissipationless fluids and plasmas is an additional conserved quantity--helicity, a measure of the average linking of the vortex (magnetic field) lines in a fluid (plasma)--which has had far-reaching consequences for fluids and plasmas. Inspired by the persistence of knots in dissipationless flows, and their far-reaching physical consequences, we seek to understand the interplay between the dynamics of a field and the topology of its field lines in a variety of systems. While it is easy to tie a knot in a shoelace, tying a knot in the lines of a space-filling field requires contorting the lines everywhere to match the knotted region. The challenge of analytically constructing knotted field configurations has impeded a deeper understanding of the interplay between topology and dynamics in fluids and plasmas. We begin by analytically constructing knotted field configurations which encode a desired knot in the lines of the field, and show that their helicity can be tuned independently of the encoded knot. The nonlinear nature of the physical systems in which these knotted field configurations arise makes their analytical study challenging. We ask if a linear theory such as electromagnetism can allow knotted field configurations to persist with time. We find analytical expressions for an infinite family of knotted solutions to Maxwell's equations in vacuum and elucidate their connections to dissipationless flows. We present a design rule for constructing such persistently knotted electromagnetic fields, which could possibly be used to transfer knottedness to matter such as quantum fluids and plasmas. An important consequence of the persistence of knots in classical dissipationless flows is the existence of an additional conserved quantity, helicity, which has had far-reaching implications. To understand the existence of analogous conserved quantities, we ask if superfluids, which flow without dissipation just like classical dissipationless flows, have an additional conserved quantity akin to helicity.
We address this question using an analytical approach based on defining the particle relabeling symmetry--the symmetry underlying helicity conservation--in superfluids, and find that an analogous conserved quantity exists but vanishes identically owing to the intrinsic geometry of complex scalar fields. Furthermore, to address the question of a "classical limit" of superfluid vortices which recovers classical helicity conservation, we perform numerical simulations of bundles of superfluid vortices, and find behavior akin to classical viscous flows.
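For reference, the helicity invoked throughout this abstract is conventionally defined as below (standard definitions, not expressions taken from the thesis itself); both integrals measure the average linking of the corresponding field lines:

```latex
% Kinetic helicity of a flow with velocity u and vorticity curl u, and
% magnetic helicity of a field B = curl A (standard definitions).
\mathcal{H}_{\mathrm{kin}} = \int \mathbf{u}\cdot(\nabla\times\mathbf{u})\,\mathrm{d}^{3}x,
\qquad
\mathcal{H}_{\mathrm{mag}} = \int \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}^{3}x .
```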
Hammad, Mohanad M; Elshenawy, Ahmed K; El Singaby, M I
2017-01-01
In this work a design for self-tuning non-linear Fuzzy Proportional Integral Derivative (FPID) controller is presented to control position and speed of Multiple Input Multiple Output (MIMO) fully-actuated Autonomous Underwater Vehicles (AUV) to follow desired trajectories. Non-linearity that results from the hydrodynamics and the coupled AUV dynamics makes the design of a stable controller a very difficult task. In this study, the control scheme in a simulation environment is validated using dynamic and kinematic equations for the AUV model and hydrodynamic damping equations. An AUV configuration with eight thrusters and an inverse kinematic model from a previous work is utilized in the simulation. In the proposed controller, Mamdani fuzzy rules are used to tune the parameters of the PID. Nonlinear fuzzy Gaussian membership functions are selected to give better performance and response in the non-linear system. A control architecture with two feedback loops is designed such that the inner loop is for velocity control and outer loop is for position control. Several test scenarios are executed to validate the controller performance including different complex trajectories with and without injection of ocean current disturbances. A comparison between the proposed FPID controller and the conventional PID controller is studied and shows that the FPID controller has a faster response to the reference signal and more stable behavior in a disturbed non-linear environment.
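A minimal sketch of the general idea of fuzzy gain scheduling for a PID loop is given below, assuming Gaussian memberships on the error magnitude and a hypothetical single-input rule table; it illustrates the concept only and is not the MIMO FPID controller described in the abstract.

```python
import math

def gaussian(x, center, sigma):
    """Gaussian membership value of x for a fuzzy set centered at `center`."""
    return math.exp(-((x - center) / sigma) ** 2)

class FuzzyTunedPID:
    """Minimal sketch of a fuzzy self-tuned PID loop (illustrative only; the
    gains, membership centers and single-input rule table are hypothetical,
    not the controller described in the abstract)."""

    def __init__(self, dt=0.01):
        self.dt, self.integral, self.prev_error = dt, 0.0, 0.0
        # (center, (Kp, Ki, Kd)) pairs: one rule per fuzzy set on |error|
        self.rules = [(0.0, (1.0, 0.10, 0.05)),
                      (0.5, (2.0, 0.20, 0.10)),
                      (1.0, (4.0, 0.30, 0.20))]

    def step(self, error):
        # Weighted (centroid-style) blend of the rule consequents
        weights = [gaussian(abs(error), c, 0.3) for c, _ in self.rules]
        total = sum(weights) or 1e-9
        kp, ki, kd = (sum(w * g[i] for w, (_, g) in zip(weights, self.rules)) / total
                      for i in range(3))
        self.integral += error * self.dt
        deriv = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * deriv

pid = FuzzyTunedPID()
print(pid.step(0.8))   # control output for a (hypothetical) position error
```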
NASA Astrophysics Data System (ADS)
Allabakash, S.; Yasodha, P.; Bianco, L.; Venkatramana Reddy, S.; Srinivasulu, P.; Lim, S.
2017-09-01
This paper presents the efficacy of a "tuned" fuzzy logic method at determining the height of the boundary layer using the measurements from a 1280 MHz lower atmospheric radar wind profiler located in Gadanki (13.5°N, 79°E, 375 m above mean sea level), India, and discusses the diurnal and seasonal variations of the measured convective boundary layer over this tropical station. The original fuzzy logic (FL) method estimates the height of the atmospheric boundary layer by combining the information from the range-corrected signal-to-noise ratio, the Doppler spectral width of the vertical velocity, and the vertical velocity itself, measured by the radar, through a series of thresholds and rules, which did not prove to be optimal for our radar system and geographical location. For this reason the algorithm was tuned to perform better on our data set. Atmospheric boundary layer heights obtained by this tuned FL method, the original FL method, and by a "standard method" (that only uses the information from the range-corrected signal-to-noise ratio) are compared with those obtained from potential temperature profiles measured by collocated Global Positioning System radiosondes during the years 2011 and 2013. The comparison shows that the tuned FL method is more accurate than the other methods. Maximum convective boundary layer heights are observed between 14:00 and 15:00 local time (LT = UTC + 5:30) for clear-sky days. These daily maxima are found to be lower during winter and postmonsoon seasons and higher during premonsoon and monsoon seasons, due to net surface radiation and convective processes over this region being more intense during premonsoon and monsoon seasons and less intense in winter and postmonsoon seasons.
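The sketch below illustrates, in toy form, how a fuzzy-logic combination of the three radar moments mentioned above might select a boundary-layer height; the membership shapes, thresholds and equal rule weights are hypothetical placeholders rather than the tuned values of the paper.

```python
import numpy as np

def fuzzy_bl_height(heights, snr, spec_width, w_vel):
    """Toy fuzzy-logic boundary-layer-height estimate from a single profile.

    Combines range-corrected SNR, Doppler spectral width and vertical velocity
    with simple membership functions and equal rule weights.  All shapes and
    thresholds are hypothetical, not the tuned values of the paper.
    """
    def ramp(x, lo, hi):                       # 0 below lo, 1 above hi
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    # Memberships: BL top tends to coincide with an SNR peak, enhanced
    # spectral width and near-zero mean vertical velocity.
    m_snr = ramp(snr, np.percentile(snr, 50), np.percentile(snr, 90))
    m_sw = ramp(spec_width, 0.5, 1.5)          # m/s, hypothetical thresholds
    m_w = 1.0 - ramp(np.abs(w_vel), 0.2, 1.0)  # m/s, hypothetical thresholds
    score = (m_snr + m_sw + m_w) / 3.0         # aggregate over the rules
    return heights[int(np.argmax(score))]

# hypothetical single-profile usage
z = np.arange(100, 4000, 100.0)
print(fuzzy_bl_height(z, snr=np.exp(-(z - 1500) ** 2 / 4e5),
                      spec_width=np.full_like(z, 1.0),
                      w_vel=np.zeros_like(z)))
```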
A weak pattern random creation and scoring method for lithography process tuning
NASA Astrophysics Data System (ADS)
Zhang, Meili; Deng, Guogui; Wang, Mudan; Yu, Shirui; Hu, Xinyi; Du, Chunshan; Wan, Qijian; Liu, Zhengfang; Gao, Gensheng; Kabeel, Aliaa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe
2018-03-01
As the IC technology node moves forward, critical dimensions become smaller and smaller, which brings huge challenges to IC manufacturing. Lithography is one of the most important steps in the whole manufacturing process, and litho hotspots are a major source of yield detractors. Tuning lithographic recipes to cover a wide range of litho hotspots is therefore essential to yield enhancement. During the early technology development stage, foundries have only limited customer layout data for recipe tuning, so collecting enough patterns is critical for process optimization. Treating all accumulated patterns uniformly is neither precise nor practical; instead, scoring the patterns provides a priority and a reference for addressing different patterns more effectively. For example, the weakest group of patterns can be given the most restrictive specs to ensure process robustness. This paper presents a new method for creating realistic, design-like patterns of multiple layers based on design rules using the Layout Schema Generator (LSG) utility, together with a pattern scoring flow using Litho-Friendly Design (LFD) and Pattern Matching. With LSG, many previously unknown patterns can be created for further exploration. Litho simulation through LFD and topological matching using Pattern Matching are then applied to the output patterns of LSG. Finally, the lithographic severity, printability properties and topological distribution of every pattern are collected. After a statistical analysis of the pattern data, every pattern is given a relative score representing its yield-detracting level. By sorting the resulting pattern score tables, weak patterns can be filtered out for further research and process tuning. The pattern generation and scoring flow is demonstrated on a 28nm logic technology node. A weak pattern library is created and scored to help improve recipe coverage of litho hotspots and enhance the reliability of the process.
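As a rough illustration of what a pattern-scoring step can look like, the snippet below min-max normalizes a few severity metrics and combines them into a single weighted score; the metric names, weights and the weighted-sum rule are assumptions for illustration and do not reproduce the LSG/LFD flow of the paper.

```python
def score_patterns(patterns, weights=None):
    """Toy pattern-scoring sketch (illustrative; metric names and the
    weighted-sum rule are hypothetical, not the flow in the paper above).

    `patterns` maps a pattern id to raw severity metrics, e.g. minimum
    simulated CD margin (smaller is worse), EPE (larger is worse) and how
    often the topology occurs.  Each metric is min-max normalized so that
    1.0 means "worst observed", then combined into a single score.
    """
    weights = weights or {"cd_margin": 0.5, "epe": 0.3, "frequency": 0.2}

    def normalize(values, invert=False):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(hi - v) / span if invert else (v - lo) / span for v in values]

    ids = list(patterns)
    cols = {
        "cd_margin": normalize([patterns[i]["cd_margin"] for i in ids], invert=True),
        "epe": normalize([patterns[i]["epe"] for i in ids]),
        "frequency": normalize([patterns[i]["frequency"] for i in ids]),
    }
    scores = {i: sum(weights[m] * cols[m][n] for m in weights) for n, i in enumerate(ids)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)  # weakest first

# hypothetical metrics for three generated patterns
print(score_patterns({
    "p1": {"cd_margin": 2.0, "epe": 4.5, "frequency": 10},
    "p2": {"cd_margin": 5.5, "epe": 1.0, "frequency": 3},
    "p3": {"cd_margin": 1.2, "epe": 5.0, "frequency": 40},
}))
```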
Yang, Jiong; Xi, Lili; Qiu, Wujie; ...
2016-02-26
During the last two decades, we have witnessed great progress in research on thermoelectrics. There are two primary focuses. One is the fundamental understanding of electrical and thermal transport, enabled by the interplay of theory and experiment; the other is the substantial enhancement of the performance of various thermoelectric materials, through synergistic optimisation of those intercorrelated transport parameters. In this article, we review some of the successful strategies for tuning electrical and thermal transport. For electrical transport, we start from the classical but still very active strategy of tuning band degeneracy (or band convergence), then discuss the engineering of carrier scattering, and finally address the concept of conduction channels and conductive networks that emerge in complex thermoelectric materials. For thermal transport, we summarise the approaches for studying thermal transport based on phonon–phonon interactions valid for conventional solids, as well as some quantitative efforts for nanostructures. We also discuss the thermal transport in complex materials with chemical-bond hierarchy, in which a portion of the atoms (or subunits) are weakly bonded to the rest of the structure, leading to an intrinsic manifestation of part-crystalline part-liquid state at elevated temperatures. In this review, we provide a summary of achievements made in recent studies of thermoelectric transport properties, and demonstrate how they have led to improvements in thermoelectric performance by the integration of modern theory and experiment, and point out some challenges and possible directions.
Adaptive robotic control driven by a versatile spiking cerebellar network.
Casellato, Claudia; Antonietti, Alberto; Garrido, Jesus A; Carrillo, Richard R; Luque, Niceto R; Ros, Eduardo; Pedrocchi, Alessandra; D'Angelo, Egidio
2014-01-01
The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust the timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
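A minimal sketch of the idea is given below: the classic BCM weight update with a sliding threshold, preceded by a divisive (contrast-like) normalization of the population response. The learning rate, time constant and normalization form are hypothetical choices, not the exact model of the paper.

```python
import numpy as np

def bcm_normalized_step(W, x, theta, lr=1e-3, tau=0.99, sigma=1e-2):
    """One update of a BCM-like rule with divisive (contrast) normalization.

    Illustrative only: the exact normalization used in the paper may differ.
    W is (n_neurons, n_inputs), x an input patch, theta the per-neuron
    sliding modification threshold tracking E[y^2].
    """
    y = W @ x                                   # linear responses
    y = y / (sigma + np.sqrt(np.mean(y ** 2)))  # divisive population normalization
    dW = lr * np.outer(y * (y - theta), x)      # classic BCM weight change
    theta = tau * theta + (1.0 - tau) * y ** 2  # sliding threshold (low-pass of y^2)
    return W + dW, theta

# toy usage on random "image patches" (hypothetical data)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 64))
theta = np.ones(8)
for _ in range(1000):
    patch = rng.normal(size=64)
    W, theta = bcm_normalized_step(W, patch, theta)
print(W.shape, theta[:3])
```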
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks and for improving learning performance when such knowledge is available, as well as for the extraction of knowledge from trained deep networks and for offering a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Color vision: "OH-site" rule for seeing red and green.
Sekharan, Sivakumar; Katayama, Kota; Kandori, Hideki; Morokuma, Keiji
2012-06-27
Eyes gather information, and color forms an extremely important component of that information, all the more so for animals that forage and navigate within their immediate environment. By using the ONIOM (QM/MM) (ONIOM = our own N-layer integrated molecular orbital plus molecular mechanics) method, we report a comprehensive theoretical analysis of the structure and molecular mechanism of spectral tuning of monkey red- and green-sensitive visual pigments. We show that the interaction of retinal with three hydroxyl-bearing amino acids near the β-ionone ring part of the retinal in opsin, A164S, F261Y, and A269T, increases the electron delocalization, decreases the bond length alternation, and leads to variation in the wavelength of maximal absorbance of the retinal in the red- and green-sensitive visual pigments. On the basis of this analysis, we propose the "OH-site" rule for seeing red and green. This rule is also shown to account for the spectral shifts obtained from hydroxyl-bearing amino acids near the Schiff base in different visual pigments: at site 292 (A292S, A292Y, and A292T) in bovine and at site 111 (Y111) in squid opsins. Therefore, the OH-site rule is shown to be site-specific and not pigment-specific and thus can be used for tracking spectral shifts in any visual pigment.
Dyon proliferation in interacting quantum spin Hall edges
NASA Astrophysics Data System (ADS)
Lee, Shu-Ping; Maciejko, Joseph
We show that a quantum spin Hall system with intra-edge multiparticle backscattering and inter-edge exchange interactions exhibits a modular invariant zero-temperature phase diagram. We establish this through mapping to a classical 2D Coulomb gas with electrically and magnetically charged particles; strong coupling phases in the quantum edge problem correspond to the proliferation of various dyons in the Coulomb gas. Distinct dyon proliferated phases can be accessed by tuning the edge Luttinger parameters, for example using a split gate geometry. This research was supported by NSERC Grant #RGPIN-2014-4608, the Canada Research Chair Program (CRC) and the Canadian Institute for Advanced Research (CIFAR).
Geometric tuning of self-propulsion for Janus catalytic particles
NASA Astrophysics Data System (ADS)
Michelin, Sébastien; Lauga, Eric
2017-02-01
Catalytic swimmers have attracted much attention as alternatives to biological systems for examining collective microscopic dynamics and the response to physico-chemical signals. Yet, understanding and predicting even the most fundamental characteristics of their individual propulsion still raises important challenges. While chemical asymmetry is widely recognized as the cornerstone of catalytic propulsion, different experimental studies have reported that particles with identical chemical properties may propel in opposite directions. Here, we show that, beyond its chemical properties, the detailed shape of a catalytic swimmer plays an essential role in determining its direction of motion, demonstrating the compatibility of the classical theoretical framework with experimental observations.
Marginal elasticity of periodic triangulated origami
NASA Astrophysics Data System (ADS)
Chen, Bryan; Sussman, Dan; Lubensky, Tom; Santangelo, Chris
Origami, the classical art of folding paper, has inspired much recent work on assembling complex 3D structures from planar sheets. Origami, and more generally hinged structures with rigid panels, in which all faces are triangles, have special properties due to a bulk balance of mechanical degrees of freedom and constraints. We study two families of periodic triangulated origami structures, one based on the Miura ori and one based on a kagome-like pattern due to Ron Resch. We point out the consequences of the balance of degrees of freedom and constraints for these "metamaterial plates" and show how the elasticity can be tuned by changing the unit cell geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zanotto, Simone; Melloni, Andrea
By hybrid integration of plasmonic and dielectric waveguide concepts, it is shown that nearly perfect coherent absorption can be achieved in a co-propagating coupler geometry. First, the operating principle of the proposed device is detailed in the context of a more general 2 × 2 lossy coupler formalism. Then, it is shown how to tune the device in a wide region of possible working points, its broadband operation, and the tolerance to fabrication uncertainties. Finally, a complete picture of the electromagnetic modes inside the hybrid structure is analyzed, shining light onto the potentials which the proposed device holds in view of classical and quantum signal processing, nonlinear optics, polarization control, and sensing.
Radiation and the classical double copy for color charges
NASA Astrophysics Data System (ADS)
Goldberger, Walter D.; Ridgway, Alexander K.
2017-06-01
We construct perturbative classical solutions of the Yang-Mills equations coupled to dynamical point particles carrying color charge. By applying a set of color-to-kinematics replacement rules first introduced by Bern, Carrasco and Johansson, these are shown to generate solutions of d-dimensional dilaton gravity, which we also explicitly construct. Agreement between the gravity result and the gauge theory double copy implies a correspondence between non-Abelian particles and gravitating sources with dilaton charge. When the color sources are highly relativistic, dilaton exchange decouples, and the solutions we obtain match those of pure gravity. We comment on possible implications of our findings for the calculation of gravitational waveforms in astrophysical black hole collisions, directly from the computationally simpler gluon radiation in Yang-Mills theory.
Radiation of a nonrelativistic particle during its finite motion in a central field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnakov, B. M., E-mail: karnak@theor.mephi.ru; Korneev, Ph. A., E-mail: korneev@theor.mephi.ru; Popruzhenko, S. V.
The spectrum and expressions for the intensity of dipole radiation lines are obtained for a classical nonrelativistic charged particle that executes finite aperiodic motion in an arbitrary central field along a non-closed trajectory. It is shown that, in this case of a conditionally periodic motion, the radiation spectrum consists of two series of equally spaced lines. It is pointed out that, according to the correspondence principle, the appearance of two such series in the classical theory corresponds to the well-known selection rule |Δl| = 1 for dipole radiation in a central field in quantum theory, where l is the orbital angular momentum of the particle. The results obtained can be applied to the description of the radiation and the absorption of a classical collisionless electron plasma in nanoparticles irradiated by an intense laser field. As an example, the rate of collisionless absorption of electromagnetic wave energy in an equilibrium isotropic nanoplasma is calculated.
Statistical measures of Planck scale signal correlations in interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.; Kwon, Ohkyung
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of the positional cross-correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and with the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. As a result, simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
Quantum information theory of the Bell-state quantum eraser
NASA Astrophysics Data System (ADS)
Glick, Jennifer R.; Adami, Christoph
2017-01-01
Quantum systems can display particle- or wavelike properties, depending on the type of measurement that is performed on them. The Bell-state quantum eraser is an experiment that brings the duality to the forefront, as a single measurement can retroactively be made to measure particlelike or wavelike properties (or anything in between). Here we develop a unitary information-theoretic description of this and several related quantum measurement situations that sheds light on the trade-off between the quantum and classical features of the measurement. In particular, we show that both the coherence of the quantum state and the classical information obtained from it can be described using only quantum-information-theoretic tools and that those two measures satisfy an equality on account of the chain rule for entropies. The coherence information and the which-path information have simple interpretations in terms of state preparation and state determination and suggest ways to account for the relationship between the classical and the quantum world.
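For background, the entropic chain rule referred to above is the standard identity (stated generally here, not as the paper's specific result):

```latex
% Chain rule for (von Neumann or Shannon) entropies and the equivalent
% mutual-information identity; general relations, not this paper's derivation.
S(AB) = S(A) + S(B\mid A), \qquad\text{equivalently}\qquad
I(A{:}B) = S(A) + S(B) - S(AB).
```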
Ochoa-Gondar, O; Vila-Corcoles, A; Rodriguez-Blanco, T; Hospital, I; Salsench, E; Ansa, X; Saun, N
2014-04-01
This study compares the ability of two simpler severity rules (classical CRB65 vs. proposed CORB75) in predicting short-term mortality in elderly patients with community-acquired pneumonia (CAP). A population-based study was undertaken involving 610 patients ≥ 65 years old with radiographically confirmed CAP diagnosed between 2008 and 2011 in Tarragona, Spain (350 cases in the derivation cohort, 260 cases in the validation cohort). Severity rules were calculated at the time of diagnosis, and 30-day mortality was considered as the dependent variable. The area under the receiver operating characteristic curves (AUC) was used to compare the discriminative power of the severity rules. Eighty deaths (46 in the derivation and 34 in the validation cohorts) were observed, which gives a mortality rate of 13.1 % (15.6 % for hospitalized and 3.3 % for outpatient cases). After multivariable analyses, besides CRB (confusion, respiration rate ≥ 30/min, systolic blood pressure <90 mmHg or diastolic ≤ 60 mmHg), peripheral oxygen saturation (≤ 90 %) and age ≥ 75 years appeared to be associated with increasing 30-day mortality in the derivation cohort. The model showed adequate calibration for the derivation and validation cohorts. A modified CORB75 scoring system (similar to the classical CRB65, but adding oxygen saturation and increasing the age to 75 years) was constructed. The AUC statistics for predicting mortality in the derivation and validation cohorts were 0.79 and 0.82, respectively. In the derivation cohort, a CORB75 score ≥ 2 showed 78.3 % sensitivity and 65.5 % specificity for mortality (in the validation cohort, these were 82.4 and 71.7 %, respectively). The proposed CORB75 scoring system has good discriminative power in predicting short-term mortality among elderly people with CAP, which supports its use for severity assessment of these patients in primary care.
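A small sketch of the proposed score as described is given below, assuming one point per criterion (the abstract lists the criteria and the cut-off of ≥ 2 but not an explicit weighting, so the equal-weight scoring is an assumption).

```python
def corb75_score(confusion, resp_rate, sys_bp, dia_bp, spo2, age):
    """Sketch of the proposed CORB75 score, assuming one point per criterion
    (the abstract describes the criteria but not the exact weighting)."""
    return sum([
        bool(confusion),                 # C: confusion
        resp_rate >= 30,                 # R: respiratory rate >= 30/min
        sys_bp < 90 or dia_bp <= 60,     # B: low blood pressure
        spo2 <= 90,                      # O: peripheral O2 saturation <= 90%
        age >= 75,                       # 75: age >= 75 years
    ])

# hypothetical patient; the abstract uses a cut-off of score >= 2
score = corb75_score(confusion=False, resp_rate=32, sys_bp=110, dia_bp=70, spo2=88, age=81)
print(score, "high risk" if score >= 2 else "lower risk")
```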
Giraldo, Sergio I; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator for generality of the ornamentation rules.
Tandon, Disha; Haque, Mohammed Monzoorul; Mande, Sharmila S
2016-01-01
The nature of inter-microbial metabolic interactions defines the stability of microbial communities residing in any ecological niche. Deciphering these interaction patterns is crucial for understanding the mode/mechanism(s) through which an individual microbial community transitions from one state to another (e.g. from a healthy to a diseased state). Statistical correlation techniques have been traditionally employed for mining microbial interaction patterns from taxonomic abundance data corresponding to a given microbial community. In spite of their efficiency, these correlation techniques can capture only 'pair-wise interactions'. Moreover, their emphasis on statistical significance can potentially result in missing out on several interactions that are relevant from a biological standpoint. This study explores the applicability of one of the earliest association rule mining algorithms, i.e. the 'Apriori algorithm', for deriving 'microbial association rules' from the taxonomic profile of a given microbial community. The classical Apriori approach derives association rules by analysing patterns of co-occurrence/co-exclusion between various '(subsets of) features/items' across various samples. Using real-world microbiome data, the efficiency/utility of this rule mining approach in deciphering multiple (biologically meaningful) association patterns between 'subsets/subgroups' of microbes (constituting microbiome samples) is demonstrated. As an example, association rules derived from publicly available gut microbiome datasets indicate an association between a group of microbes (Faecalibacterium, Dorea, and Blautia) that are known to have mutualistic metabolic associations among themselves. Application of the rule mining approach on gut microbiomes (sourced from the Human Microbiome Project) further indicated similar microbial association patterns in gut microbiomes irrespective of the gender of the subjects. A Linux implementation of the Association Rule Mining (ARM) software (customised for deriving 'microbial association rules' from microbiome data) is freely available for download from the following link: http://metagenomics.atc.tcs.com/arm.
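For readers unfamiliar with the algorithm, the snippet below is a minimal Apriori sketch over toy "samples" of genera; real microbiome ARM tools (including the software linked above) additionally generate and filter rules by confidence and other measures.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori sketch: returns {frozenset(itemset): support} for all
    frequent itemsets.  `transactions` is a list of sets, e.g. the taxa
    detected in each sample.  Illustrative only; rule generation is omitted."""
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    # frequent 1-itemsets
    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]): support(frozenset([i])) for i in items
                if support(frozenset([i])) >= min_support}
    result, k = dict(frequent), 2
    while frequent:
        # candidate k-itemsets built from unions of frequent (k-1)-itemsets
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = {c: support(c) for c in candidates if support(c) >= min_support}
        result.update(frequent)
        k += 1
    return result

# toy samples (hypothetical genera per gut sample)
samples = [{"Faecalibacterium", "Dorea", "Blautia"},
           {"Faecalibacterium", "Blautia"},
           {"Dorea", "Blautia"},
           {"Faecalibacterium", "Dorea", "Blautia"}]
print(apriori(samples, min_support=0.5))
```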
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanzi, M.
1995-12-31
This paper, it is evident, is mostly a joke, based on the fascinating (but not original) consideration of any failure analysis as a detective story. Poe's tale is a perfect instrument (but surely not the only possible one) for playing the game. If any practical application of "The Rules of the Rue Morgue" may be expected, it is in the possibility of defining what leaves us unsatisfied when a Failure Analyst's result sounds out of tune. The reported Violations of the Dupin Postulate summarize the objections that the author would like to repeat for his own analyses, and for those cases in which he is required to review the work of others. On the constructive side, the proposed Rules, it has been repeatedly said, are common-sense indications and are surely not exhaustive on practical grounds. Skill, patience, luck and memory are also required, but, unfortunately, they are not always and not together available. It should be of the greatest aid for the Failure Analyst community, in any case, that each public report could point out how it obeyed a widely accepted set of failure analysis rules. Maybe -- why not? -- the Rules of the Rue Morgue. As a last consideration, to conclude the joke, the author invites his readers to open the original Poe's tale at the very beginning of the story, when Monsieur Dupin is introduced. Thinking of the Failure Analyst as a member of the excellent family of the Scientists, many of us will sigh and smile.
A recurrent self-organizing neural fuzzy inference network.
Juang, C F; Lin, C T
1999-01-01
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.
NASA Astrophysics Data System (ADS)
Jarkeh, Mohammad Reza; Mianabadi, Ameneh; Mianabadi, Hojjat
2016-10-01
Mismanagement and uneven distribution of water may lead to or increase conflict among countries. Allocation of water among trans-boundary river neighbours is a key issue in the utilization of shared water resources. Bankruptcy theory is a cooperative game-theoretic approach used when the total demand of the riparian states is larger than the total available water. In this study, we survey the application of seven Classical Bankruptcy Rules (CBRs), including Proportional (CBR-PRO), Adjusted Proportional (CBR-AP), Constrained Equal Awards (CBR-CEA), Constrained Equal Losses (CBR-CEL), Piniles (CBR-Piniles), Minimal Overlap (CBR-MO) and Talmud (CBR-Talmud), and four Sequential Sharing Rules (SSRs), including Proportional (SSR-PRO), Constrained Equal Awards (SSR-CEA), Constrained Equal Losses (SSR-CEL) and Talmud (SSR-Talmud), to the allocation of the Euphrates River among three riparian countries: Turkey, Syria and Iraq. However, there is no established method for identifying the most equitable allocation rule. Therefore, in this paper, a new method is established for choosing the allocation rule that seems more equitable than the others and most likely to satisfy the stakeholders. The results reveal that, based on the newly proposed model, CBR-AP appears to be the most equitable rule for allocating the Euphrates River water among Turkey, Syria and Iraq.
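The three simplest rules mentioned above have compact definitions; the sketch below implements CBR-PRO, CBR-CEA and CBR-CEL under the usual bankruptcy assumption that the estate does not exceed the sum of claims. The claim and estate numbers are placeholders, not the Euphrates figures used in the study.

```python
def proportional(claims, estate):
    """CBR-PRO: awards proportional to claims (assumes estate <= sum of claims)."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def cea(claims, estate):
    """CBR-CEA: each claimant receives min(claim, lam), with lam chosen by
    bisection so that the awards exactly exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) > estate:
            hi = lam
        else:
            lo = lam
    return [min(c, lam) for c in claims]

def cel(claims, estate):
    """CBR-CEL: the total loss (sum of claims minus estate) is shared as
    equally as possible, with no claimant losing more than their claim.
    Implemented via the standard CEA/CEL duality."""
    losses = cea(claims, sum(claims) - estate)
    return [c - l for c, l in zip(claims, losses)]

# hypothetical demands (km^3/yr) of three riparian states and the available flow
claims, estate = [20.0, 12.0, 16.0], 30.0
for name, rule in [("CBR-PRO", proportional), ("CBR-CEA", cea), ("CBR-CEL", cel)]:
    print(name, [round(x, 2) for x in rule(claims, estate)])
```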
Counter-ions at single charged wall: Sum rules.
Samaj, Ladislav
2013-09-01
For inhomogeneous classical Coulomb fluids in thermal equilibrium, like the jellium or the two-component Coulomb gas, there exists a variety of exact sum rules which relate the particle one-body and two-body densities. The necessary condition for these sum rules is that the Coulomb fluid possesses good screening properties, i.e. the particle correlation functions or the averaged charge inhomogeneity, say close to a wall, exhibit a short-range (usually exponential) decay. In this work, we study equilibrium statistical mechanics of an electric double layer with counter-ions only, i.e. a globally neutral system of equally charged point-like particles in the vicinity of a plain hard wall carrying a fixed uniform surface charge density of opposite sign. At large distances from the wall, the one-body and two-body counter-ion densities go to zero slowly according to the inverse-power law. In spite of the absence of screening, all known sum rules are shown to hold for two exactly solvable cases of the present system: in the weak-coupling Poisson-Boltzmann limit (in any spatial dimension larger than one) and at a special free-fermion coupling constant in two dimensions. This fact indicates an extended validity of the sum rules and provides a consistency check for reasonable theoretical approaches.
Design of a developmental dual fail operational redundant strapped down inertial measurement unit
NASA Technical Reports Server (NTRS)
Morrell, F. R.; Russell, J. G.
1980-01-01
An experimental redundant strap-down inertial measurement unit (RSDIMU) is being developed at NASA-Langley as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit consists of four two-degrees-of-freedom (TDOF) tuned-rotor gyros, and four TDOF pendulous accelerometers in a skewed and separable semi-octahedron array. The system will be used to examine failure detection and isolation techniques, redundancy management rules, and optimal threshold levels for various flight configurations. The major characteristics of the RSDIMU hardware and software design, and its use as a research tool are described.
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Is the Lorentz signature of the metric of spacetime electromagnetic in origin?
NASA Astrophysics Data System (ADS)
Itin, Yakov; Hehl, Friedrich W.
2004-07-01
We formulate a premetric version of classical electrodynamics in terms of the excitation H=( H, D) and the field strength F=( E, B). A local, linear, and symmetric spacetime relation between H and F is assumed. It yields, if electric/magnetic reciprocity is postulated, a Lorentzian metric of spacetime thereby excluding Euclidean signature (which is, nevertheless, discussed in some detail). Moreover, we determine the Dufay law (repulsion of like charges and attraction of opposite ones), the Lenz rule (the relative sign in Faraday's law), and the sign of the electromagnetic energy. In this way, we get a systematic understanding of the sign rules and the sign conventions in electrodynamics. The question in the title of the paper is answered affirmatively.
NASA Astrophysics Data System (ADS)
Couto, W. R. M.; Miwa, R. H.; Fazzio, A.
2017-10-01
Van der Waals (vdW) metal/semiconductor heterostructures have been investigated through first-principles calculations. We have considered the recently synthesized borophene (Mannix et al 2015 Science 350 1513), and the planar boron sheets (S1 and S2) (Feng et al 2016 Nat. Chem. 8 563) as the 2D metal layer, and the transition metal dichalcogenides (TMDCs) MoSe2 and WSe2 as the semiconductor monolayer. We find that the energetic stability of these 2D metal/semiconductor heterojunctions is mostly ruled by the vdW interactions; however, chemical interactions also take place in borophene/TMDC. The electronic charge transfer at the metal/semiconductor interface has been mapped, where we find a net charge transfer from the TMDCs to the boron sheets. Further electronic structure calculations reveal that the metal/semiconductor interfaces composed of the planar boron sheets S1 and S2 present a p-type Schottky barrier, which can be tuned to a p-type ohmic contact by an external electric field.
Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models.
Marsh, M C David
2017-01-06
Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, N_{vac}∼O(10^{272 000}), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.
Fuzzy logic-based flight control system design
NASA Astrophysics Data System (ADS)
Nho, Kyungmoon
The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.
NASA Astrophysics Data System (ADS)
de Astis, Silvia; Corradini, Irene; Morini, Raffaella; Rodighiero, Simona; Tomasoni, Romana; Lenardi, Cristina; Verderio, Claudia; Milani, Paolo; Matteoli, Michela
2013-10-01
Activation of glial cells, including astrocytes and microglia, has been implicated in the inflammatory responses underlying brain injury and neurodegenerative diseases including Alzheimer's and Parkinson's diseases. The classic activation state (M1) is characterized by high capacity to present antigens, high production of nitric oxide (NO) and reactive oxygen species (ROS) and proinflammatory cytokines. Classically activated cells act as potent effectors that drive the inflammatory response and may mediate detrimental effects on neural cells. The second phenotype (M2) is an alternative, apparently beneficial, activation state, more related to a fine tuning of inflammation, scavenging of debris, promotion of angiogenesis, tissue remodeling and repair. Specific environmental chemical signals are able to induce these different polarization states. We provide here evidence that nanostructured substrates are able, exclusively in virtue of their physical properties, to push microglia toward the proinflammatory activation phenotype, with an efficacy which reflects the graded nanoscale rugosity. The acquisition of a proinflammatory phenotype appears specific for microglia and not astrocytes, indicating that these two cell types, although sharing common innate immune responses, respond differently to external physical stimuli.
Volume weighting the measure of the universe from classical slow-roll expansion
NASA Astrophysics Data System (ADS)
Sloan, David; Silk, Joseph
2016-05-01
One of the most frustrating issues in early universe cosmology centers on how to reconcile the vast choice of universes in string theory and in its most plausible high energy sibling, eternal inflation, which jointly generate the string landscape with the fine-tuned and hence relatively small number of universes that have undergone a large expansion and can accommodate observers and, in particular, galaxies. We show that such observations are highly favored for any system whereby physical parameters are distributed at a high energy scale, due to the conservation of the Liouville measure and the gauge nature of volume, asymptotically approaching a period of large isotropic expansion characterized by w =-1 . Our interpretation predicts that all observational probes for deviations from w =-1 in the foreseeable future are doomed to failure. The purpose of this paper is not to introduce a new measure for the multiverse, but rather to show how what is perhaps the most natural and well-known measure, volume weighting, arises as a consequence of the conservation of the Liouville measure on phase space during the classical slow-roll expansion.
Classical and quantum stability in putative landscapes
Dine, Michael
2017-01-18
Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.
Classical and quantum stability in putative landscapes
NASA Astrophysics Data System (ADS)
Dine, Michael
2017-01-01
Landscape analyses often assume the existence of large numbers of fields, N , with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N , eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N ; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. We consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.
A networked voting rule for democratic representation
Brigatti, Edgardo; Moreno, Yamir
2018-01-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of a few individuals with very high connectivity, which can have a marginal negative effect on the committee selection process. PMID:29657817
Learning in Artificial Neural Systems
NASA Technical Reports Server (NTRS)
Matheus, Christopher J.; Hohensee, William E.
1987-01-01
This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
Syntactic processing in the absence of awareness and semantics.
Hung, Shao-Min; Hsieh, Po-Jang
2015-10-01
The classical view that multistep rule-based operations require consciousness has recently been challenged by findings that both multiword semantic processing and multistep arithmetic equations can be processed unconsciously. It remains unclear, however, whether pure rule-based cognitive processes can occur unconsciously in the absence of semantics. Here, after presenting 2 words consciously, we suppressed the third with continuous flash suppression. First, we showed that the third word in the subject-verb-verb format (syntactically incongruent) broke suppression significantly faster than the third word in the subject-verb-object format (syntactically congruent). Crucially, the same effect was observed even with sentences composed of pseudowords (pseudo subject-verb-adjective vs. pseudo subject-verb-object) without any semantic information. This is the first study to show that syntactic congruency can be processed unconsciously in the complete absence of semantics. Our findings illustrate how abstract rule-based processing (e.g., syntactic categories) can occur in the absence of visual awareness, even when deprived of semantics. (c) 2015 APA, all rights reserved.
Somato-dendritic Synaptic Plasticity and Error-backpropagation in Active Dendrites
Schiess, Mathieu; Urbanczik, Robert; Senn, Walter
2016-01-01
In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials. PMID:26841235
Expectations for inflationary observables: simple or natural?
NASA Astrophysics Data System (ADS)
Musoke, Nathan; Easther, Richard
2017-12-01
We describe the general inflationary dynamics that can arise with a single, canonically coupled field where the inflaton potential is a fourth-order polynomial. This scenario yields a wide range of combinations of the empirical spectral observables, n_s, r and α_s. However, not all combinations are possible and next-generation cosmological experiments have the ability to rule out all inflationary scenarios based on this potential. Further, we construct inflationary priors for this potential based on physically motivated choices for its free parameters. These can be used to determine the degree of tuning associated with different combinations of n_s, r and α_s and will facilitate treatments of the inflationary model selection problem. Finally, we comment on the implications of these results for the naturalness of the overall inflationary paradigm. We argue that ruling out all simple, renormalizable potentials would not necessarily imply that the inflationary paradigm itself was unnatural, but that this eventuality would increase the importance of building inflationary scenarios in the context of broader paradigms of ultra-high energy physics.
Strain-engineered diffusive atomic switching in two-dimensional crystals
Kalikka, Janne; Zhou, Xilin; Dilcher, Eric; Wall, Simon; Li, Ju; Simpson, Robert E.
2016-01-01
Strain engineering is an emerging route for tuning the bandgap, carrier mobility, chemical reactivity and diffusivity of materials. Here we show how strain can be used to control atomic diffusion in van der Waals heterostructures of two-dimensional (2D) crystals. We use strain to increase the diffusivity of Ge and Te atoms that are confined to 5 Å thick 2D planes within an Sb2Te3–GeTe van der Waals superlattice. The number of quintuple Sb2Te3 2D crystal layers dictates the strain in the GeTe layers and consequently its diffusive atomic disordering. By identifying four critical rules for the superlattice configuration we lay the foundation for a generalizable approach to the design of switchable van der Waals heterostructures. As Sb2Te3–GeTe is a topological insulator, we envision these rules enabling methods to control spin and topological properties of materials in reversible and energy efficient ways. PMID:27329563
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
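As a rough illustration of region growing at the superpixel level, the sketch below greedily absorbs neighbouring superpixels whose mean intensity stays close to the running region mean. It deliberately omits the learned ray-feature shape prior, the multi-object handling and the graph-cut optimization described above; the function name, the adjacency construction and the intensity threshold are all assumptions made for the example.

import numpy as np
from collections import deque

def superpixel_region_grow(labels, image, seed_label, intensity_tol=0.1):
    """Greedy region growing at the superpixel level.

    labels : 2D int array of superpixel ids (from any oversegmentation)
    image  : 2D float array of intensities in [0, 1]
    Grows from `seed_label`, absorbing neighbouring superpixels whose mean
    intensity is within `intensity_tol` of the running region mean.
    (Sketch only -- no learned shape prior, no graph-cut refinement.)"""
    ids = np.unique(labels)
    means = {i: image[labels == i].mean() for i in ids}
    sizes = {i: int((labels == i).sum()) for i in ids}

    # build superpixel adjacency from horizontally/vertically touching pixels
    adj = {i: set() for i in ids}
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                adj[u].add(v)
                adj[v].add(u)

    region = {seed_label}
    region_mean, region_size = means[seed_label], sizes[seed_label]
    frontier = deque(adj[seed_label])
    while frontier:
        cand = frontier.popleft()
        if cand in region:
            continue
        if abs(means[cand] - region_mean) <= intensity_tol:
            # accept the superpixel and update the running region statistics
            region_mean = (region_mean * region_size + means[cand] * sizes[cand]) / (region_size + sizes[cand])
            region_size += sizes[cand]
            region.add(cand)
            frontier.extend(adj[cand] - region)
    return np.isin(labels, list(region))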
Collisional excitation of HC3N by para- and ortho-H2
NASA Astrophysics Data System (ADS)
Faure, Alexandre; Lique, François; Wiesenfeld, Laurent
2016-08-01
New calculations for rotational excitation of cyanoacetylene by collisions with hydrogen molecules are performed to include the lowest 38 rotational levels of HC3N and kinetic temperatures to 300 K. Calculations are based on the interaction potential of Wernli et al. whose accuracy is checked against spectroscopic measurements of the HC3N-H2 complex. The quantum coupled-channel approach is employed and complemented by quasi-classical trajectory calculations. Rate coefficients for ortho-H2 are provided for the first time. Hyperfine resolved rate coefficients are also deduced. Collisional propensity rules are discussed and comparisons between quantum and classical rate coefficients are presented. This collisional data should prove useful in interpreting HC3N observations in the cold and warm ISM, as well as in protoplanetary discs.
[The succession of the Hippocratic corpus in modern Greece].
Sugano, Yukiko; Honda, Katsuya
2010-03-01
This paper examines how the Hippocratic corpus was passed on during the Enlightenment of modern Greece, introducing part of the latest Greek research on the history of medicine. Although classical studies at large had stagnated at the time under the rule of the Ottoman Empire, with the movement toward independence in the second half of the 18th century the Greeks raised their consciousness of the fact that they were the successors to their ancestral great achievements. From that time classical studies, including the history of medicine, had been activated. From some medical dissertations and books written by Greek doctors or researchers of those days, we will recognize that they made efforts to deepen the substance of modern Greek medicine, seeking the principles of medical practice from the ancient heritage.
scoringRules - A software package for probabilistic model evaluation
NASA Astrophysics Data System (ADS)
Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian
2016-04-01
Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow alternative models to be compared, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, Bayesian forecasts produced via Markov Chain Monte Carlo take this form. The scoringRules package thereby provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
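The scoringRules package itself is written for R; purely as an illustration of what a proper scoring rule computes, here is a Python sketch of the continuous ranked probability score (CRPS) for the two situations mentioned above: a forecast given in closed form (a normal distribution) and a forecast given only through simulation draws. The closed-form Gaussian expression is the standard one from the scoring-rule literature; the sample-based estimator and the example values are illustrative, and scipy is assumed to be available.

import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast N(mu, sigma^2) and observation y.
    Lower is better; a proper scoring rule for the full predictive distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def crps_sample(sample, y):
    """Sample-based CRPS estimate (e.g. for MCMC draws):
    mean |X - y| minus half the mean absolute difference between draws."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    term1 = np.mean(np.abs(x - y))
    i = np.arange(1, n + 1)
    mean_abs_diff = 2.0 / n**2 * np.sum(x * (2 * i - n - 1))  # sorted-sample identity
    return term1 - 0.5 * mean_abs_diff

# a sharper forecast centred on the truth scores better than a wider one
print(crps_normal(0.0, 1.0, 0.2), crps_normal(0.0, 2.0, 0.2))
print(crps_sample(np.random.default_rng(0).normal(0.0, 1.0, 5000), 0.2))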
Cellular Automata Generalized To An Inferential System
NASA Astrophysics Data System (ADS)
Blower, David J.
2007-11-01
Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
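As a concrete companion to the abstract above, the sketch below steps an elementary one-dimensional cellular automaton and also writes the same update rule as a deterministic conditional probability table (every entry 0 or 1), which is the form that slots into joint-probability-table reasoning. The rule number and seed configuration are just an example; the probabilistic generalization itself is not implemented here.

import numpy as np
from itertools import product

def ca_step(cells, rule_number):
    """One synchronous update of an elementary (radius-1, binary) cellular automaton.
    The rule number's binary expansion assigns the next state to each of the
    8 possible (left, centre, right) neighbourhoods, Wolfram-style."""
    rule_bits = [(rule_number >> k) & 1 for k in range(8)]
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    neighbourhood_index = 4 * left + 2 * cells + 1 * right
    return np.array([rule_bits[i] for i in neighbourhood_index], dtype=int)

def rule_as_conditional_table(rule_number):
    """The same rule written as P(next = 1 | left, centre, right): a deterministic
    conditional probability table containing only 0s and 1s, ready to be combined
    with Bayes's Theorem and joint probability tables."""
    table = {}
    for left, centre, right in product([0, 1], repeat=3):
        idx = 4 * left + 2 * centre + 1 * right
        table[(left, centre, right)] = float((rule_number >> idx) & 1)
    return table

# usage: evolve rule 110 from a single seed cell for a few steps
state = np.zeros(31, dtype=int)
state[15] = 1
for _ in range(5):
    print(''.join('#' if c else '.' for c in state))
    state = ca_step(state, 110)
print(rule_as_conditional_table(110)[(1, 1, 0)])  # -> 1.0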
Rule-based spatial modeling with diffusing, geometrically constrained molecules.
Gruenert, Gerd; Ibrahim, Bashar; Lenser, Thorsten; Lohel, Maiko; Hinze, Thomas; Dittrich, Peter
2010-06-07
We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Our simulations show that the obtained dynamics can be fundamentally different from those simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. We conclude that our approach provides a new general framework filling a gap between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid self-assembly.
Rule-based spatial modeling with diffusing, geometrically constrained molecules
2010-01-01
Background We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Results Our simulations show that the obtained dynamics can be fundamentally different from those simulations that use classical reaction-diffusion approaches like Partial Differential Equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a Differential Equation formalism is potentially unstable when geometry is considered, and vice versa. Conclusions We conclude that our approach provides a new general framework filling a gap between approaches with no or rigid spatial representation like Partial Differential Equations and specialized coarse-grained spatial simulation systems like those for DNA or virus capsid self-assembly. PMID:20529264
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
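The abstract does not give the polynomial used in the nonlinear gain algorithm; the sketch below shows one plausible way such a third-order polynomial gain could be constructed, by requiring the scaled command to reach the motion limit with zero slope at the largest expected input so that it never exceeds the platform envelope. The constraints, limits and numbers here are illustrative assumptions, not the values from this project.

def cubic_cueing_gain(motion_limit, max_input):
    """Return a third-order polynomial gain p(u) = a*u + b*u**3 chosen so that
    p(max_input) == motion_limit and p'(max_input) == 0, i.e. the command
    approaches the simulator's motion limit smoothly and never exceeds it
    for |u| <= max_input. (Illustrative constraints, not the project's design.)"""
    a = 3.0 * motion_limit / (2.0 * max_input)
    b = -motion_limit / (2.0 * max_input ** 3)

    def scale(u):
        u = max(-max_input, min(max_input, u))  # clamp out-of-envelope inputs
        return a * u + b * u ** 3

    return scale

# usage: a +/-10 m/s^2 aircraft acceleration envelope mapped into a +/-4 m/s^2 platform limit
scale = cubic_cueing_gain(motion_limit=4.0, max_input=10.0)
for u in (1.0, 5.0, 10.0, 12.0):
    print(u, round(scale(u), 3))

The design choice in this sketch is that small inputs keep their cue shape (nearly linear scaling) while large inputs are compressed, which is the general intent described above.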
Pure single-photon emission from In(Ga)As QDs in a tunable fiber-based external mirror microcavity
NASA Astrophysics Data System (ADS)
Herzog, T.; Sartison, M.; Kolatschek, S.; Hepp, S.; Bommer, A.; Pauly, C.; Mücklich, F.; Becher, C.; Jetter, M.; Portalupi, S. L.; Michler, P.
2018-07-01
Cavity quantum electrodynamics is widely used in many solid-state systems for improving quantum emitter performances or accessing specific physical regimes. For these purposes it is fundamental that the non-classical emitter, like a quantum dot or an NV center, matches the cavity mode, both spatially and spectrally. In the present work, we couple single photons stemming from In(Ga)As quantum dots into an open fiber-based Fabry–Pérot cavity. Such a system allows for reaching an optimal spatial and spectral matching for every present emitter and every optical transition, by precisely tuning the cavity geometry. In addition to that, the capability of deterministically and repeatedly locating a single quantum dot enables to compare the behavior of the quantum emitter inside the cavity with respect to before it is placed inside. The presented open-cavity system shows full flexibility by precisely tuning in resonance different QD transitions, namely excitons, biexcitons and trions. A measured Purcell enhancement of 4.4 ± 0.5 is obtained with a cavity finesse of about 140, while still demonstrating a single-photon source with vanishing multi-photon emission probability.
Transverse fields to tune an Ising-nematic quantum phase transition
NASA Astrophysics Data System (ADS)
Maharaj, Akash V.; Rosenberg, Elliott W.; Hristov, Alexander T.; Berg, Erez; Fernandes, Rafael M.; Fisher, Ian R.; Kivelson, Steven A.
2017-12-01
The paradigmatic example of a continuous quantum phase transition is the transverse field Ising ferromagnet. In contrast to classical critical systems, whose properties depend only on symmetry and the dimension of space, the nature of a quantum phase transition also depends on the dynamics. In the transverse field Ising model, the order parameter is not conserved, and increasing the transverse field enhances quantum fluctuations until they become strong enough to restore the symmetry of the ground state. Ising pseudospins can represent the order parameter of any system with a twofold degenerate broken-symmetry phase, including electronic nematic order associated with spontaneous point-group symmetry breaking. Here, we show for the representative example of orbital-nematic ordering of a non-Kramers doublet that an orthogonal strain or a perpendicular magnetic field plays the role of the transverse field, thereby providing a practical route for tuning appropriate materials to a quantum critical point. While the transverse fields are conjugate to seemingly unrelated order parameters, their nontrivial commutation relations with the nematic order parameter, which can be represented by a Berry-phase term in an effective field theory, intrinsically intertwine the different order parameters.
Strategies for Optimal MAC Parameters Tuning in IEEE 802.15.6 Wearable Wireless Sensor Networks.
Alam, Muhammad Mahtab; Ben Hamida, Elyes
2015-09-01
Wireless body area networks (WBANs) have been instrumental in revolutionizing the classical health-care system. Recently, a number of WBAN applications have emerged that push the limits of existing solutions. In particular, the IEEE 802.15.6 standard provides great flexibility, provisions and capabilities to deal with emerging applications. In this paper, we investigate the application-specific throughput analysis by fine-tuning the physical (PHY) and medium access control (MAC) parameters of the IEEE 802.15.6 standard. Based on PHY characterizations in narrow band, at the MAC layer, carrier sense multiple access collision avoidance (CSMA/CA) and scheduled access protocols are extensively analyzed. It is concluded that the IEEE 802.15.6 standard can satisfy the throughput requirements of most WBAN applications, achieving a maximum of 680 Kbps. However, for emerging applications that require high-quality audio or video transmission, the standard is not able to meet the constraints. Moreover, delay, energy efficiency and successful packet reception are considered as key performance metrics for comparing the MAC protocols. The CSMA/CA protocol provides the best results for meeting the delay constraints of medical and non-medical WBAN applications, whereas the scheduled access approach performs very well in both energy efficiency and packet reception ratio.
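As a back-of-the-envelope companion to the throughput analysis above, the sketch below estimates MAC-level throughput as useful payload divided by the total airtime of one data/acknowledgement exchange. All of the parameter values are placeholders chosen for illustration and are not taken from the IEEE 802.15.6 specification or from this paper's PHY/MAC characterization.

def saturation_throughput(payload_bits, phy_rate_bps, overhead_bits, ifs_s, ack_s):
    """Back-of-the-envelope MAC throughput: useful payload divided by the total
    on-air time of one data exchange (data frame + inter-frame spacing + ACK).
    All parameters are placeholders, not IEEE 802.15.6 values."""
    t_data = (payload_bits + overhead_bits) / phy_rate_bps
    return payload_bits / (t_data + ifs_s + ack_s)

# illustrative numbers only: 256-byte payload, 1 Mbps PHY, 8 bytes of header/FCS,
# a guessed 50 us inter-frame spacing and 100 us for the acknowledgement
print(saturation_throughput(256 * 8, 1.0e6, 8 * 8, 50e-6, 100e-6) / 1e3, "kbps (rough upper bound)")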
Strain-controlled electrocatalysis on multimetallic nanomaterials
NASA Astrophysics Data System (ADS)
Luo, Mingchuan; Guo, Shaojun
2017-11-01
Electrocatalysis is crucial for the development of clean and renewable energy technologies, which may reduce our reliance on fossil fuels. Multimetallic nanomaterials serve as state-of-the-art electrocatalysts as a consequence of their unique physico-chemical properties. One method of enhancing the electrocatalytic performance of multimetallic nanomaterials is to tune or control the surface strain of the nanomaterials, and tremendous progress has been made in this area in the past decade. In this Review, we summarize advances in the introduction, tuning and quantification of strain in multimetallic nanocrystals to achieve more efficient energy conversion by electrocatalysis. First, we introduce the concept of strain and its correlation with other key physico-chemical properties. Then, using the electrocatalytic reduction of oxygen as a model reaction, we discuss the underlying mechanisms behind the strain-adsorption-reactivity relationship based on combined classical theories and models. We describe how this knowledge can be harnessed to design multimetallic nanocrystals with optimized strain to increase the efficiency of oxygen reduction. In particular, we highlight the unexpectedly beneficial (and previously overlooked) role of tensile strain from multimetallic nanocrystals in improving electrocatalysis. We conclude by outlining the challenges and offering our perspectives on the research directions in this burgeoning field.
The Essential Role of Primate Orbitofrontal Cortex in Conflict-Induced Executive Control Adjustment
Buckley, Mark J.; Tanaka, Keiji
2014-01-01
Conflict in information processing evokes trial-by-trial behavioral modulations. Influential models suggest that adaptive tuning of executive control, mediated by mid-dorsal lateral prefrontal cortex (mdlPFC) and anterior cingulate cortex (ACC), underlies these modulations. However, mdlPFC and ACC are parts of distributed brain networks including orbitofrontal cortex (OFC), posterior cingulate cortex (PCC), and superior-dorsal lateral prefrontal cortex (sdlPFC). Contributions of these latter areas in adaptive tuning of executive control are unknown. We trained monkeys to perform a matching task in which they had to resolve the conflict between two behavior-guiding rules. Here, we report that bilateral lesions in OFC, but not in PCC or sdlPFC, impaired selection between these competing rules. In addition, the behavioral adaptation that is normally induced by experiencing conflict disappeared in OFC-lesioned, but remained normal in PCC-lesioned or sdlPFC-lesioned monkeys. Exploring underlying neuronal processes, we found that the activity of neurons in OFC represented the conflict between behavioral options independent from the other aspects of the task. Responses of OFC neurons to rewards also conveyed information of the conflict level that the monkey had experienced along the course to obtain the reward. Our findings indicate dissociable functions for five closely interconnected cortical areas suggesting that OFC and mdlPFC, but not PCC or sdlPFC or ACC, play indispensable roles in conflict-dependent executive control of on-going behavior. Both mdlPFC and OFC support detection of conflict and its integration with the task goal, but in contrast to mdlPFC, OFC does not retain the necessary information for conflict-induced modulation of future decisions. PMID:25122901
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to the Genetic Algorithm (GA). The second part first studies how the performance of an integer order PID controller deteriorates when it is implemented with lossy capacitors in its analog realization, and then shows that the lossy capacitors can be effectively modeled by fractional order terms. A novel GA based method is then proposed to tune the controller parameters such that the original performance is retained even when the controller is realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA based method is proposed which shows how equivalent integer order PID controllers can be obtained that give performance similar to that of the fractional order PID controllers, thereby removing the complexity involved in the implementation of the latter. It is shown with extensive simulation results that the equivalent integer order PID controllers largely retain the robustness and iso-damping properties of the original fractional order PID controllers. Simulation results also show that the equivalent integer order PID controllers are more robust than the conventional Ziegler-Nichols tuned PID controllers.
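To make the GA-based tuning step concrete, the following Python sketch evolves the gains of an ordinary integer-order PID controller against a toy first-order plant, using the integral of squared error of a unit step response as the fitness. It is a simplified stand-in for illustration only: the plant, the GA operators and all numerical values are assumptions, and neither the fractional-order terms nor the lossy-capacitor model from the paper is represented.

import numpy as np

rng = np.random.default_rng(1)

def step_response_cost(gains, dt=0.01, t_end=5.0, tau=0.5):
    """Integral-squared-error of a unit step response: a PID controller driving a
    toy first-order plant dy/dt = (-y + u)/tau, integrated with forward Euler."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        cost += err * err * dt
        prev_err = err
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e6                                    # heavily penalize unstable gain sets
    return cost

def ga_tune(pop_size=30, generations=40, lo=0.0, hi=10.0):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([step_response_cost(ind) for ind in pop])
        def tournament():
            i, j = rng.integers(0, pop_size, size=2)
            return pop[i] if fitness[i] < fitness[j] else pop[j]
        new_pop = [pop[fitness.argmin()].copy()]          # elitism: keep the best
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            w = rng.uniform(0.0, 1.0, size=3)
            child = w * p1 + (1.0 - w) * p2               # blend crossover
            child += rng.normal(0.0, 0.2, size=3)         # Gaussian mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fitness = np.array([step_response_cost(ind) for ind in pop])
    return pop[fitness.argmin()], fitness.min()

gains, cost = ga_tune()
print("kp, ki, kd =", np.round(gains, 3), " ISE =", round(float(cost), 4))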
The neural code for face orientation in the human fusiform face area.
Ramírez, Fernando M; Cichy, Radoslaw M; Allefeld, Carsten; Haynes, John-Dylan
2014-09-03
Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA. Copyright © 2014 the authors 0270-6474/14/3412155-13$15.00/0.
Aniseikonia quantification: error rate of rule of thumb estimation.
Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P
1999-01-01
To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (21%) were equal (i.e., 1% or less difference); 16 (67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (73%) were equal (1% or less); 10 (16%) were greater; and 7 (11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.
Invariance of the bit error rate in the ancilla-assisted homodyne detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshida, Yuhsuke; Takeoka, Masahiro; Sasaki, Masahide
2010-11-15
We investigate the minimum achievable bit error rate of the discrimination of binary coherent states with the help of arbitrary ancillary states. We adopt homodyne measurement with a common phase of the local oscillator and classical feedforward control. After one ancillary state is measured, its outcome is fed forward to the preparation of the next ancillary state and the tuning of the next mixing with the signal. It is shown that the minimum bit error rate of the system is invariant under the following operations: feedforward control, deformations, and introduction of any ancillary state. We also discuss the possible generalization of the homodyne detection scheme.
Mirrorless Optical Parametric Oscillation with Tunable Threshold in Cold Atoms.
Mei, Yefeng; Guo, Xianxin; Zhao, Luwei; Du, Shengwang
2017-10-13
We report the demonstration of a mirrorless optical parametric oscillator with a tunable threshold in laser-cooled atoms with four-wave mixing (FWM) using electromagnetically induced transparency. Driven by two classical laser beams, the generated Stokes and anti-Stokes fields counterpropagate and build up efficient intrinsic feedback through the nonlinear FWM process. This feedback does not involve any cavity or spatially distributed microstructures. We observe the transition of photon correlation properties from the biphoton quantum regime (below the threshold) to the oscillation regime (above the threshold). The pump threshold can be tuned by varying the operating parameters. We achieve the oscillation with a threshold as low as 15 μW.
Narrow-band tunable terahertz emission from ferrimagnetic Mn3-xGa thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awari, N.; University of Groningen, 9747 AG Groningen; Kovalev, S., E-mail: s.kovalev@hzdr.de, E-mail: c.fowley@hzdr.de, E-mail: rodek@tcd.ie
2016-07-18
Narrow-band terahertz emission from coherently excited spin precession in metallic ferrimagnetic Mn3-xGa Heusler alloy nanofilms has been observed. The efficiency of the emission, per nanometer film thickness, is comparable to or higher than that of classical laser-driven terahertz sources based on optical rectification. The center frequency of the emission from the films can be tuned precisely via the film composition in the range of 0.20–0.35 THz, making this type of metallic film a candidate for efficient on-chip terahertz emitters. Terahertz emission spectroscopy is furthermore shown to be a sensitive probe of magnetic properties of ultra-thin films.
NASA Astrophysics Data System (ADS)
Nazarenko, Sergey
2015-07-01
Wave turbulence is the statistical mechanics of random waves with a broadband spectrum interacting via non-linearity. To understand its difference from non-random well-tuned coherent waves, one could compare the sound of thunder to a piece of classical music. Wave turbulence is surprisingly common and important in a great variety of physical settings, starting with the most familiar ocean waves to waves at quantum scales or to much longer waves in astrophysics. We will provide a basic overview of the wave turbulence ideas, approaches and main results emphasising the physics of the phenomena and using qualitative descriptions avoiding, whenever possible, involved mathematical derivations. In particular, dimensional analysis will be used for obtaining the key scaling solutions in wave turbulence - Kolmogorov-Zakharov (KZ) spectra.
Designed tools for analysis of lithography patterns and nanostructures
NASA Astrophysics Data System (ADS)
Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann
2017-03-01
We introduce a set of designed tools for the analysis of lithography patterns and nanostructures. The classical metrological analysis of these objects has the drawbacks of being time consuming, requiring manual tuning and lacking robustness and user friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi-automatic, automatic and machine-learning enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating the access to knowledge and hence speed up the implementation in product lines.
Tuning the Curie temperature of FeCo compounds by tetragonal distortion
NASA Astrophysics Data System (ADS)
Jakobsson, A.; Şaşıoǧlu, E.; Mavropoulos, Ph.; Ležaić, M.; Sanyal, B.; Bihlmayer, G.; Blügel, S.
2013-09-01
Combining density-functional theory calculations with a classical Monte Carlo method, we show that for B2-type FeCo compounds, tetragonal distortion gives rise to a strong reduction of the Curie temperature TC. The TC monotonically decreases from 1575 K (for c/a=1) to 940 K (for c/a=√2). We find that the nearest neighbor Fe-Co exchange interaction is sufficient to explain the c/a behavior of the TC. Combination of high magnetocrystalline anisotropy energy with a moderate TC value suggests tetragonal FeCo grown on the Rh substrate with c/a=1.24 to be a promising material for heat-assisted magnetic recording applications.
Extension of the classical classification of β-turns
de Brevern, Alexandre G.
2016-01-01
The functional properties of a protein primarily depend on its three-dimensional (3D) structure. These properties have classically been assigned, visualized and analysed on the basis of protein secondary structures. The β-turn is the third most important secondary structure after helices and β-strands. β-turns have been classified according to the values of the dihedral angles φ and ψ of the central residue. Conventionally, eight different types of β-turns have been defined, whereas those that cannot be defined are classified as type IV β-turns. This classification remains the most widely used. Nonetheless, the miscellaneous type IV β-turns represent 1/3rd of β-turn residues. An unsupervised specific clustering approach was designed to search for recurrent new turns in the type IV category. The classical rules of β-turn type assignment were central to the approach. The four most frequently occurring clusters defined the new β-turn types. Unexpectedly, these types, designated IV1, IV2, IV3 and IV4, represent half of the type IV β-turns and occur more frequently than many of the previously established types. These types show convincing particularities, in terms of both structures and sequences that allow for the classical β-turn classification to be extended for the first time in 25 years. PMID:27627963
Extension of the classical classification of β-turns.
de Brevern, Alexandre G
2016-09-15
The functional properties of a protein primarily depend on its three-dimensional (3D) structure. These properties have classically been assigned, visualized and analysed on the basis of protein secondary structures. The β-turn is the third most important secondary structure after helices and β-strands. β-turns have been classified according to the values of the dihedral angles φ and ψ of the central residue. Conventionally, eight different types of β-turns have been defined, whereas those that cannot be defined are classified as type IV β-turns. This classification remains the most widely used. Nonetheless, the miscellaneous type IV β-turns represent one-third of β-turn residues. An unsupervised specific clustering approach was designed to search for recurrent new turns in the type IV category. The classical rules of β-turn type assignment were central to the approach. The four most frequently occurring clusters defined the new β-turn types. Unexpectedly, these types, designated IV1, IV2, IV3 and IV4, represent half of the type IV β-turns and occur more frequently than many of the previously established types. These types show convincing particularities, in terms of both structures and sequences that allow for the classical β-turn classification to be extended for the first time in 25 years.
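To illustrate the kind of dihedral-angle rules referred to above, here is a small Python sketch that assigns a turn type by comparing the (φ, ψ) angles of the two central residues with canonical values for a handful of classical types. The canonical angles listed are the commonly quoted textbook values and should be treated as assumptions, and the nearest-centroid match is a simplification of the tolerance-window rules used in the actual classification (and in the clustering extension described in the abstract).

import math

# commonly quoted canonical (phi2, psi2, phi3, psi3) angles in degrees for some
# classical beta-turn types (treat these as assumptions; the formal definition
# uses tolerance windows around such values rather than a nearest match)
CANONICAL_TURNS = {
    "I":    (-60.0,  -30.0,  -90.0,    0.0),
    "I'":   ( 60.0,   30.0,   90.0,    0.0),
    "II":   (-60.0,  120.0,   80.0,    0.0),
    "II'":  ( 60.0, -120.0,  -80.0,    0.0),
    "VIII": (-60.0,  -30.0, -120.0,  120.0),
}

def angular_diff(a, b):
    """Smallest absolute difference between two angles in degrees (wraps at 180)."""
    d = (a - b + 180.0) % 360.0 - 180.0
    return abs(d)

def nearest_turn_type(phi2, psi2, phi3, psi3):
    """Assign the closest canonical turn type by summed angular distance.
    Simplified nearest-centroid stand-in for the tolerance-based assignment rules."""
    observed = (phi2, psi2, phi3, psi3)
    best_type, best_dist = None, math.inf
    for turn_type, canon in CANONICAL_TURNS.items():
        dist = sum(angular_diff(o, c) for o, c in zip(observed, canon))
        if dist < best_dist:
            best_type, best_dist = turn_type, dist
    return best_type, best_dist

print(nearest_turn_type(-64.0, -27.0, -88.0, 5.0))   # close to type I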
NASA Astrophysics Data System (ADS)
Divi, Srikanth; Agrahari, Gargi; Ranjan Kadulkar, Sanket; Kumar, Sanjeet; Chatterjee, Abhijit
2017-12-01
Capturing segregation behavior in metal alloy nanoparticles accurately using computer simulations is contingent upon the availability of high-fidelity interatomic potentials. The embedded atom method (EAM) potential is a widely trusted interatomic potential form used with pure metals and their alloys. When limited experimental data is available, the A-B EAM cross-interaction potential for metal alloys AxB1-x is often constructed from pure metal A and B potentials by employing a pre-defined ‘mixing rule’ without any adjustable parameters. While this approach is convenient, we show that for AuPt, NiPt, AgAu, AgPd, AuNi, NiPd, PtPd and AuPd such mixing rules may not even yield the correct alloy properties, e.g., heats of mixing, that are closely related to the segregation behavior. A general theoretical formulation based on scaling invariance arguments is introduced that addresses this issue by tuning the mixing rule to better describe alloy properties. Starting with an existing pure metal EAM potential that is used extensively in literature, we find that the mixing rule fitted to heats of mixing for metal solutions usually provides good estimates of segregation energies, lattice parameters and cohesive energy, as well as the equilibrium distribution of metals within a nanoparticle using Monte Carlo simulations. While the tunable mixing rule generally performs better than non-adjustable mixing rules, its use may still require some caution. For example, in the Pt-Ni system we find that the segregation behavior can deviate from the experimentally observed one at Ni-rich compositions. Despite this, the overall results suggest that the same approach may be useful for developing improved cross-potentials with other existing pure metal EAM potentials as well. As a further test of our approach, the mixing rule estimated from binary data is used to calculate heats of mixing in AuPdPt, AuNiPd, AuPtNi, AgAuPd and NiPtPd. Excellent agreement with experiments is observed for AuPdPt.
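The abstract does not state the functional form of the tunable mixing rule, so the sketch below only illustrates the general idea with toy ingredients: a one-parameter blend of two pure pair potentials (Lennard-Jones stand-ins, not EAM embedding functions) whose parameter is tuned by bisection until a crude dimer-based "heat of mixing" hits a target value. Every functional form, parameter and target here is an assumption made for illustration, not the formulation or data of the paper.

import numpy as np

def lj(r, eps, sigma):
    """Lennard-Jones pair energy -- a stand-in for the pure-metal pair terms."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# illustrative pure-"metal" parameters (NOT fitted EAM potentials)
EPS_A, SIG_A = 0.40, 2.6
EPS_B, SIG_B = 0.60, 2.8
R_GRID = np.linspace(2.0, 6.0, 2000)

def dimer_energy(pair_potential):
    """Minimum of the pair energy over separation: a toy bond energy."""
    return pair_potential(R_GRID).min()

def cross_potential(t):
    """One-parameter mixing rule: a weighted blend of the A-A and B-B pair terms.
    t = 0 recovers the plain arithmetic average; t is the tunable knob."""
    return lambda r: 0.5 * (1.0 + t) * lj(r, EPS_A, SIG_A) + 0.5 * (1.0 - t) * lj(r, EPS_B, SIG_B)

def toy_heat_of_mixing(t):
    """Energy of an A-B 'bond' relative to the average of A-A and B-B bonds."""
    e_aa = dimer_energy(lambda r: lj(r, EPS_A, SIG_A))
    e_bb = dimer_energy(lambda r: lj(r, EPS_B, SIG_B))
    return dimer_energy(cross_potential(t)) - 0.5 * (e_aa + e_bb)

def fit_mixing_parameter(target, lo=-1.0, hi=1.0, tol=1e-6):
    """Bisect on t so the toy heat of mixing matches a target value."""
    f_lo = toy_heat_of_mixing(lo) - target
    mid = 0.5 * (lo + hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        f_mid = toy_heat_of_mixing(mid) - target
        if abs(f_mid) < tol:
            break
        if (f_mid > 0) == (f_lo > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return mid

t_star = fit_mixing_parameter(target=+0.02)   # slightly endothermic mixing, illustrative
print(round(t_star, 3), round(toy_heat_of_mixing(t_star), 4))

In the actual scheme the tuning would act on the EAM cross-interaction and be fitted to measured heats of mixing of the alloy solutions rather than to a toy dimer quantity.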
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom. Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
A drop in performance on a fluid intelligence test due to instructed-rule mindset.
ErEl, Hadas; Meiran, Nachshon
2017-09-01
A 'mindset' is a configuration of processing resources that are made available for the task at hand, together with their suitable tuning for carrying it out. Of special interest, remote-relation abstract mindsets are introduced by activities sharing only general control processes with the task. To test the effect of a remote-relation mindset on performance on a Fluid Intelligence test (Raven's Advanced Progressive Matrices, RAPM), we induced a mindset associated with little usage of executive processing by requiring participants to execute a well-defined classification rule 12 times, a manipulation known from previous work to drastically impair rule-generation performance and associated cognitive processes. In Experiment 1, this manipulation led to a drop in RAPM performance equivalent to 10.1 IQ points. No drop was observed in a General Knowledge task. In Experiment 2, a similar drop in RAPM performance was observed (equivalent to 7.9 and 9.2 IQ points) regardless of whether participants were pre-informed about the upcoming RAPM test. These results indicate strong (most likely transient) adverse effects of a remote-relation mindset on test performance. They imply that although the trait of Fluid Intelligence has probably not changed, mindsets can severely distort estimates of this trait.
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. Attempts to develop a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the last century and continue to this day. This paper gives a review of published research activities directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to the modelling of the flotation process, regardless of the model's application in a particular control system. In accordance with contemporary considerations, models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
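As a concrete instance of the kinetic class of models mentioned above, the classical first-order flotation model relates cumulative recovery R(t) to flotation time t through a rate constant k and the ultimate recovery R_∞ (a standard textbook form, quoted only for illustration):

```latex
R(t) \;=\; R_{\infty}\left(1 - e^{-kt}\right)
```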
What is Quantum Mechanics? A Minimal Formulation
NASA Astrophysics Data System (ADS)
Friedberg, R.; Hohenberg, P. C.
2018-03-01
This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called "microscopic theory", applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. Two fundamental theorems of Hilbert space, the Kochen-Specker-Bell theorem and Gleason's theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory.
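For reference, the Born probability rule mentioned above assigns to an outcome a, represented by a projector P_a on the system's Hilbert space, the probability (standard textbook form, stated here only for orientation):

```latex
\Pr(a) \;=\; \langle \psi \,|\, \hat{P}_a \,|\, \psi \rangle \;=\; \bigl\lVert \hat{P}_a\,|\psi\rangle \bigr\rVert^{2}
```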
Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-Down Connections
Burbank, Kendra S.; Kreiman, Gabriel
2012-01-01
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. PMID:22396630
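A minimal sketch of a temporally reversed, depression-biased pairwise rule of the kind described above is given below; the exponential window shape, time constants and amplitudes are generic STDP assumptions, not the parameters of the paper's model:

```python
import numpy as np

# Reversed STDP (rSTDP) pairwise weight update for a top-down synapse.
# dt = t_post - t_pre. In classical STDP, dt > 0 (pre before post) potentiates;
# in the reversed rule, dt < 0 (post before pre) potentiates instead.
# Depression bias means the depression amplitude exceeds the potentiation one.

def rstdp_dw(dt, a_plus=0.005, a_minus=0.008, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair (dt in ms)."""
    if dt < 0:                                  # post precedes pre -> potentiation
        return a_plus * np.exp(dt / tau_plus)
    else:                                       # pre precedes post -> depression
        return -a_minus * np.exp(-dt / tau_minus)

dts = np.array([-30.0, -5.0, 5.0, 30.0])
print([round(rstdp_dw(dt), 5) for dt in dts])   # positive for dt < 0, negative otherwise
```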
Cost efficiency of the non-associative flow rule simulation of an industrial component
NASA Astrophysics Data System (ADS)
Galdos, Lander; de Argandoña, Eneko Saenz; Mendiguren, Joseba
2017-10-01
In the last decade, the metal forming industry has become more and more competitive. In this context, FEM modelling has become a primary source of information for component and process design. Numerous researchers have focused on improving the accuracy of the material models implemented in FEM in order to improve the efficiency of the simulations. Aimed at improving the modelling of anisotropic behaviour, in recent years the use of non-associative flow rule (NAFR) models has been presented as an alternative to the classical associative flow rule (AFR) models. In this work, the cost efficiency of the chosen flow rule has been numerically analyzed by simulating an industrial drawing operation with two different models of the same degree of flexibility: one AFR model and one NAFR model. From the present study, it has been concluded that the flow rule has a negligible influence on the final drawing prediction, which is mainly driven by the model parameter identification procedure. Even though the NAFR formulation is more complex than the AFR one, the present study shows that the total simulation time with explicit FE solvers is reduced without loss of accuracy. Furthermore, NAFR formulations have an advantage over AFR formulations in parameter identification, because the formulation decouples the yield stress and the Lankford coefficients.
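For orientation, the distinction between the two model families compared in this study can be written compactly: an associative flow rule takes the plastic strain increment normal to the yield surface f, whereas a non-associative formulation uses a separate plastic potential g (standard plasticity notation, not specific to the particular models used here):

```latex
\mathrm{d}\boldsymbol{\varepsilon}^{p} \;=\; \mathrm{d}\lambda\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
\quad\text{(AFR)},
\qquad
\mathrm{d}\boldsymbol{\varepsilon}^{p} \;=\; \mathrm{d}\lambda\,\frac{\partial g}{\partial \boldsymbol{\sigma}},\;\; g \neq f
\quad\text{(NAFR)}
```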
NASA Astrophysics Data System (ADS)
Kehagias, Alex; Riotto, Antonio
2017-04-01
We investigate the recently proposed clockwork mechanism delivering light degrees of freedom with suppressed interactions and show, with various examples, that it can be efficiently implemented in inflationary scenarios to generate flat inflaton potentials and small density perturbations without fine-tunings. We also study the clockwork graviton in de Sitter and, interestingly, we find that the corresponding clockwork charge is site-dependent. As a consequence, the amount of tensor modes is generically suppressed with respect to the standard cases where the clockwork set-up is not adopted. This point can be made a virtue in resurrecting models of inflation which were supposed to be ruled out because of the excessive amount of tensor modes from inflation.
The Philosophical and Mathematical Context of two Gerbert's Musical Letters to Constantine
NASA Astrophysics Data System (ADS)
Otisk, Marek
2015-04-01
The paper deals with two letters written by Gerbert of Aurillac to Constantine of Fleury. In these letters Gerbert points out some passages from Boethius’s Introduction to Music (II, 10; respectively IV, 2 and II, 21) concerning mathematical operations (multiplication and subtraction) with superparticular ratios, i.e. ratios of the type (n+1):n. According to Martianus Capella's De nuptiis Philologiae et Mercurii, the musical harmonies rule the Cosmos and the Celestial Spheres; music is the basis for understanding astronomy. The paper pursues two main aims: the philosophical importance of music as a liberal art and the mathematical basis of Pythagorean tuning.
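The arithmetic of superparticular ratios that these letters turn on is straightforward to reproduce. The following sketch uses the standard Pythagorean intervals 3:2, 4:3 and 9:8 purely as an illustration; it is not a reconstruction of Gerbert's own calculations:

```python
from fractions import Fraction

# Superparticular ratios have the form (n+1):n, e.g. 3:2, 4:3, 9:8.
def is_superparticular(r: Fraction) -> bool:
    return r.numerator - r.denominator == 1

fifth, fourth = Fraction(3, 2), Fraction(4, 3)

# "Subtracting" intervals means dividing their ratios:
tone = fifth / fourth                  # 9:8, the Pythagorean whole tone
print(tone, is_superparticular(tone))  # 9/8 True

# "Adding" two whole tones means multiplying ratios; the resulting
# ditone (81:64) is no longer superparticular.
ditone = tone * tone
print(ditone, is_superparticular(ditone))  # 81/64 False
```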
Tuning topological phases in the XMnSb2 system via chemical substitution from first principles
NASA Astrophysics Data System (ADS)
Griffin, Sinead M.; Neaton, Jeffrey B.
New Dirac materials are sought for their interesting fundamental physics and for their potential technological applications. Protected symmetries offer a route to potentially massless Dirac and Weyl fermions, and can lead to unique transport properties and spectroscopic signatures. In this work, we use first-principles calculations to study the XMnSb2 family of materials and show how varying X changes the nature of the bulk protected topological features in their electronic structure. We further discuss new design rules, suggested by our calculations, for predicting new topological materials. SG is supported by the Early Postdoc Mobility Fellowship of the SNF.
NASA Astrophysics Data System (ADS)
Duret, Q.; Machet, B.
2010-10-01
Starting from Wigner's symmetry representation theorem, we give a general account of discrete symmetries (parity P, charge conjugation C, time-reversal T), focusing on fermions in Quantum Field Theory. We provide the rules of transformation of Weyl spinors, both at the classical level (grassmanian wave functions) and the quantum level (operators). Making use of Wightman's definition of invariance, we outline ambiguities linked to the notion of a classical fermionic Lagrangian. We then present the general constraints cast by these transformations and their products on the propagator of the simplest coupled fermionic system, the one made of one fermion and its antifermion. Last, we put in correspondence the propagation of C eigenstates (Majorana fermions) and the criteria cast on their propagator by C and CP invariance.
Experimental Non-Violation of the Bell Inequality
NASA Astrophysics Data System (ADS)
Palmer, Tim
2018-05-01
A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry $I_U$ in cosmological state space. Consistent with the assumed primacy of $I_U$, and $p$-adic number theory, a non-Euclidean (and hence non-classical) metric $g_p$ is defined on cosmological state space, where $p$ is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be $g_p$-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit $p=\\infty$, at which $g_p$ is Euclidean. Broader implications are discussed.
Experimental generalized quantum suppression law in Sylvester interferometers
NASA Astrophysics Data System (ADS)
Viggianiello, Niko; Flamini, Fulvio; Innocenti, Luca; Cozzolino, Daniele; Bentivegna, Marco; Spagnolo, Nicolò; Crespi, Andrea; Brod, Daniel J.; Galvão, Ernesto F.; Osellame, Roberto; Sciarrino, Fabio
2018-03-01
Photonic interference is a key quantum resource for optical quantum computation, and in particular for so-called boson sampling devices. In interferometers with certain symmetries, genuine multiphoton quantum interference effectively suppresses certain sets of events, as in the original Hong–Ou–Mandel effect. Recently, it was shown that some classical and semi-classical models could be ruled out by identifying such suppressions in Fourier interferometers. Here we propose a suppression law suitable for random-input experiments in multimode Sylvester interferometers, and verify it experimentally using 4- and 8-mode integrated interferometers. The observed suppression occurs for a much larger fraction of input–output combinations than what is observed in Fourier interferometers of the same size, and could be relevant to certification of boson sampling machines and other experiments relying on bosonic interference, such as quantum simulation and quantum metrology.
Segmented strings and the McMillan map
Gubser, Steven S.; Parikh, Sarthak; Witaszczyk, Przemek
2016-07-25
We present new exact solutions describing motions of closed segmented strings in AdS3 in terms of elliptic functions. The existence of analytic expressions is due to the integrability of the classical equations of motion, which in our examples reduce to instances of the McMillan map. Here, we also obtain a discrete evolution rule for the motion in AdS3 of arbitrary bound states of fundamental strings and D1-branes in the test approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kambersky, V.; Schaefer, R.; Leibniz Institute for Solid State and Materials Research, Helmholtzstrasse 20, 01069 Dresden
2011-07-15
An anomalous symmetry of magneto-optical images of ferromagnetic domain walls was reported by Schaefer and Hubert [Phys. Status Solidi A 118, 271 (1990)] and interpreted in terms of light amplitudes proportional to the magnetization gradient. We present analytic and numerical calculations supporting such proportionality under additional conditions implied by classical rules of micromagnetics and address some objections presented by Banno [Phys. Rev. A 77, 033818 (2008)] against such proportionality.
NASA Astrophysics Data System (ADS)
Glick, Aaron; Carr, Lincoln; Calarco, Tommaso; Montangero, Simone
2014-03-01
In order to investigate the emergence of complexity in quantum systems, we present a quantum game of life, inspired by Conway's classic game of life. Through Matrix Product State (MPS) calculations, we simulate the evolution of quantum systems, dictated by a Hamiltonian that defines the rules of our quantum game. We analyze the system through a number of measures which elicit the emergence of complexity in terms of spatial organization, system dynamics, and non-local mutual information within the network. Funded by NSF
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, among others) for the creation and design of sensor systems with parallel input-output and processing, for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We show design principles for programmable relational optoelectronic time-pulse encoded processors based on continuous logic, order logic and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse coded variables. We propose an optoelectronic realization of such a basic relational order-logic element, consisting of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built on logic elements, and programmable commutation blocks. We estimate the technical parameters of devices and processors based on such elements by simulation and experimental research: optical input signal power 0.2-20 uW, processing time 1-10 us, supply voltage 1-3 V, power consumption 10-100 uW, with extended functional and learning possibilities. We discuss some aspects of possible rules and principles for learning and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that it is possible to create sorting machines, neural networks and hybrid data-processing systems with non-traditional number systems and picture operands on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
NASA Astrophysics Data System (ADS)
Fursdon, M.; Barrett, T.; Domptail, F.; Evans, Ll M.; Luzginova, N.; Greuner, N. H.; You, J.-H.; Li, M.; Richou, M.; Gallay, F.; Visca, E.
2017-12-01
The design and development of a novel plasma facing component (for fusion power plants) is described. The component uses the existing ‘monoblock’ construction, which consists of a tungsten ‘block’ joined via a copper interlayer to a CuCrZr cooling pipe that passes through it. In the new concept the interlayer stiffness and conductivity properties are tuned so that the stress in the principal structural element of the component (the cooling pipe) is reduced. Following initial trials with off-the-shelf materials, the concept was realized by machining features in an otherwise solid copper interlayer. The shape and distribution of the features were tuned by finite element analyses subject to the ITER structural design criteria for in-vessel components (SDC-IC). Proof-of-concept mock-ups were manufactured using a two-stage brazing process verified by tomography and micrographic inspection. Full assemblies were inspected using ultrasound and thermographic (SATIR) test methods at ENEA and CEA respectively. High heat flux tests using IPP’s GLADIS facility showed that 200 cycles at 20 MW m-2 and five cycles at 25 MW m-2 could be sustained without apparent component damage. Further testing and component development is planned.
How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks
Rombouts, Jaldert O.; Bohte, Sander M.; Roelfsema, Pieter R.
2015-01-01
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions. PMID:25742003
Comparison of Classical and Lazy Approach in SCG Compiler
NASA Astrophysics Data System (ADS)
Jirák, Ota; Kolář, Dušan
2011-09-01
Existing parsing methods for scattered context grammars usually expand nonterminals deep in the pushdown. This expansion is implemented using either a linked list or some kind of auxiliary pushdown. This paper describes a parsing algorithm for an LL(1) scattered context grammar. The algorithm merges two principles. The first is a table-driven parsing method commonly used for parsing context-free grammars. The second is delayed execution, as used in functional programming. The main part of this paper is a proof of equivalence between the common principle (the whole rule is applied at once) and our approach (execution of the rules is delayed). As a result, our approach works only with the top of the pushdown. In most cases, the second approach is faster than the first. Finally, future work is discussed.
Modelling excitonic-energy transfer in light-harvesting complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramer, Tobias; Kreisbeck, Christoph
The theoretical and experimental study of energy transfer in photosynthesis has revealed an interesting transport regime, which lies at the borderline between classical transport dynamics and quantum-mechanical interference effects. Dissipation is caused by the coupling of electronic degrees of freedom to vibrational modes and leads to a directional energy transfer from the antenna complex to the target reaction center. The dissipative driving is robust and does not rely on fine-tuning of specific vibrational modes. For the parameter regime encountered in biological systems, new theoretical tools are required to directly compare theoretical results with experimental spectroscopy data. The calculations require the use of massively parallel graphics processing units (GPUs) for efficient and exact computations.
Physical principles of monolithic high-contrast gratings
NASA Astrophysics Data System (ADS)
Dems, Maciej
2017-02-01
In this work I present visually the results of a numerical analysis of the transition between classical High-Contrast Gratings (HCGs) and Monolithic High-Contrast Gratings (MHCGs), and I identify the source of the differences between the scatterless reflection peaks and those that either show strong scattering or do not occur in MHCGs. I show that the key property of MHCGs is the independence of the peak reflectivity wavelength from the substrate refractive index, which results from the modal interference inside the grating and the special form of its impedance/admittance matrix. This form of matrix can be obtained for any wavelength and in almost any material system by tuning the geometrical parameters of the grating: its pitch, fill-factor, and height.
Spectral reflectance properties of iridescent pierid butterfly wings.
Wilts, Bodo D; Pirih, Primož; Stavenga, Doekele G
2011-06-01
The wings of most pierid butterflies exhibit a main, pigmentary colouration: white, yellow or orange. The males of many species have in restricted areas of the wing upper sides a distinct structural colouration, which is created by stacks of lamellae in the ridges of the wing scales, resulting in iridescence. The amplitude of the reflectance is proportional to the number of lamellae in the ridge stacks. The angle-dependent peak wavelength of the observed iridescence is in agreement with classical multilayer theory. The iridescence is virtually always in the ultraviolet wavelength range, but some species have a blue-peaking iridescence. The spectral properties of the pigmentary and structural colourations are presumably tuned to the spectral sensitivities of the butterflies' photoreceptors.
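The classical multilayer condition referred to above relates the peak reflectance wavelength to the thicknesses d and refractive indices n of the two alternating layer materials and the angles of refraction inside them (standard thin-film optics, quoted for reference only):

```latex
m\,\lambda_{\max} \;=\; 2\left(n_a d_a \cos\theta_a + n_b d_b \cos\theta_b\right),
\qquad m = 1, 2, \dots
```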
Field-programmable analogue arrays for the sensorless control of DC motors
NASA Astrophysics Data System (ADS)
Rivera, J.; Dueñas, I.; Ortega, S.; Del Valle, J. L.
2018-02-01
This work presents the analogue implementation of a sensorless controller for direct current motors based on the super-twisting (ST) sliding mode technique, by means of field-programmable analogue arrays (FPAA). The novelty of this work is twofold: first, the use of the ST algorithm in a sensorless scheme for DC motors, and second, the implementation method for this type of sliding mode controller in FPAAs. The ST algorithm reduces the chattering problem produced by the deliberate use of the sign function in classical sliding mode approaches. On the other hand, the advantages of the implementation method over a digital one are that the controller is not digitally approximated, the controller gains do not require fine tuning, and the implementation does not require analogue-to-digital and digital-to-analogue converter circuits. In addition, the FPAA is a reconfigurable technology with lower cost and power consumption. Simulation and experimental results were recorded, in which a more accurate transient response and lower power consumption were obtained with the proposed implementation method than with a digital implementation. Also, more accurate performance of the DC motor is obtained with the proposed sensorless ST technique than with a classical sliding mode approach.
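A minimal continuous-time sketch of the super-twisting algorithm discussed above, integrated with a simple Euler step, is shown below; the gains, time step and toy plant are illustrative placeholders and do not correspond to the reported controller:

```python
import math

# Super-twisting sliding-mode control law (generic textbook form):
#   u      = -k1 * |s|**0.5 * sign(s) + v
#   dv/dt  = -k2 * sign(s)
# The discontinuity acts on the derivative of part of the control signal,
# which attenuates chattering compared with the classical u = -k * sign(s).

def sign(x):
    return (x > 0) - (x < 0)

def super_twisting_step(s, v, k1=1.5, k2=1.1, dt=1e-3):
    """One Euler integration step; returns (control u, updated integral state v)."""
    u = -k1 * math.sqrt(abs(s)) * sign(s) + v
    v += -k2 * sign(s) * dt
    return u, v

# Toy closed loop: drive the sliding variable s toward zero, with ds/dt = u.
s, v = 1.0, 0.0
for _ in range(5000):
    u, v = super_twisting_step(s, v)
    s += u * 1e-3
print(round(s, 4))  # |s| should be small after convergence
```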
Random-fractal Ansatz for the configurations of two-dimensional critical systems
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Ozaki, Dai; Matsueda, Hiroaki
2016-12-01
Critical systems have always intrigued physicists and precipitated the development of new techniques. Recently, there has been renewed interest in the information contained in the configurations of classical critical systems, whose computation does not require full knowledge of the wave function. Inspired by holographic duality, we investigated the entanglement properties of the classical configurations (snapshots) of the Potts model by introducing an Ansatz ensemble of random fractal images. By virtue of the central limit theorem, our Ansatz accurately reproduces the entanglement spectra of actual Potts snapshots without any fine tuning of parameters or artificial restrictions on ensemble choice. It provides a microscopic interpretation of the results of previous studies, which established a relation between the scaling behavior of snapshot entropy and the critical exponent. More importantly, it elucidates the role of ensemble disorder in restoring conformal invariance, an aspect previously ignored. Away from criticality, the breakdown of scale invariance leads to a renormalization of the parameter Σ in the random fractal Ansatz, whose variation can be used as an alternative determination of the critical exponent. We conclude by providing a recipe for the explicit construction of fractal unit cells consistent with a given scaling exponent.
Exchange-biased quantum tunnelling in a supramolecular dimer of single-molecule magnets.
Wernsdorfer, Wolfgang; Aliaga-Alcalde, Núria; Hendrickson, David N; Christou, George
2002-03-28
Various present and future specialized applications of magnets require monodisperse, small magnetic particles, and the discovery of molecules that can function as nanoscale magnets was an important development in this regard. These molecules act as single-domain magnetic particles that, below their blocking temperature, exhibit magnetization hysteresis, a classical property of macroscopic magnets. Such 'single-molecule magnets' (SMMs) straddle the interface between classical and quantum mechanical behaviour because they also display quantum tunnelling of magnetization and quantum phase interference. Quantum tunnelling of magnetization can be advantageous for some potential applications of SMMs, for example, in providing the quantum superposition of states required for quantum computing. However, it is a disadvantage in other applications, such as information storage, where it would lead to information loss. Thus it is important to both understand and control the quantum properties of SMMs. Here we report a supramolecular SMM dimer in which antiferromagnetic coupling between the two components results in quantum behaviour different from that of the individual SMMs. Our experimental observations and theoretical analysis suggest a means of tuning the quantum tunnelling of magnetization in SMMs. This system may also prove useful for studying quantum tunnelling of relevance to mesoscopic antiferromagnets.
Loop shaping design for tracking performance in machine axes.
Schinstock, Dale E; Wei, Zhouhong; Yang, Tao
2006-01-01
A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.
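For readers who want the controller structure being tuned, a minimal positional-form discrete PID is sketched below; the gains, sample time and toy plant are placeholders and make no claim to reproduce the loop-shaping design of the paper:

```python
class PID:
    """Textbook positional-form PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy usage: track a unit step with a first-order plant dx/dt = -x + u.
pid = PID(kp=8.0, ki=20.0, kd=0.2, dt=0.001)
x = 0.0
for _ in range(5000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.001
print(round(x, 3))  # should settle close to 1.0 thanks to the integral action
```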
NEPTUNE'S WILD DAYS: CONSTRAINTS FROM THE ECCENTRICITY DISTRIBUTION OF THE CLASSICAL KUIPER BELT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Rebekah I.; Murray-Clay, Ruth, E-mail: rdawson@cfa.harvard.edu
2012-05-01
Neptune's dynamical history shaped the current orbits of Kuiper Belt objects (KBOs), leaving clues to the planet's orbital evolution. In the 'classical' region, a population of dynamically 'hot' high-inclination KBOs overlies a flat 'cold' population with distinct physical properties. Simulations of qualitatively different histories for Neptune, including smooth migration on a circular orbit or scattering by other planets to a high eccentricity, have not simultaneously produced both populations. We explore a general Kuiper Belt assembly model that forms hot classical KBOs interior to Neptune and delivers them to the classical region, where the cold population forms in situ. First, we present evidence that the cold population is confined to eccentricities well below the limit dictated by long-term survival. Therefore, Neptune must deliver hot KBOs into the long-term survival region without excessively exciting the eccentricities of the cold population. Imposing this constraint, we explore the parameter space of Neptune's eccentricity and eccentricity damping, migration, and apsidal precession. We rule out much of parameter space, except where Neptune is scattered to a moderately eccentric orbit (e > 0.15) and subsequently migrates a distance Δa_N = 1-6 AU. Neptune's moderate eccentricity must either damp quickly or be accompanied by fast apsidal precession. We find that Neptune's high eccentricity alone does not generate a chaotic sea in the classical region. Chaos can result from Neptune's interactions with Uranus, exciting the cold KBOs and placing additional constraints. Finally, we discuss how to interpret our constraints in the context of the full, complex dynamical history of the solar system.
Ultra-small dye-doped silica nanoparticles via modified sol-gel technique.
Riccò, R; Nizzero, S; Penna, E; Meneghello, A; Cretaio, E; Enrichi, F
2018-01-01
In modern biosensing and imaging, fluorescence-based methods constitute the most widespread approach to achieving optimal detection of analytes, both in solution and at the single-particle level. Despite the huge progress made in recent decades in the development of plasmonic biosensors and label-free sensing techniques, fluorescent molecules remain the most commonly used contrast agents to date for commercial imaging and detection methods. However, they exhibit low stability, can be difficult to functionalise, and often result in a low signal-to-noise ratio. Thus, embedding fluorescent probes into robust and bio-compatible materials, such as silica nanoparticles, can substantially enhance the detection limit and dramatically increase the sensitivity. In this work, ultra-small fluorescent silica nanoparticles (NPs) for optical biosensing applications were doped with a fluorescent dye, using simple water-based sol-gel approaches based on the classical Stöber procedure. By systematically modulating reaction parameters, controllable size tuning of particle diameters as low as 10 nm was achieved. Particle morphology and optical response were evaluated, showing possible single-molecule behaviour, without employing microemulsion methods to achieve similar results. Graphical abstract: We report a simple, cheap and reliable protocol for the synthesis and systematic tuning of ultra-small (<10 nm) dye-doped luminescent silica nanoparticles.
Recipient design in human communication: simple heuristics or perspective taking?
Blokpoel, Mark; van Kesteren, Marlieke; Stolk, Arjen; Haselager, Pim; Toni, Ivan; van Rooij, Iris
2012-01-01
Humans have a remarkable capacity for tuning their communicative behaviors to different addressees, a phenomenon also known as recipient design. It remains unclear how this tuning of communicative behavior is implemented during live human interactions. Classical theories of communication postulate that recipient design involves perspective taking, i.e., the communicator selects her behavior based on her hypotheses about beliefs and knowledge of the recipient. More recently, researchers have argued that perspective taking is computationally too costly to be a plausible mechanism in everyday human communication. These researchers propose that computationally simple mechanisms, or heuristics, are exploited to perform recipient design. Such heuristics may be able to adapt communicative behavior to an addressee with no consideration for the addressee's beliefs and knowledge. To test whether the simpler of the two mechanisms is sufficient for explaining the "how" of recipient design we studied communicators' behaviors in the context of a non-verbal communicative task (the Tacit Communication Game, TCG). We found that the specificity of the observed trial-by-trial adjustments made by communicators is parsimoniously explained by perspective taking, but not by simple heuristics. This finding is important as it suggests that humans do have a computationally efficient way of taking beliefs and knowledge of a recipient into account.
Molecular Mechanisms of Taste Recognition: Considerations about the Role of Saliva
Fábián, Tibor Károly; Beck, Anita; Fejérdy, Pál; Hermann, Péter; Fábián, Gábor
2015-01-01
The gustatory system plays a critical role in determining food preferences and food intake, in addition to nutritive, energy and electrolyte balance. Fine tuning of the gustatory system is also crucial in this respect. The exact mechanisms that fine tune taste sensitivity are as yet poorly defined, but it is clear that various effects of saliva on taste recognition are also involved. Specifically, those metabolic polypeptides present in saliva that were classically considered to be gut and appetite hormones (i.e., leptin, ghrelin, insulin, neuropeptide Y, peptide YY) were thought to play a pivotal role. Besides these, data clearly indicate a major role for several other salivary proteins, such as salivary carbonic anhydrase (gustin), proline-rich proteins, cystatins, alpha-amylases, histatins, salivary albumin and mucins. Other proteins, such as glucagon-like peptide-1, salivary immunoglobulin-A, zinc-α-2-glycoprotein, salivary lactoperoxidase, salivary prolactin-inducible protein and the salivary molecular chaperone HSP70/HSPAs, were also expected to play an important role. Furthermore, factors including salivary flow rate, buffer capacity and the ionic composition of saliva should also be considered. In this paper, the current state of research related to the above, together with the overall emerging field of taste-related salivary research, is reviewed alongside basic principles of taste perception. PMID:25782158
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
Optimizing the temporal dynamics of light to human perception.
Rieiro, Hector; Martinez-Conde, Susana; Danielson, Andrew P; Pardo-Vazquez, Jose L; Srivastava, Nishit; Macknik, Stephen L
2012-11-27
No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration on perceived contrast is unclear, due to contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision-although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.
Henry, Christopher A; Joshi, Siddhartha; Xing, Dajun; Shapley, Robert M; Hawken, Michael J
2013-04-03
Neurons in primary visual cortex, V1, very often have extraclassical receptive fields (eCRFs). The eCRF is defined as the region of visual space where stimuli cannot elicit a spiking response but can modulate the response of a stimulus in the classical receptive field (CRF). We investigated the dependence of the eCRF on stimulus contrast and orientation in macaque V1 cells for which the laminar location was determined. The eCRF was more sensitive to contrast than the CRF across the whole population of V1 cells with the greatest contrast differential in layer 2/3. We confirmed that many V1 cells experience stronger suppression for collinear than orthogonal stimuli in the eCRF. Laminar analysis revealed that the predominant bias for collinear suppression was found in layers 2/3 and 4b. The laminar pattern of contrast and orientation dependence suggests that eCRF suppression may derive from different neural circuits in different layers, and may be comprised of two distinct components: orientation-tuned and untuned suppression. On average tuned suppression was delayed by ∼25 ms compared with the onset of untuned suppression. Therefore, response modulation by the eCRF develops dynamically and rapidly in time.
A rational explanation of wave-particle duality of light
NASA Astrophysics Data System (ADS)
Rashkovskiy, S. A.
2013-10-01
Wave-particle duality is a fundamental property of nature. At the same time, it is one of the greatest mysteries of modern physics. This gave rise to a whole direction in quantum physics - the interpretation of quantum mechanics. The Wiener experiments demonstrating the wave-particle duality of light are discussed. It is shown that almost all interpretations of quantum mechanics allow explaining the double-slit experiments, but are powerless to explain the Wiener experiments. The reason for the paradox associated with wave-particle duality is analyzed. The quantum theory consists of two independent parts: (i) the dynamic equations describing the behavior of a quantum object (for example, the Schrödinger or Maxwell equations), and (ii) Born's rule, the relation between the wave function and the probability of finding the particle at a given point. It is shown that it is precisely Born's rule that leads to the paradox in explaining wave-particle duality. In order to eliminate this paradox, we propose a new rational interpretation of the wave-particle duality and an associated new rule connecting the corpuscular and wave properties of quantum objects. It is shown that this new rational interpretation allows the classical images of particle and wave to be used in explaining quantum mechanical and optical phenomena, does not lead to paradoxes in explaining the double-slit and Wiener experiments, and does not contradict modern quantum mechanical concepts. It is shown that Born's rule follows immediately from the proposed new rules as an approximation.
Abnormal enhancement against interference inhibition for few-cycle pulses propagating in dense media
NASA Astrophysics Data System (ADS)
Chen, Yue-Yue; Feng, Xun-Li; Xu, Zhi-Zhan; Liu, Chengpu
2016-04-01
We numerically study the reflected spectrum of a few-cycle pulse propagating through an ultrathin resonant medium. According to classical interference theory, a destructive interference dip is expected at the carrier frequency ωp for a half-wavelength medium. Instead, an abnormally enhanced spike appears. The origin of this abnormal enhancement is attributed to coherent transient effects. In addition, its scaling laws versus medium length, pulse area and duration are obtained, and they follow simple rules.
Lustgarten, Jonathan Lyle; Balasubramanian, Jeya Balaji; Visweswaran, Shyam; Gopalakrishnan, Vanathi
2017-03-01
The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks using a decision tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, is combinatorial in the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design BRL-LSS with the same worst-case time complexity as BRL-GSS while exploring a richer and more complex model space. We measure predictive performance using the area under the ROC curve (AUC) and accuracy. We measure model parsimony by noting the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS and the state-of-the-art C4.5 decision tree algorithm, across 10-fold cross-validation, on ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in terms of predictive performance, while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance. We also conduct a feasibility study to demonstrate the general applicability of our BRL methods to newer RNA sequencing gene-expression data.
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems.
Demšar, Jure; Lebar Bajec, Iztok
2017-01-01
Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as: transfer of information across the group, benefits of grouping (defence against predation, foraging), group decision-making process, and group behaviour types. The question 'why,' however remains largely unanswered. Here the interest goes into which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these last, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour; swarming, milling and polarized. In this paper we present a novel, artificial life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour.
Online intelligent controllers for an enzyme recovery plant: design methodology and performance.
Leite, M S; Fujiki, T L; Silva, F V; Fileti, A M F
2010-12-27
This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity.
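To make the structure of such a fuzzy PI controller concrete, here is a deliberately small Mamdani-style sketch with triangular membership functions, a three-rule base and centroid defuzzification; the universes of discourse, rule base and scaling are invented for illustration and are far simpler than the controller described in the paper:

```python
import numpy as np

# Minimal Mamdani-style fuzzy PI-like controller sketch.
# Inputs: temperature error e and its change de; output: an increment to the
# coolant pump speed. Membership functions, universes and rules are illustrative.

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

U = np.linspace(-1.0, 1.0, 201)        # universe of the output increment

OUT_SETS = {
    "decrease": lambda u: trimf(u, -1.0, -0.5, 0.0),
    "hold":     lambda u: trimf(u, -0.5,  0.0, 0.5),
    "increase": lambda u: trimf(u,  0.0,  0.5, 1.0),
}

def fuzzify(x):
    return {
        "negative": float(trimf(x, -2.0, -1.0, 0.0)),
        "zero":     float(trimf(x, -1.0,  0.0, 1.0)),
        "positive": float(trimf(x,  0.0,  1.0, 2.0)),
    }

def fuzzy_pi_increment(e, de):
    """Return a pump-speed increment in [-1, 1] from error and error change."""
    mu_e, mu_de = fuzzify(e), fuzzify(de)
    # Rule base (Mamdani: min for AND, max for aggregation):
    rules = [
        (min(mu_e["positive"], mu_de["positive"]), "increase"),
        (min(mu_e["zero"],     mu_de["zero"]),     "hold"),
        (min(mu_e["negative"], mu_de["negative"]), "decrease"),
    ]
    aggregated = np.zeros_like(U)
    for strength, out_label in rules:
        aggregated = np.maximum(aggregated,
                                np.minimum(strength, OUT_SETS[out_label](U)))
    if aggregated.sum() == 0.0:
        return 0.0                     # no rule fired; keep the current speed
    return float((U * aggregated).sum() / aggregated.sum())   # centroid defuzzification

print(round(fuzzy_pi_increment(0.8, 0.4), 3))   # positive increment
print(round(fuzzy_pi_increment(0.0, 0.0), 3))   # approximately zero
```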
Cocrystals to facilitate delivery of poorly soluble compounds beyond-rule-of-5.
Kuminek, Gislaine; Cao, Fengjuan; Bahia de Oliveira da Rocha, Alanny; Gonçalves Cardoso, Simone; Rodríguez-Hornedo, Naír
2016-06-01
Besides enhancing aqueous solubilities, cocrystals have the ability to fine-tune the solubility advantage over the drug, the supersaturation index, and bioavailability. This review presents important facts about cocrystals that set them apart from other solid-state forms of drugs, and a quantitative set of rules for the selection of additives and solution/formulation conditions that predict cocrystal solubility, supersaturation index, and transition points. Cocrystal eutectic constants are shown to be the most important cocrystal property that can be measured once a cocrystal is discovered, and simple relationships are presented that allow for the prediction of cocrystal behavior as a function of pH and drug solubilizing agents. The cocrystal eutectic constant is a stability or supersaturation index that: (a) reflects how close to or far from equilibrium a cocrystal is, (b) establishes transition points, and (c) provides a quantitative scale of cocrystal true solubility changes over the drug. The benefit of this strategy is that a single measurement, which requires little material and time, provides a principled basis to tailor the cocrystal supersaturation index by the rational selection of cocrystal formulation, dissolution, and processing conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.
Sokoloski, Sacha
2017-09-01
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
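The predict/update cycle of the Bayes filter described above is compact enough to state directly. The following discrete-state sketch uses a toy two-state dynamics and a Poisson-like likelihood as stand-ins; it is not the population-code implementation of the note:

```python
import numpy as np

# Minimal discrete-state Bayes filter: alternate a prediction step using the
# internal model of the stimulus dynamics with a Bayes-rule update using the
# likelihood of the observed response. Dynamics and likelihood are toy choices.

transition = np.array([[0.9, 0.1],     # transition[i, j] = P(next state j | current state i)
                       [0.2, 0.8]])
rates = np.array([2.0, 8.0])           # mean observation count in each state

def likelihood(obs_count, rate):
    """Poisson likelihood of an observed count (unnormalized is fine)."""
    return rate ** obs_count * np.exp(-rate)

def bayes_filter(observations, prior):
    belief = np.asarray(prior, dtype=float)
    beliefs = []
    for y in observations:
        belief = transition.T @ belief                 # predict with the dynamics model
        belief = belief * likelihood(y, rates)         # Bayes' rule: multiply by likelihood
        belief /= belief.sum()                         # normalize to a probability vector
        beliefs.append(belief.copy())
    return np.array(beliefs)

posterior = bayes_filter(observations=[1, 2, 7, 9], prior=[0.5, 0.5])
print(np.round(posterior, 3))   # belief shifts toward the high-rate state as counts grow
```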
[Taxonomic theory for non-classical systematics].
Pavlinov, I Ia
2012-01-01
Outlined briefly are the basic principles of construing a general taxonomic theory for biological systematics, considered in the context of the non-classical scientific paradigm. The necessity of such a theory is substantiated, and some key points of its elaboration are set out: its interpretation as a framework concept for the partial taxonomic theories of the various schools of systematics; elaboration of the idea of a cognitive situation comprising three interrelated components, namely subject, object, and epistemic ones; its construal as a content-wise interpreted quasi-axiomatics, with strong structuring of its conceptual space, including a demarcation between axioms and inference rules; its construal as a "conceptual pyramid" of concepts of various levels of generality; and the inclusion of a basic model into the definition of the taxonomic system (classification) that regulates its content. Two problems are indicated as fundamental: the definition of taxonomic diversity as the subject domain of systematics as a whole, and the definition of the onto-epistemological status of the taxonomic system (classification) in general and of taxa in particular.
Violation of a Leggett–Garg inequality with ideal non-invasive measurements
Knee, George C.; Simmons, Stephanie; Gauger, Erik M.; Morton, John J.L.; Riemann, Helge; Abrosimov, Nikolai V.; Becker, Peter; Pohl, Hans-Joachim; Itoh, Kohei M.; Thewalt, Mike L.W.; Briggs, G. Andrew D.; Benjamin, Simon C.
2012-01-01
The quantum superposition principle states that an entity can exist in two different states simultaneously, counter to our 'classical' intuition. Is it possible to understand a given system's behaviour without such a concept? A test designed by Leggett and Garg can rule out this possibility. The test, originally intended for macroscopic objects, has been implemented in various systems. However to date no experiment has employed the 'ideal negative result' measurements that are required for the most robust test. Here we introduce a general protocol for these special measurements using an ancillary system, which acts as a local measuring device but which need not be perfectly prepared. We report an experimental realization using spin-bearing phosphorus impurities in silicon. The results demonstrate the necessity of a non-classical picture for this class of microscopic system. Our procedure can be applied to systems of any size, whether individually controlled or in a spatial ensemble. PMID:22215081
Effects of tunnelling and asymmetry for system-bath models of electron transfer
NASA Astrophysics Data System (ADS)
Mattiat, Johann; Richardson, Jeremy O.
2018-03-01
We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.
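For orientation, the following Python sketch evaluates the standard (symmetric) Marcus golden-rule rate constant, the baseline that the asymmetric and instanton corrections discussed above improve upon; the coupling, reorganization energy, and driving force are illustrative values, not the paper's.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def marcus_rate(V, lam, dG, T):
    """Standard Marcus golden-rule rate constant (s^-1).

    V   : electronic coupling (J)
    lam : reorganization energy (J)
    dG  : reaction free energy (J), negative for exergonic transfer
    T   : temperature (K)
    """
    prefactor = (2.0 * np.pi / HBAR) * V**2 / np.sqrt(4.0 * np.pi * lam * KB * T)
    return prefactor * np.exp(-(dG + lam) ** 2 / (4.0 * lam * KB * T))

eV = 1.602176634e-19
print(f"{marcus_rate(V=5e-3 * eV, lam=1.0 * eV, dG=-0.3 * eV, T=300.0):.3e} s^-1")
```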
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vaníček, Jiří; Heller, Eric J.
2003-11-01
We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but it also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.
Quantum Locality in Game Strategy
NASA Astrophysics Data System (ADS)
Melo-Luna, Carlos A.; Susa, Cristian E.; Ducuara, Andrés F.; Barreiro, Astrid; Reina, John H.
2017-03-01
Game theory is a well-established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non-zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players’ input states. This highlights the remarkable key role played by pure quantum coherence in powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage.
Nev, Olga A; van den Berg, Hugo A
2017-01-01
Variable-Internal-Stores models of microbial metabolism and growth have proven to be invaluable in accounting for changes in cellular composition as microbial cells adapt to varying conditions of nutrient availability. Here, such a model is extended with explicit allocation of molecular building blocks among various types of catalytic machinery. Such an extension allows a reconstruction of the regulatory rules employed by the cell as it adapts its physiology to changing environmental conditions. Moreover, the extension proposed here creates a link between classic models of microbial growth and analyses based on detailed transcriptomics and proteomics data sets. We ascertain the compatibility between the extended Variable-Internal-Stores model and the classic models, demonstrate its behaviour by means of simulations, and provide a detailed treatment of the uniqueness and the stability of its equilibrium point as a function of the availabilities of the various nutrients.
Quantum metabolism explains the allometric scaling of metabolic rates.
Demetrius, Lloyd; Tuszynski, J A
2010-03-06
A general model explaining the origin of allometric laws of physiology is proposed based on coupled energy-transducing oscillator networks embedded in a physical d-dimensional space (d = 1, 2, 3). This approach integrates Mitchell's theory of chemi-osmosis with the Debye model of the thermal properties of solids. We derive a scaling rule that relates the energy generated by redox reactions in cells, the dimensionality of the physical space and the mean cycle time. Two major regimes are found corresponding to classical and quantum behaviour. The classical behaviour leads to allometric isometry while the quantum regime leads to scaling laws relating metabolic rate and body size that cover a broad range of exponents that depend on dimensionality and specific parameter values. The regimes are consistent with a range of behaviours encountered in micelles, plants and animals and provide a conceptual framework for a theory of the metabolic function of living systems.
NASA Astrophysics Data System (ADS)
Argurio, Riccardo
1998-07-01
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter gives a self-contained treatment of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters cover more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
Quantum Locality in Game Strategy
Melo-Luna, Carlos A.; Susa, Cristian E.; Ducuara, Andrés F.; Barreiro, Astrid; Reina, John H.
2017-01-01
Game theory is a well-established branch of mathematics whose formalism has a vast range of applications from the social sciences, biology, to economics. Motivated by quantum information science, there has been a leap in the formulation of novel game strategies that lead to new (quantum Nash) equilibrium points whereby players in some classical games are always outperformed if sharing and processing joint information ruled by the laws of quantum physics is allowed. We show that, for a bipartite non-zero-sum game, input local quantum correlations, and separable states in particular, suffice to achieve an advantage over any strategy that uses classical resources, thus dispensing with quantum nonlocality, entanglement, or even discord between the players’ input states. This highlights the remarkable key role played by pure quantum coherence in powering some protocols. Finally, we propose an experiment that uses separable states and basic photon interferometry to demonstrate the locally-correlated quantum advantage. PMID:28327567
The classical and quantum dynamics of molecular spins on graphene.
Cervetti, Christian; Rettori, Angelo; Pini, Maria Gloria; Cornia, Andrea; Repollés, Ana; Luis, Fernando; Dressel, Martin; Rauschenbach, Stephan; Kern, Klaus; Burghard, Marko; Bogani, Lapo
2016-02-01
Controlling the dynamics of spins on surfaces is pivotal to the design of spintronic and quantum computing devices. Proposed schemes involve the interaction of spins with graphene to enable surface-state spintronics and electrical spin manipulation. However, the influence of the graphene environment on the spin systems has yet to be unravelled. Here we explore the spin-graphene interaction by studying the classical and quantum dynamics of molecular magnets on graphene. Whereas the static spin response remains unaltered, the quantum spin dynamics and associated selection rules are profoundly modulated. The couplings to graphene phonons, to other spins, and to Dirac fermions are quantified using a newly developed model. Coupling to Dirac electrons introduces a dominant quantum relaxation channel that, by driving the spins over Villain's threshold, gives rise to fully coherent, resonant spin tunnelling. Our findings provide fundamental insight into the interaction between spins and graphene, establishing the basis for electrical spin manipulation in graphene nanodevices.
The classical and quantum dynamics of molecular spins on graphene
NASA Astrophysics Data System (ADS)
Cervetti, Christian; Rettori, Angelo; Pini, Maria Gloria; Cornia, Andrea; Repollés, Ana; Luis, Fernando; Dressel, Martin; Rauschenbach, Stephan; Kern, Klaus; Burghard, Marko; Bogani, Lapo
2016-02-01
Controlling the dynamics of spins on surfaces is pivotal to the design of spintronic and quantum computing devices. Proposed schemes involve the interaction of spins with graphene to enable surface-state spintronics and electrical spin manipulation. However, the influence of the graphene environment on the spin systems has yet to be unravelled. Here we explore the spin-graphene interaction by studying the classical and quantum dynamics of molecular magnets on graphene. Whereas the static spin response remains unaltered, the quantum spin dynamics and associated selection rules are profoundly modulated. The couplings to graphene phonons, to other spins, and to Dirac fermions are quantified using a newly developed model. Coupling to Dirac electrons introduces a dominant quantum relaxation channel that, by driving the spins over Villain’s threshold, gives rise to fully coherent, resonant spin tunnelling. Our findings provide fundamental insight into the interaction between spins and graphene, establishing the basis for electrical spin manipulation in graphene nanodevices.
Denlinger, Kendra Leahy; Ortiz-Trankina, Lianna; Carr, Preston; Benson, Kingsley; Waddell, Daniel C; Mack, James
2018-01-01
Mechanochemistry is maturing as a discipline and continuing to grow, so it is important to continue understanding the rules governing the system. In a mechanochemical reaction, the reactants are added into a vessel along with one or more grinding balls and the vessel is shaken at high speeds to facilitate a chemical reaction. The dielectric constant of the solvent used in liquid-assisted grinding (LAG) and properly chosen counter-ion pairing increase the percentage conversion of stilbenes in a mechanochemical Wittig reaction. Utilizing stepwise addition/evaporation of ethanol in liquid-assisted grinding also allows for the tuning of the diastereoselectivity in the Wittig reaction.
An architecture for designing fuzzy logic controllers using neural networks
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.
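A minimal sketch of the kind of temporal-difference update referred to above is given below in Python; it shows a plain tabular TD(0) value update rather than the neural-network fuzzy-controller architecture itself, and the states and rewards are invented.

```python
def td0_update(value, state, next_state, reward, alpha=0.1, gamma=0.95):
    """One TD(0) update of a tabular state-value estimate.

    value : dict mapping state -> estimated return
    """
    td_error = reward + gamma * value.get(next_state, 0.0) - value.get(state, 0.0)
    value[state] = value.get(state, 0.0) + alpha * td_error
    return td_error

# Toy trajectory: the pole stays balanced (reward 0) until it falls (reward -1).
V = {}
trajectory = [("s0", "s1", 0.0), ("s1", "s2", 0.0), ("s2", "fail", -1.0)]
for s, s_next, r in trajectory:
    td0_update(V, s, s_next, r)
print(V)
```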
Local operators in kinetic wealth distribution
NASA Astrophysics Data System (ADS)
Andrecut, M.
2016-05-01
The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail, a heterogeneous risk aversion model is required. By changing the parameters of these operators, one can also fine-tune the resulting distributions in order to provide support for the emergence of a more egalitarian wealth distribution.
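The flavor of such conservative exchange rules with heterogeneous risk aversion can be conveyed by a short simulation; the sketch below follows the commonly used heterogeneous-saving kinetic exchange scheme and is not the specific operator class analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, STEPS = 1000, 200_000
money = np.ones(N)                    # everyone starts with one unit of money
saving = rng.uniform(0.0, 1.0, N)     # heterogeneous saving propensity (risk aversion)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()
    pool = (1.0 - saving[i]) * money[i] + (1.0 - saving[j]) * money[j]
    money[i] = saving[i] * money[i] + eps * pool
    money[j] = saving[j] * money[j] + (1.0 - eps) * pool   # total money is conserved

print(money.mean(), money.max())   # heterogeneous saving yields a fat upper tail
```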
On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
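For reference, a bare tabular Q-learning update of the kind cited above is sketched below in Python; it is not the ARIC architecture, and the states, actions, and rewards are hypothetical.

```python
def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.95):
    """One tabular Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    target = reward + gamma * best_next
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (target - old)

Q = {}
q_learning_update(Q, state="upright", action="push_left", reward=0.0,
                  next_state="tilting", actions=["push_left", "push_right"])
print(Q)
```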
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vanicek, Jiri
2004-03-01
We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but it gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte-Carlo evaluation, this uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the "building blocks" of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a "defense" of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.
Elementary test for nonclassicality based on measurements of position and momentum
NASA Astrophysics Data System (ADS)
Fresta, Luca; Borregaard, Johannes; Sørensen, Anders S.
2015-12-01
We generalize a nonclassicality test described by Kot et al. [Phys. Rev. Lett. 108, 233601 (2012), 10.1103/PhysRevLett.108.233601], which can be used to rule out any classical description of a physical system. The test is based on measurements of quadrature operators and works by proving a contradiction with the classical description in terms of a probability distribution in phase space. As opposed to the previous work, we generalize the test to include states without rotational symmetry in phase space. Furthermore, we compare the performance of the nonclassicality test with classical tomography methods based on the inverse Radon transform, which can also be used to establish the quantum nature of a physical system. In particular, we consider a nonclassicality test based on the so-called filtered back-projection formula. We show that the general nonclassicality test is conceptually simpler, requires less assumptions on the system, and is statistically more reliable than the tests based on the filtered back-projection formula. As a specific example, we derive the optimal test for quadrature squeezed single-photon states and show that the efficiency of the test does not change with the degree of squeezing.
NASA Astrophysics Data System (ADS)
Blue, C. R.; Giuffre, A.; Mergelsberg, S.; Han, N.; De Yoreo, J. J.; Dove, P. M.
2017-01-01
Calcite and other crystalline polymorphs of CaCO₃ can form by pathways involving amorphous calcium carbonate (ACC). Apparent inconsistencies in the literature indicate the relationships between ACC composition, local conditions, and the subsequent crystalline polymorphs are not yet established. This experimental study quantifies the control of solution composition on the transformation of ACC into crystalline polymorphs in the presence of magnesium. Using a mixed flow reactor to control solution chemistry, ACC was synthesized with variable Mg contents by tuning input pH, Mg/Ca, and total carbonate concentration. ACC products were allowed to transform within the output suspension under stirred or quiescent conditions while characterizing the evolving solutions and solids. As the ACC transforms into a crystalline phase, the solutions record a polymorph-specific evolution of pH and Mg/Ca. The data provide a quantitative framework for predicting the initial polymorph that forms from ACC based upon the solution activity ratios a(Mg²⁺)/a(Ca²⁺) and a(CO₃²⁻)/a(Ca²⁺) and stirring versus quiescent conditions. This model reconciles discrepancies among previous studies that report on the nature of the polymorphs produced from ACC and supports the previous claim that monohydrocalcite may be an important, but overlooked, transient phase on the way to forming some aragonite and calcite deposits. By this construct, organic additives and extreme pH are not required to tune the composition and nature of the polymorph that forms. Our measurements show that the Mg content of ACC is recorded in the resulting calcite with a ≈1:1 dependence. By correlating composition of these calcite products with the Mg_tot/Ca_tot of the initial solutions, we find a ≈3:1 dependence that is approximately linear and general to whether calcite is formed via an ACC pathway or by the classical step-propagation process. Comparisons to calcite grown in synthetic seawater show a ≈1:1 dependence. The relationships suggest that the local Mg²⁺/Ca²⁺ at the time of precipitation determines the calcite composition, independent of whether growth occurs via an amorphous intermediate or classical pathway for a range of supersaturations and pH conditions. The findings reiterate the need to revisit the traditional picture of chemical and physical controls on CaCO₃ polymorph selection. Mineralization by pathways involving ACC can lead to the formation of crystalline phases whose polymorphs and compositions are out of equilibrium with local growth media. As such, classical thermodynamic equilibria may not provide a reliable predictor of observed compositions.
Dixon, Matthew L.; Christoff, Kalina
2012-01-01
Cognitive control is a fundamental skill reflecting the active use of task-rules to guide behavior and suppress inappropriate automatic responses. Prior work has traditionally used paradigms in which subjects are told when to engage cognitive control. Thus, surprisingly little is known about the factors that influence individuals' initial decision of whether or not to act in a reflective, rule-based manner. To examine this, we took three classic cognitive control tasks (Stroop, Wisconsin Card Sorting Task, Go/No-Go task) and created novel ‘free-choice’ versions in which human subjects were free to select an automatic, pre-potent action, or an action requiring rule-based cognitive control, and earned varying amounts of money based on their choices. Our findings demonstrated that subjects' decision to engage cognitive control was driven by an explicit representation of monetary rewards expected to be obtained from rule-use. Subjects rarely engaged cognitive control when the expected outcome was of equal or lesser value as compared to the value of the automatic response, but frequently engaged cognitive control when it was expected to yield a larger monetary outcome. Additionally, we exploited fMRI-adaptation to show that the lateral prefrontal cortex (LPFC) represents associations between rules and expected reward outcomes. Together, these findings suggest that individuals are more likely to act in a reflective, rule-based manner when they expect that it will result in a desired outcome. Thus, choosing to exert cognitive control is not simply a matter of reason and willpower, but rather, conforms to standard mechanisms of value-based decision making. Finally, in contrast to current models of LPFC function, our results suggest that the LPFC plays a direct role in representing motivational incentives. PMID:23284730
Astronomical dating in the 19th century
NASA Astrophysics Data System (ADS)
Hilgen, Frederik J.
2010-01-01
Today astronomical tuning is widely accepted as a numerical dating method after having revolutionised the age calibration of the geological archive and time scale over the last decades. However, its origin is not well known and tracing its roots is important especially from a science historic perspective. Astronomical tuning developed in consequence of the astronomical theory of the ice ages and was repeatedly used in the second half of the 19th century before the invention of radio-isotopic dating. Building upon earlier ideas of Joseph Adhémar, James Croll started to formulate his astronomical theory of the ice ages in 1864, according to which precession-controlled ice ages occur alternatingly on both hemispheres at times of maximum eccentricity of the Earth's orbit. The publication of these ideas compelled Charles Lyell to revise his Principles of Geology and add Croll's theory, thus providing an alternative to his own geographical cause of the ice ages. Both Croll and Lyell initially tuned the last glacial epoch to the prominent eccentricity maximum 850,000 yr ago. This age was used as a starting point by Lyell to calculate an age of 240 million years for the beginning of the Cambrian. But Croll soon revised the tuning to a much younger, less prominent eccentricity maximum between 240,000 and 80,000 yr ago. In addition he tuned older glacial deposits of late Miocene and Eocene ages to eccentricity maxima around 800,000 and 2,800,000 yr ago. Archibald and James Geikie were the first to recognize interglacials during the last glacial epoch, as predicted by Croll's theory, and attempted to tune them to precession. Soon after, Frank Taylor linked a series of 15 end-moraines left behind by the retreating ice sheet to precession to arrive at a possible age of 300,000 yr for the maximum glaciation. In a classic paper, Axel Blytt (1876) attributed the scattered distribution of plant groups in Norway to precession-induced alternating rainy and dry periods as recorded by the layering in Holocene peat bogs. He specifically linked the exceptionally wet Atlantic period to the prolonged precession minimum at 33,300 yr ago and further related basic stratigraphic alternations to precession-induced climate change in general. Such a linkage was also proposed by Grove Karl Gilbert for cyclic alternations in the marine Cretaceous of North America. Extrapolating sedimentation rates, he arrived at an astronomical duration for part of the Cretaceous that was roughly as long as the final estimate of William Thomson for the age of the Earth. Assuming that orbital parameters directly affect sea level, Karl Mayer-Eymar and Blytt correlated the well-known succession of Tertiary stages to precession and eccentricity, respectively. Remarkably, Blytt, like Croll before him, used very long-period cycles in eccentricity to establish and validate his tuning. Understandably these studies in the second half of the 19th century were largely deductive in nature and proved partly incorrect later. Nevertheless, this fascinating period marks a crucial phase in the development of the astronomical theory of the ice ages and climate, and in astronomical dating. It preceded the final inductive phase, which started with the recovery of deep-sea cores in 1947 and led to a spectacular revival of the astronomical theory, by a century. The first half of the 20th century can best be regarded as an intermediate phase, despite the significant progress made in both theoretical aspects and tuning.
Improved Method of Manufacturing SiC Devices
NASA Technical Reports Server (NTRS)
Okojie, Robert S.
2005-01-01
The phrase, "common-layered architecture for semiconductor silicon carbide" ("CLASSiC") denotes a method of batch fabrication of microelectromechanical and semiconductor devices from bulk silicon carbide. CLASSiC is the latest in a series of related methods developed in recent years in continuing efforts to standardize SiC-fabrication processes. CLASSiC encompasses both institutional and technological innovations that can be exploited separately or in combination to make the manufacture of SiC devices more economical. Examples of such devices are piezoresistive pressure sensors, strain gauges, vibration sensors, and turbulence-intensity sensors for use in harsh environments (e.g., high-temperature, high-pressure, corrosive atmospheres). The institutional innovation is to manufacture devices for different customers (individuals, companies, and/or other entities) simultaneously in the same batch. This innovation is based on utilization of the capability for fabrication, on the same substrate, of multiple SiC devices having different functionalities (see figure). Multiple customers can purchase shares of the area on the same substrate, each customer s share being apportioned according to the customer s production-volume requirement. This makes it possible for multiple customers to share costs in a common foundry, so that the capital equipment cost per customer in the inherently low-volume SiC-product market can be reduced significantly. One of the technological innovations is a five-mask process that is based on an established set of process design rules. The rules provide for standardization of the fabrication process, yet are flexible enough to enable multiple customers to lay out masks for their portions of the SiC substrate to provide for simultaneous batch fabrication of their various devices. In a related prior method, denoted multi-user fabrication in silicon carbide (MUSiC), the fabrication process is based largely on surface micromachining of poly SiC. However, in MUSiC one cannot exploit the superior sensing, thermomechanical, and electrical properties of single-crystal 6H-SiC or 4H-SiC. As a complement to MUSiC, the CLASSiC five-mask process can be utilized to fabricate multiple devices in bulk single-crystal SiC of any polytype. The five-mask process makes fabrication less complex because it eliminates the need for large-area deposition and removal of sacrificial material. Other innovations in CLASSiC pertain to selective etching of indium tin oxide and aluminum in connection with multilayer metallization. One major characteristic of bulk micromachined microelectromechanical devices is the presence of three-dimensional (3D) structures. Any 3D recesses that already exist at a given step in a fabrication process usually make it difficult to apply a planar coat of photoresist for metallization and other subsequent process steps. To overcome this difficulty, the CLASSiC process includes a reversal of part of the conventional flow: Metallization is performed before the recesses are etched.
Modeling of autocatalytic hydrolysis of adefovir dipivoxil in solid formulations.
Dong, Ying; Zhang, Yan; Xiang, Bingren; Deng, Haishan; Wu, Jingfang
2011-04-01
The stability and hydrolysis kinetics of a phosphate prodrug, adefovir dipivoxil, in solid formulations were studied. The stability relationship between five solid formulations was explored. An autocatalytic mechanism for hydrolysis could be proposed according to the kinetic behavior, which fits the Prout-Tompkins model well. Because the classical kinetic models could hardly describe and predict the hydrolysis kinetics of adefovir dipivoxil in solid formulations accurately at high temperature, a feedforward multilayer perceptron (MLP) neural network was constructed to model the hydrolysis kinetics. The built-in approaches in Weka, such as lazy classifiers and rule-based learners (IBk, KStar, DecisionTable and M5Rules), were used to verify the performance of the MLP. The predictability of the models was evaluated by 10-fold cross-validation and an external test set. The results reveal that the MLP should be of general applicability, offering an alternative, efficient way to model and predict autocatalytic hydrolysis kinetics for phosphate prodrugs.
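Since the Prout-Tompkins (autocatalytic) model is central here, the sketch below fits its linearized form ln(α/(1−α)) = kt + c to synthetic conversion data in Python; the rate constant and data are invented for illustration.

```python
import numpy as np

def prout_tompkins_fit(t, alpha):
    """Fit ln(alpha/(1-alpha)) = k*t + c by linear regression and return (k, c)."""
    y = np.log(alpha / (1.0 - alpha))
    k, c = np.polyfit(t, y, 1)
    return k, c

# Synthetic fraction-hydrolysed data following sigmoidal (autocatalytic) kinetics.
t = np.linspace(1.0, 30.0, 30)                      # days (hypothetical)
k_true, c_true = 0.25, -4.0
alpha = 1.0 / (1.0 + np.exp(-(k_true * t + c_true)))
k_fit, c_fit = prout_tompkins_fit(t, alpha)
print(f"k = {k_fit:.3f} per day, c = {c_fit:.3f}")
```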
Mathematical interpretation of Brownian motor model: Limit cycles and directed transport phenomena
NASA Astrophysics Data System (ADS)
Yang, Jianqiang; Ma, Hong; Zhong, Suchuang
2018-03-01
In this article, we first suggest that the attractor of the Brownian motor model is one of the reasons for the directed transport phenomenon of a Brownian particle. We take the classical Smoluchowski-Feynman (SF) ratchet model as an example to investigate the relationship between limit cycles and the directed transport phenomenon of the Brownian particle. We study the existence and variation rule of the limit cycles of the SF ratchet model under changing parameters through mathematical methods. The influences of these parameters on the directed transport phenomenon of a Brownian particle are then analyzed through numerical simulations. Reasonable mathematical explanations for the directed transport phenomenon of the Brownian particle in the SF ratchet model are also formulated on the basis of the existence and variation rule of the limit cycles and the numerical simulations. These mathematical explanations provide a theoretical basis for applying these theories in physics, biology, chemistry, and engineering.
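As a generic illustration of the kind of numerical simulation used to probe directed transport in ratchet models, the Python sketch below integrates an overdamped Langevin equation in a tilted asymmetric periodic potential; the potential shape and parameters are assumptions, not the authors' model settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def force(x, V0=1.0, L=1.0, F=0.1):
    """Minus the gradient of an asymmetric ratchet potential V0*[sin(2*pi*x/L) + 0.25*sin(4*pi*x/L)],
    plus a constant tilt F."""
    return -V0 * (2*np.pi/L) * (np.cos(2*np.pi*x/L) + 0.5*np.cos(4*np.pi*x/L)) + F

def simulate(x0=0.0, dt=1e-3, steps=200_000, gamma=1.0, kT=0.5):
    """Euler-Maruyama integration of the overdamped Langevin equation."""
    x = x0
    noise = np.sqrt(2.0 * kT * dt / gamma)
    for _ in range(steps):
        x += force(x) * dt / gamma + noise * rng.standard_normal()
    return x

print("net displacement:", simulate())   # a nonzero drift signals directed transport
```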
Preference Mining Using Neighborhood Rough Set Model on Two Universes.
Zeng, Kai
2016-01-01
Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not available for the cold-start problem when the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.
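The neighborhood lower approximation underlying the preference rules can be illustrated with a few lines of Python; the sketch below uses a Euclidean δ-neighborhood on toy user-item feature vectors, which are invented, and is not the full NRSTU model on two universes.

```python
import numpy as np

def neighborhood_lower_approximation(X, labels, target, delta=0.3):
    """Return indices of objects whose delta-neighborhood (Euclidean) lies entirely
    inside the target decision class."""
    X = np.asarray(X, dtype=float)
    lower = []
    for i, x in enumerate(X):
        dist = np.linalg.norm(X - x, axis=1)
        neighborhood = np.where(dist <= delta)[0]
        if all(labels[j] == target for j in neighborhood):
            lower.append(i)
    return lower

# Toy normalized feature vectors and preference labels (hypothetical).
X = [[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.82, 0.88], [0.5, 0.5]]
labels = ["like", "like", "dislike", "dislike", "like"]
print(neighborhood_lower_approximation(X, labels, target="like"))
```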
[The scope of forensic psychiatry: ethical responsibilities and conflicts of values].
Arboleda-Flórez, Julio; Weisstub, David N
2006-01-01
Writing about ethics in specialties that straddle the lines of multiple systems cannot be done without discussing the values and decisional rules that underlie each one of those systems. By virtue of its multiple associations, forensic psychiatry is an archetype of such specialties; it works within a set of values that might be viewed as antithetical to, even irreconcilable with, other aspects of psychiatry. The extensive scope of action of forensic psychiatry compels its practitioners to hold alternate world views and to apply decisional rules that may clash with the classical values and ethical considerations of medicine (Weisstub, 1980). In this article, following an historical précis, the authors review the scope of action of forensic psychiatry as the basis for the definition of this subspecialty. The concepts, themes and controversies pertaining to the ethical practice of this specialty will be reflected upon in the light of issues encountered in actual practice.
Reevaluating Muscle Biopsies in the Diagnosis of Pompe Disease: A Corroborative Report.
Genge, Angela; Campbell, Natasha
2016-07-01
Previous reports suggest that although a diagnostic muscle biopsy can confirm the presence of Pompe disease, the absence of a definitive biopsy result does not rule out the diagnosis. In this study, we reviewed patients with a limb-girdle syndrome who demonstrated nonspecific abnormalities of muscle, without evidence of the classical changes of acid maltase deficiency. These patients were rescreened for Pompe disease using dried blood spot (DBS) testing. Twenty-seven patients provided blood samples for the DBS test. Four patients underwent subsequent genetic testing. Genetic analysis demonstrated that one patient tested positive for Pompe disease and one patient had one copy of a pathogenic variant. In conclusion, the ability of a diagnostic muscle biopsy to definitively rule out the presence of Pompe disease is limited. There is a role for a screening DBS in all patients presenting with a limb-girdle syndrome without a clear diagnosis.
Newsvendor problem under complete uncertainty: a case of innovative products.
Gaspars-Wieloch, Helena
2017-01-01
The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of the Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
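To make the two ingredient rules concrete, the sketch below applies the classical Hurwicz and Bayes (Laplace) criteria to a toy newsvendor payoff matrix in Python; the hybrid rule proposed in the paper is not reproduced, and the prices, costs, and demand scenarios are hypothetical.

```python
import numpy as np

def newsvendor_payoffs(orders, demands, price=10.0, cost=6.0):
    """Payoff matrix P[i, j] for order quantity orders[i] under demand scenario demands[j]."""
    q = np.asarray(orders)[:, None]
    d = np.asarray(demands)[None, :]
    return price * np.minimum(q, d) - cost * q

def hurwicz(payoffs, optimism=0.4):
    """Weighted mix of the best and worst payoff for each order quantity."""
    return optimism * payoffs.max(axis=1) + (1 - optimism) * payoffs.min(axis=1)

def bayes_laplace(payoffs):
    """Average payoff with equal scenario weights (no known probabilities)."""
    return payoffs.mean(axis=1)

orders = [50, 100, 150]
demands = [40, 80, 120, 160]
P = newsvendor_payoffs(orders, demands)
print("Hurwicz scores:      ", hurwicz(P))
print("Bayes-Laplace scores:", bayes_laplace(P))
```

The order quantity with the highest score under the chosen criterion would be selected; the paper's contribution is to combine both viewpoints together with the dispersion of the payoffs.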
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niquet, Yann-Michel, E-mail: yniquet@cea.fr; Nguyen, Viet-Hung; Duchemin, Ivan
2014-02-07
We discuss carrier mobilities in the quantum Non-Equilibrium Green's Functions (NEGF) framework. We introduce a method for the extraction of the mobility that is free from contact resistance contamination and with minimal needs for ensemble averages. We focus on silicon thin films as an illustration, although the method can be applied to various materials such as semiconductor nanowires or carbon nanostructures. We then introduce a new paradigm for the definition of the partial mobility μ_M associated with a given elastic scattering mechanism “M,” taking phonons (PH) as a reference (μ_M⁻¹ = μ_(PH+M)⁻¹ − μ_PH⁻¹). We argue that this definition makes better sense in a quantum transport framework as it is free from long range interference effects that can appear in purely ballistic calculations. As a matter of fact, these mobilities satisfy Matthiessen's rule for three mechanisms [e.g., surface roughness (SR), remote Coulomb scattering (RCS) and phonons] much better than the usual, single mechanism calculations. We also discuss the problems raised by the long range spatial correlations in the RCS disorder. Finally, we compare semi-classical Kubo-Greenwood (KG) and quantum NEGF calculations. We show that KG and NEGF are in reasonable agreement for phonon and RCS, yet not for SR. We discuss the reasons for these discrepancies.
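The partial-mobility definition quoted above reduces to a one-line computation; the Python sketch below applies it to illustrative mobility values that are not taken from the paper.

```python
def partial_mobility(mu_ph_plus_m, mu_ph):
    """Partial mobility for mechanism M, defined via 1/mu_M = 1/mu_{PH+M} - 1/mu_PH."""
    return 1.0 / (1.0 / mu_ph_plus_m - 1.0 / mu_ph)

# Illustrative numbers (cm^2/V/s), not taken from the paper.
print(partial_mobility(mu_ph_plus_m=150.0, mu_ph=400.0))   # -> 240.0 for mechanism M
```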
Exploring Attitudes of Indian Classical Singers Toward Seeking Vocal Health Care.
Gunjawate, Dhanshree R; Aithal, Venkataraja U; Guddattu, Vasudeva; Kishore, Amrutha; Bellur, Rajashekhar
2016-11-01
The attitude of Indian classical singers toward seeking vocal health care is a dimension yet to be explored. The current study was aimed to determine the attitudes of these singers toward seeking vocal health care and further understand the influence of age and gender. Cross-sectional. A 10-item self-report questionnaire adapted from a study on contemporary commercial music singers was used. An additional question was added to ask if the singer was aware of the profession and role of speech-language pathologists (SLPs). The questionnaire was administered to 55 randomly selected self-identified trained Indian classical singers who rated the items using a five-point Likert scale. Demographic variables were summarized using descriptive statistics and a t test was used to compare the mean scores between genders and age groups. Of the singers, 78.2% were likely to see a doctor for health-related problems, whereas 81.8% were unlikely to seek medical care for voice-related problems; the difference was statistically significant (P < 0.001). Responses for the questions assessing the attitudes toward findings from medical examination by a specialist revealed a statistically significant difference (P = 0.02) between the genders. Age did not have a significant influence on the responses. Only 23.6% of the respondents were aware of the profession and the role of SLPs. The findings are in tune with western literature reporting hesitation of singers toward seeking vocal health care and draw the attention of SLPs to the need to promote their role in vocal health awareness and management. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Electroweak vacuum stability in classically conformal B - L extension of the standard model
Das, Arindam; Okada, Nobuchika; Papapietro, Nathan
2017-02-23
Here, we consider the minimal U(1) B - L extension of the standard model (SM) with the classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the mass for the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem in the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level, and perform parameter scans. We also identify a parameter region which not only solves the vacuum instability problem, but also satisfies the recent ATLAS and CMS bounds from the search for a Z' boson resonance at the LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive the naturalness bound on the model parameters to realize the electroweak scale without fine-tunings.
Electroweak vacuum stability in classically conformal B - L extension of the standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Arindam; Okada, Nobuchika; Papapietro, Nathan
Here, we consider the minimal U(1) B - L extension of the standard model (SM) with the classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the mass for the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem in the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level, and perform parameter scans. We also identify a parameter region which not only solves the vacuum instability problem, but also satisfies the recent ATLAS and CMS bounds from the search for a Z' boson resonance at the LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive the naturalness bound on the model parameters to realize the electroweak scale without fine-tunings.
Böhm, Stanislav; Exner, Otto
2004-02-01
The geometrical parameters of molecules of 2-substituted 2-methylpropanes and 1-substituted bicyclo[2.2.2]octanes were calculated at the B3LYP/6-311+G(d,p) level. They agreed reasonably well with the mean crystallographic values retrieved from the Cambridge Structural Database for a set of diverse non-cyclic structures with a tertiary C atom. The angle deformations at this C atom produced by the immediately bonded substituent are also closely related to those observed previously in benzene mono derivatives (either as calculated or as derived from crystallographic data). The calculated geometrical parameters were used to test the classical Walsh rule: It is evidently true that an electron-attracting substituent increases the proportion of C-atom p-electrons in the bond to the substituent and leaves more s-electrons to the remaining bonds; as a consequence the C-C-C angles at a tertiary carbon are widened and the C-C bonds shortened. However, this rule describes only part of the reality since the bond angles and lengths are controlled by other factors as well, for instance by steric crowding. Another imperfection of the Walsh rule is that the sequence of substituents does not correspond to their electronegativities, as measured by any known scale; more probably it is connected with the inductive effect, but then only very roughly.
Toward an Application Guide for Safety Integrity Level Allocation in Railway Systems.
Ouedraogo, Kiswendsida Abel; Beugin, Julie; El-Koursi, El-Miloudi; Clarhaut, Joffrey; Renaux, Dominique; Lisiecki, Frederic
2018-02-02
The work in the article presents the development of an application guide based on feedback and comments stemming from various railway actors on their practices of SIL allocation to railway safety-related functions. The initial generic methodology for SIL allocation has been updated to be applied to railway rolling stock safety-related functions in order to solve the SIL concept application issues. Various actors dealing with railway SIL allocation problems are the intended target of the methodology; its principles will be summarized in this article with a focus on the modifications and clarifications made in order to establish a practical guide for railway safety authorities. The methodology is based on the flowchart formalism used in the CSM (common safety method) European regulation. It starts with the use of quantitative safety requirements, particularly tolerable hazard rates (THR). THR apportioning rules are applied. On the one hand, the rules are related to classical logical combinations of safety-related functions preventing hazard occurrence. On the other hand, to take into account technical conditions (last safety weak link, functional dependencies, technological complexity, etc.), specific rules implicitly used in existing practices are defined for readjusting some THR values. The SIL allocation process based on apportioned and validated THR values is finally illustrated through the example of "emergency brake" subsystems. Some specific SIL allocation rules are also defined and illustrated. © 2018 Society for Risk Analysis.
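As a hedged sketch of how THR apportionment and SIL banding are commonly handled (additive splitting for a logical OR combination and the usual EN 50129/IEC 61508 THR-to-SIL table), the Python fragment below illustrates the idea; the weights, budget, and helper names are assumptions, not the guide's actual rules.

```python
def apportion_thr_or(thr_hazard, weights):
    """Split a hazard-level THR additively among subfunctions whose individual
    failure each leads to the hazard (logical OR combination)."""
    total = sum(weights)
    return [thr_hazard * w / total for w in weights]

def sil_from_thr(thr):
    """Usual THR-to-SIL mapping (failures per hour), as in EN 50129 / IEC 61508."""
    if 1e-9 <= thr < 1e-8:
        return 4
    if 1e-8 <= thr < 1e-7:
        return 3
    if 1e-7 <= thr < 1e-6:
        return 2
    if 1e-6 <= thr < 1e-5:
        return 1
    return 0   # outside the SIL bands

# Hypothetical example: an emergency-brake hazard budget of 1e-8/h split over two subfunctions.
for thr in apportion_thr_or(1e-8, weights=[0.5, 0.5]):
    print(thr, "-> SIL", sil_from_thr(thr))
```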
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.
2007-04-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training and programming, among others) in the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for next-generation hybrid and neuro-fuzzy neurocomputers. We present the principles of construction of programmable relational optoelectronic time-pulse coded processors, as well as the continuous logic, order logic and temporal wave processes that underlie them. We consider a structure that extracts an analog signal of a given grade (order) and sorts analog and time-pulse coded variables. We offer an optoelectronic realization of such basic relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built from logic elements, and programmable commutation blocks. We estimate, by simulation and experimental research, the basic technical parameters of such base devices and of processors built on them: optical input signal power of 0.2-20 μW, processing time of microseconds, supply voltage of 1.5-10 V, power consumption of hundreds of microwatts per element, extended functional capabilities, and training capabilities. We discuss some aspects of possible rules and principles for training and programmable tuning to the required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show how, on the basis of such quasi-universal hardware blocks and flexible programmable tuning, it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands.
Influence of the electromagnetic parameters on the surface wave attenuation in thin absorbing layers
NASA Astrophysics Data System (ADS)
Li, Yinrui; Li, Dongmeng; Wang, Xian; Nie, Yan; Gong, Rongzhou
2018-05-01
This paper describes the relationships between the surface wave attenuation properties and the electromagnetic parameters of radar absorbing materials (RAMs). In order to conveniently obtain the attenuation constant of TM surface waves over a wide frequency range, the simplified dispersion equations in thin absorbing materials were first deduced. The validity of the proposed method was proved by comparison with the classical dispersion equations. Subsequently, the attenuation constants were calculated separately for absorbing layers with hypothetical relative permittivity and permeability. It is found that the surface wave attenuation properties can be strongly tuned by the permeability of the RAM. Meanwhile, the permittivity should be appropriate so as to maintain a high cutoff frequency. The present work provides specific methods and designs to improve the attenuation performances of radar absorbing materials.
High-speed switching of biphoton delays through electro-optic pump frequency modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odele, Ogaga D.; Lukens, Joseph M.; Jaramillo-Villegas, Jose A.
The realization of high-speed tunable delay control has received significant attention in the scene of classical photonics. In quantum optics, however, such rapid delay control systems for entangled photons have remained undeveloped. Here for the first time, we demonstrate rapid (2.5 MHz) modulation of signal-idler arrival times through electro-optic pump frequency modulation. Our technique applies the quantum phenomenon of nonlocal dispersion cancellation along with pump frequency tuning to control the relative delay between photon pairs. Chirped fiber Bragg gratings are employed to provide large amounts of dispersion which result in biphoton delays exceeding 30 ns. This rapid delay modulation scheme could be useful for on-demand single-photon distribution in addition to quantum versions of pulse position modulation.
High-speed switching of biphoton delays through electro-optic pump frequency modulation
Odele, Ogaga D.; Lukens, Joseph M.; Jaramillo-Villegas, Jose A.; ...
2016-12-08
The realization of high-speed tunable delay control has received significant attention in the scene of classical photonics. In quantum optics, however, such rapid delay control systems for entangled photons have remained undeveloped. Here for the first time, we demonstrate rapid (2.5 MHz) modulation of signal-idler arrival times through electro-optic pump frequency modulation. Our technique applies the quantum phenomenon of nonlocal dispersion cancellation along with pump frequency tuning to control the relative delay between photon pairs. Chirped fiber Bragg gratings are employed to provide large amounts of dispersion which result in biphoton delays exceeding 30 ns. This rapid delay modulation scheme could be useful for on-demand single-photon distribution in addition to quantum versions of pulse position modulation.
Multi-dimensional roles of ketone bodies in fuel metabolism, signaling, and therapeutics
Puchalska, Patrycja; Crawford, Peter A.
2017-01-01
Ketone body metabolism is a central node in physiological homeostasis. In this review, we discuss how ketones serve discrete fine-tuning metabolic roles that optimize organ and organism performance in varying nutrient states, and protect from inflammation and injury in multiple organ systems. Although ketone bodies have traditionally been viewed as metabolic substrates enlisted only in carbohydrate restriction, recent observations underscore their importance as vital metabolic and signaling mediators when carbohydrates are abundant. Complementing a repertoire of known therapeutic options for diseases of the nervous system, prospective roles for ketone bodies in cancer have arisen, as have intriguing protective roles in heart and liver, opening therapeutic options in obesity-related and cardiovascular disease. Controversies in ketone metabolism and signaling are discussed to reconcile classical dogma with contemporary observations. PMID:28178565
Novel perspectives for the engineering of abiotic stress tolerance in plants.
Cabello, Julieta V; Lodeyro, Anabella F; Zurbriggen, Matias D
2014-04-01
Adverse environmental conditions pose serious limitations to agricultural production. Classical biotechnological approaches towards increasing abiotic stress tolerance focus on boosting plant endogenous defence mechanisms. However, overexpression of regulatory elements or effectors is usually accompanied by growth handicap and yield penalties due to crosstalk between developmental and stress-response networks. Herein we offer an overview on novel strategies with the potential to overcome these limitations based on the engineering of regulatory systems involved in the fine-tuning of the plant response to environmental hardships, including post-translational modifications, small RNAs, epigenetic control of gene expression and hormonal networks. The development and application of plant synthetic biology tools and approaches will add new functionalities and perspectives to genetic engineering programs for enhancing abiotic stress tolerance. Copyright © 2013 Elsevier Ltd. All rights reserved.
Apparatus for magnetic and electrostatic confinement of plasma
Rostoker, Norman; Binderbauer, Michl
2013-06-11
An apparatus and method for containing plasma and forming a Field Reversed Configuration (FRC) magnetic topology are described in which plasma ions are contained magnetically in stable, non-adiabatic orbits in the FRC. Further, the electrons are contained electrostatically in a deep energy well, created by tuning an externally applied magnetic field. The simultaneous electrostatic confinement of electrons and magnetic confinement of ions avoids anomalous transport and facilitates classical containment of both electrons and ions. In this configuration, ions and electrons may have adequate density and temperature so that upon collisions ions are fused together by nuclear force, thus releasing fusion energy. Moreover, the fusion fuel plasmas that can be used with the present confinement system and method are not limited to neutronic fuels only, but also advantageously include advanced fuels.
Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.
Steele, Katie; Werndl, Charlotte
2018-06-01
This article argues that two common intuitions are too crude: (a) that 'use-novel' data are specially suited to confirming a model, and (b) that this specialness implies the 'no-double-counting rule', according to which data used in 'constructing' (calibrating) a model cannot also play a role in confirming the model's predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
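Put schematically in Bayesian terms (notation ours; this is only a sketch of the familiar machinery, not the authors' formal treatment), calibration and confirmation draw on the same quantity:

```latex
% Schematic only. A model M with tunable parameter \theta is calibrated on data D.
\[
  P(\theta \mid D, M) \;=\; \frac{P(D \mid \theta, M)\, P(\theta \mid M)}{P(D \mid M)},
  \qquad
  P(D \mid M) \;=\; \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta ,
\]
\[
  P(M \mid D) \;=\; \frac{P(D \mid M)\, P(M)}{P(D)} .
\]
```

The marginal likelihood P(D|M) that normalizes the calibration posterior is the same quantity that updates P(M|D), so on the Bayesian account data used for calibration are not automatically barred from confirming the model, consistent with the claim above that intuitions (a) and (b) are not generally true.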
Effects of inspections in small world social networks with different contagion rules
NASA Astrophysics Data System (ADS)
Muñoz, Francisco; Nuño, Juan Carlos; Primicerio, Mario
2015-08-01
We study how the structure of social links determines the effects of random inspections on a population formed by two types of individuals, e.g. tax-payers and tax-evaders (free riders). It is assumed that inspections occur on a time scale much longer than the population relaxation time, so a single initial inspection is performed on a population composed entirely of tax-evaders; inspected tax-evaders become tax-payers forever. The social network is modeled as a Watts-Strogatz small world whose topology can be tuned by a parameter p ∈ [0, 1] from regular (p = 0) to random (p = 1). Two local contagion rules are considered: (i) a continuous rule in which the proportion of neighbors in each state determines the next status of an individual (node), and (ii) a discontinuous (threshold) rule that requires a minimum number of neighbors to modify the current state. In the former case, irrespective of the inspection intensity ν, the equilibrium population is always formed by tax-payers. In the mean-field approach, we obtain the characteristic convergence time as a function of ν and p. For the threshold contagion rule, we show that the response of the population to the inspection intensity ν depends on the structure of the social network, p, and on the willingness of individuals to change their state, r. Sharp transitions occur at critical values of ν that depend on p and r. We discuss these results in the context of tax evasion and fraud, where the choice of inspection strategy could be of major relevance.
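A minimal simulation sketch of the threshold-rule scenario, assuming a synchronous update and our own parameter names (n, k, p for the Watts-Strogatz graph, nu for the initially inspected fraction, r for the threshold); the authors' exact dynamics may differ:

```python
# Illustrative sketch only: threshold contagion of "tax-payer" status on a
# Watts-Strogatz small-world network after a single initial inspection.
import random
import networkx as nx

def simulate(n=1000, k=4, p=0.1, nu=0.05, r=2, steps=200, seed=0):
    """Return the final fraction of tax-payers.

    n, k, p : Watts-Strogatz parameters (nodes, neighbors, rewiring probability)
    nu      : inspection intensity = fraction of nodes inspected once at t = 0
    r       : minimum number of tax-paying neighbors needed to convert a node
    """
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n, k, p, seed=seed)
    payer = {v: False for v in g}                  # everyone starts as an evader
    for v in rng.sample(list(g), int(nu * n)):     # single initial inspection
        payer[v] = True                            # inspected evaders convert forever
    for _ in range(steps):
        new_payer = dict(payer)
        for v in g:
            if not payer[v] and sum(payer[u] for u in g[v]) >= r:
                new_payer[v] = True                # threshold (discontinuous) rule
        if new_payer == payer:                     # stop once the state is stationary
            break
        payer = new_payer
    return sum(payer.values()) / n

if __name__ == "__main__":
    for nu in (0.01, 0.05, 0.20):
        print(f"nu={nu:.2f} -> final tax-payer fraction {simulate(nu=nu):.2f}")
```

Sweeping nu for fixed p and r in a sketch like this should reproduce the kind of sharp transition in the final tax-payer fraction described in the abstract.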
Fully adaptive propagation of the quantum-classical Liouville equation
NASA Astrophysics Data System (ADS)
Horenko, Illia; Weiser, Martin; Schmidt, Burkhard; Schütte, Christof
2004-05-01
In mixed quantum-classical molecular dynamics, few but important degrees of freedom of a dynamical system are modeled quantum-mechanically while the remaining ones are treated within the classical approximation. Rothe methods established in the theory of partial differential equations are used to control both temporal and spatial discretization errors against a global tolerance criterion. The TRAIL (trapezoidal rule for adaptive integration of Liouville dynamics) scheme [I. Horenko and M. Weiser, J. Comput. Chem. 24, 1921 (2003)] has been extended to account for nonadiabatic effects in molecular dynamics described by the quantum-classical Liouville equation. In the context of particle methods, the quality of the spatial approximation of the phase-space distributions is maximized while the numerical condition of the least-squares problem for the particle parameters is minimized. The resulting dynamical scheme is based on a simultaneous propagation of moving particles (Gaussian and Dirac delta-like trajectories) in phase space, employing a fully adaptive strategy to upgrade Dirac to Gaussian particles and, vice versa, downgrade Gaussians to Dirac-type trajectories. This allows Monte-Carlo-based strategies for the sampling of densities and coherences in multidimensional problems to be combined with a deterministic treatment of nonadiabatic effects. Numerical examples demonstrate the application of the method to spin-boson systems of different dimensionality. Nonadiabatic effects occurring at conical intersections are treated in the diabatic representation. As the global tolerance is decreased, the numerical solutions obtained from the TRAIL scheme are shown to converge towards exact results.
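For context, the equation being propagated is commonly written in the partial Wigner representation over the classical phase-space variables (R, P); the form below is the generic one from the quantum-classical literature, not a result specific to this paper:

```latex
\[
  \frac{\partial \hat{\rho}_W(R,P,t)}{\partial t}
  \;=\;
  -\frac{i}{\hbar}\bigl[\hat{H}_W, \hat{\rho}_W\bigr]
  \;+\;
  \frac{1}{2}\Bigl(\bigl\{\hat{H}_W, \hat{\rho}_W\bigr\} - \bigl\{\hat{\rho}_W, \hat{H}_W\bigr\}\Bigr),
\]
% [.,.] is the commutator over the quantum subsystem and {.,.} the Poisson
% bracket in (R,P); the TRAIL particles (Dirac- and Gaussian-type trajectories)
% provide an adaptive approximation of \hat{\rho}_W evolving under this flow.
```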
Interference in the classical probabilistic model and its representation in complex Hilbert space
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei Yu.
2005-10-01
The notion of a context (a complex of physical conditions, that is to say, a specification of the measurement setup) is basic in this paper. We show that the main structures of quantum theory (interference of probabilities, Born's rule, complex probabilistic amplitudes, Hilbert state space, representation of observables by operators) are already present in latent form in the classical Kolmogorov probability model. However, this model should be considered as a calculus of contextual probabilities. In our approach it is forbidden to consider abstract, context-independent probabilities: “first context and only then probability”. We construct the representation of the general contextual probabilistic dynamics in the complex Hilbert space. Thus dynamics of the wave function (in particular, Schrödinger's dynamics) can be considered as Hilbert space projections of a realistic dynamics in a “prespace”. The basic condition for representing the prespace dynamics is the law of statistical conservation of energy (conservation of probabilities). In general the Hilbert space projection of the “prespace” dynamics can be nonlinear and even irreversible (but it is always unitary). The methods developed in this paper can be applied not only to quantum mechanics, but also to classical statistical mechanics. The main quantum-like structures (e.g., interference of probabilities) might be found in some models of classical statistical mechanics. Quantum-like probabilistic behavior can be demonstrated by biological systems; in particular, it was recently found in some psychological experiments.
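A compact way to state the "interference of probabilities" referred to here, for a pair of dichotomous observables a and b (notation ours), is as a perturbed formula of total probability:

```latex
\[
  P(a=\alpha)
  \;=\;
  \sum_{\beta} P(b=\beta)\, P(a=\alpha \mid b=\beta)
  \;+\;
  2\cos\theta_\alpha
  \sqrt{\prod_{\beta} P(b=\beta)\, P(a=\alpha \mid b=\beta)} .
\]
% When the interference coefficient satisfies |\cos\theta_\alpha| <= 1, the
% contextual probabilities can be encoded by complex amplitudes \psi(\alpha)
% with P(a=\alpha) = |\psi(\alpha)|^2, i.e. Born's rule, which is the
% Hilbert-space representation discussed in the abstract.
```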
Seleson, Pablo; Du, Qiang; Parks, Michael L.
2016-08-16
The peridynamic theory of solid mechanics is a nonlocal reformulation of the classical continuum mechanics theory. At the continuum level, it has been demonstrated that classical (local) elasticity is a special case of peridynamics. Such a connection between these theories has not been extensively explored at the discrete level. This paper investigates the consistency between nearest-neighbor discretizations of linear elastic peridynamic models and finite difference discretizations of the Navier–Cauchy equation of classical elasticity. While nearest-neighbor discretizations in peridynamics have been numerically observed to present grid-dependent crack paths or spurious microcracks, this paper focuses on a different, analytical aspect of such discretizations. We demonstrate that, even in the absence of cracks, such discretizations may be problematic unless a proper selection of weights is used. Specifically, we demonstrate that using the standard meshfree approach in peridynamics, nearest-neighbor discretizations do not reduce, in general, to discretizations of corresponding classical models. We study nodal-based quadratures for the discretization of peridynamic models, and we derive quadrature weights that result in consistency between nearest-neighbor discretizations of peridynamic models and discretized classical models. The quadrature weights that lead to such consistency are, however, model- and discretization-dependent. We motivate the choice of those quadrature weights through a quadratic approximation of displacement fields. The stability of nearest-neighbor peridynamic schemes is demonstrated through a Fourier mode analysis. Finally, an approach based on a normalization of peridynamic constitutive constants at the discrete level is explored. This approach results in the desired consistency for one-dimensional models, but does not work in higher dimensions. The results of the work presented in this paper suggest that even though nearest-neighbor discretizations should be avoided in peridynamic simulations involving cracks, such discretizations are viable, for example for verification or validation purposes, in problems characterized by smooth deformations. Furthermore, we demonstrate that better quadrature rules in peridynamics can be obtained based on the functional form of solutions.
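As a purely illustrative one-dimensional sketch (our own toy stencil, not the paper's formulation), the following checks when a single quadrature weight makes a nearest-neighbor, peridynamic-style stencil consistent with the local operator E u''(x):

```python
# Toy 1D consistency check: a weighted nearest-neighbor "bond" stencil versus
# the local operator E*u''(x). The stencil form and the weight w are our
# illustrative assumptions, not the weights derived in the paper.
import sympy as sp

x, h, E, c, w = sp.symbols('x h E c w', positive=True)
u = sp.Function('u')

# Nearest-neighbor bond sum with horizon delta = h and one quadrature weight w:
#   L_h u(x) = (c*w/h) * [ (u(x+h) - u(x)) + (u(x-h) - u(x)) ]
bond_sum = (u(x + h) - u(x)) + (u(x - h) - u(x))

# Taylor expansion in h: odd orders cancel, leaving h**2 * u''(x) + O(h**4).
bond_series = sp.series(bond_sum, h, 0, 4).removeO()
Lh_leading = sp.expand((c * w / h * bond_series).doit())
print(Lh_leading)            # -> c*h*w * u''(x) (up to term ordering)

# Matching E*u''(x) therefore requires c*w*h = E, so the quadrature weight
# must be chosen in a model- and grid-dependent way:
w_consistent = sp.solve(sp.Eq(c * w * h, E), w)[0]
print(w_consistent)          # -> E/(c*h)
```

The weight that restores consistency here depends on both the micromodulus-like constant c and the spacing h, echoing the abstract's point that consistent quadrature weights are model- and discretization-dependent.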
Bennett, Kochise; Kowalewski, Markus; Mukamel, Shaul
2016-02-09
We present a hierarchy of Fermi golden rules (FGRs) that incorporate strongly coupled electronic/nuclear dynamics in time-resolved photoelectron spectroscopy (TRPES) signals at different levels of theory. Expansion in the joint electronic and nuclear eigenbasis yields the numerically most challenging exact FGR (eFGR). The quasistatic Fermi golden rule (qsFGR) neglects nuclear motion during the photoionization process but takes into account electronic coherences and populations initially present in the pumped matter, as well as those generated internally by coupling between electronic surfaces. The standard semiclassical Fermi golden rule (scFGR) neglects the electronic coherences and the nuclear kinetic energy during the ionizing pulse altogether, yielding the classical Condon approximation. The coherence contributions depend on the phase profile of the ionizing field, allowing coherent control of TRPES signals. The photoelectron spectrum of model systems is simulated at these three levels of theory. The eFGR and the qsFGR show temporal oscillations originating from the electronic or vibrational coherences generated as the nuclear wave packet traverses a conical intersection. These oscillations, which are missed by the scFGR, directly reveal the time-evolving splitting between electronic states of the neutral molecule in the curve-crossing regime.
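As a reference point for the hierarchy described above, the textbook Fermi golden rule for a weak perturbation coupling an initial state |i> to a manifold of final states |f> reads (a standard result, not a formula specific to this paper):

```latex
\[
  \Gamma_{i}
  \;=\;
  \frac{2\pi}{\hbar}
  \sum_{f}
  \bigl|\langle f \,|\, \hat{V} \,|\, i \rangle\bigr|^{2}\,
  \delta\!\left(E_f - E_i\right).
\]
% The eFGR, qsFGR and scFGR of the abstract can be read as successively more
% approximate treatments of the matrix elements and energy conservation that
% enter such a rate expression for the photoionization step.
```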
The island rule in large mammals: paleontology meets ecology.
Raia, Pasquale; Meiri, Shai
2006-08-01
The island rule is the phenomenon of the miniaturization of large animals and the gigantism of small animals on islands, with mammals providing the classic case studies. Several explanations for this pattern have been suggested, and departures from the predictions of this rule are common among mammals of differing body size, trophic habits, and phylogenetic affinities. Here we offer a new explanation for the evolution of body size of large insular mammals, using evidence from both living and fossil island faunal assemblages. We demonstrate that the extent of dwarfism in ungulates depends on the existence of competitors and, to a lesser extent, on the presence of predators. In contrast, competition and predation have little or no effect on insular carnivore body size, which is influenced by the nature of the resource base. We suggest dwarfism in large herbivores is an outcome of the fitness increase resulting from the acceleration of reproduction in low-mortality environments. Carnivore size is dependent on the abundance and size of their prey. Size evolution of large mammals in different trophic levels has different underlying mechanisms, resulting in different patterns. Absolute body size may be only an indirect predictor of size evolution, with ecological interactions playing a major role.
Semiclassical approximations in the coherent-state representation
NASA Technical Reports Server (NTRS)
Kurchan, J.; Leboeuf, P.; Saraceno, M.
1989-01-01
The semiclassical limit of the stationary Schroedinger equation in the coherent-state representation is analyzed simultaneously for the groups W1, SU(2), and SU(1,1). A simple expression for the first two orders for the wave function and the associated semiclassical quantization rule is obtained if a definite choice for the classical Hamiltonian and expansion parameter is made. The behavior of the modulus of the wave function, which is a distribution function in a curved phase space, is studied for the three groups. The results are applied to the quantum triaxial rotor.
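For orientation only (a standard result for the flat-phase-space W1 case; we do not reproduce the paper's group-specific rules), the familiar one-degree-of-freedom condition to which such semiclassical quantization rules reduce is:

```latex
\[
  \oint p \, dq \;=\; 2\pi\hbar \left( n + \tfrac{1}{2} \right),
  \qquad n = 0, 1, 2, \dots
\]
% Bohr-Sommerfeld/EBK condition, written with the Maslov correction of a simple
% librational orbit; the coherent-state analysis generalizes this type of rule
% to the curved phase spaces associated with SU(2) and SU(1,1).
```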
Ferguson, J S; Bosworth, J; Min, T; Mercieca, J; Holden, C A
2014-03-01
We report the case of a male patient presenting with eosinophilia, pulmonary oedema and eosinophilic fasciitis (EF). He had the classic clinical appearance and magnetic resonance imaging findings of EF. Cytogenetic analysis of the bone marrow revealed a previously undescribed pericentric inversion of chromosome 5. Overall, the presentation was consistent with a diagnosis of chronic eosinophilic leukaemia, not otherwise specified (CEL-NOS). Dermatologists should consult a haematologist in cases of EF, in order to rule out possible haematological malignancies. © 2013 British Association of Dermatologists.
A Formalized Design Process for Bacterial Consortia That Perform Logic Computing
Sun, Rui; Xi, Jingyi; Wen, Dingqiao; Feng, Jingchen; Chen, Yiwei; Qin, Xiao; Ma, Yanrong; Luo, Wenhan; Deng, Linna; Lin, Hanchi; Yu, Ruofan; Ouyang, Qi
2013-01-01
The concept of microbial consortia is of great attractiveness in synthetic biology. Despite all its benefits, however, problems remain for large-scale multicellular gene circuits, for example, how to reliably design and distribute the circuits in microbial consortia given a limited number of well-behaved genetic modules and wiring quorum-sensing molecules. To address this problem, here we propose a formalized design process: (i) determine the basic logic units (AND, OR and NOT gates) based on mathematical and biological considerations; (ii) establish rules to search for and distribute the simplest logic design; (iii) assemble the assigned basic logic units in each logic-operating cell; and (iv) fine-tune the circuiting interface between logic operators. We analyzed gene circuits in silico with inputs ranging from two to four, comparing our method with pre-existing ones. The results showed that this formalized design process is more feasible with respect to the number of cells required. Furthermore, as a proof of principle, an Escherichia coli consortium that performs the XOR function, a typical complex computing operation, was designed. The construction and characterization of logic operators is independent of “wiring” and provides predictive information for fine-tuning. This formalized design process provides guidance for the design of microbial consortia that perform distributed biological computation. PMID:23468999
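To make the distributed-logic idea concrete, here is a toy in-silico sketch; the gate assignment, wire names, and the particular XOR decomposition are our illustrative choices, not the genetic constructs reported for the E. coli consortium:

```python
# Illustrative sketch only: distributing a logic function over single-gate
# "cells" that communicate through named diffusible signals (here, dict keys).
from itertools import product

# Basic logic units available to each cell.
GATES = {
    "NOT": lambda x: not x,
    "AND": lambda x, y: x and y,
    "OR":  lambda x, y: x or y,
}

# XOR decomposed into basic units: XOR(a, b) = (a AND NOT b) OR (NOT a AND b).
# Each tuple is (cell name, gate, input wires, output wire), listed in wiring order.
CONSORTIUM = [
    ("cell1", "NOT", ["a"], "na"),
    ("cell2", "NOT", ["b"], "nb"),
    ("cell3", "AND", ["a", "nb"], "w1"),
    ("cell4", "AND", ["na", "b"], "w2"),
    ("cell5", "OR",  ["w1", "w2"], "out"),
]

def run(inputs):
    """Propagate input signals through the consortium and return the output wire."""
    signals = dict(inputs)
    for _cell, gate, in_wires, out_wire in CONSORTIUM:
        signals[out_wire] = GATES[gate](*(signals[w] for w in in_wires))
    return signals["out"]

for a, b in product([False, True], repeat=2):
    print(a, b, "->", run({"a": a, "b": b}))   # reproduces the XOR truth table
```

The five-cell layout mirrors the idea that each cell hosts one well-behaved logic unit and that the "wiring" between cells is defined independently of the units themselves.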