Bounding of Performance Measures for Threshold-Based Queuing Systems: Theory and Application
Lui, John C.S.
Considers a threshold-based queuing system with hysteresis in which the number of active servers is governed by forward thresholds and reverse thresholds R1, …, RK−1.
Adversarial Queuing Theory ALLAN BORODIN
Kleinberg, Jon
Allan Borodin (University of Toronto, Toronto, Ontario, Canada) and Jon Kleinberg consider packet routing when packets are injected continuously into a network. They develop an adversarial theory of queuing aimed at addressing some of the restrictions inherent in probabilistic analysis and classical queuing theory.
An individual based model of fish recruitment using simple queuing theory
Baxter, Paul D.
Paul D. Baxter and co-authors present an individual-based model of fish recruitment using simple queuing theory: zooplankton arrive at random, a larva grows while feeding and at a rate of zero otherwise, and the growth process forms a queue. Queuing theory is a well developed branch of applied probability.
The Performance of a Precedence-Based Queuing Discipline
Tsitsiklis, John
Categories: [Performance of Systems]: design studies; D.4.8 [Operating Systems]: Performance, queuing theory; H.2.2 [Database]. Keywords: database concurrency control, queuing theory, random dag, static locking, throughput.
Queuing Theory and Reference Transactions.
ERIC Educational Resources Information Center
Terbille, Charles
1995-01-01
Examines the implications of applying the queuing theory to three different reference situations: (1) random patron arrivals; (2) random durations of transactions; and (3) use of two librarians. Tables and figures represent results from spreadsheet calculations of queues for each reference situation. (JMV)
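The two-librarian comparison in this entry is a textbook M/M/c question. As an illustration only (the rates below are invented, not figures from the study), the following sketch contrasts mean queue waits for one librarian (M/M/1) versus two (M/M/2, via the Erlang C formula):

```python
# Hypothetical reference-desk rates: Poisson patron arrivals, exponential
# transaction times. Compare the mean wait in queue for 1 vs. 2 librarians.

def mm1_wait(lam, mu):
    """Mean time in queue (excluding service) for an M/M/1 queue."""
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    return rho / (mu - lam)

def mm2_wait(lam, mu):
    """Mean time in queue for an M/M/2 queue (Erlang C with c = 2)."""
    a = lam / mu              # offered load in Erlangs
    rho = a / 2
    assert rho < 1, "queue is unstable"
    # Erlang C: probability an arriving patron must wait
    p0 = 1 / (1 + a + a**2 / (2 * (1 - rho)))
    pw = (a**2 / (2 * (1 - rho))) * p0
    return pw / (2 * mu - lam)

lam, mu = 10.0, 12.0          # patrons/hour; completions/hour per librarian
w1 = mm1_wait(lam, mu)
w2 = mm2_wait(lam, mu)
print(f"one librarian: {w1*60:.1f} min in queue; two librarians: {w2*60:.1f} min")
```

With these assumed rates, a second librarian cuts the mean wait by more than an order of magnitude, which is the qualitative point the spreadsheet comparison makes.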
Using Queuing Theory to Model Streaming Applications
Chamberlain, Roger
Rahav Dor, Joseph M. Lancaster, Mark A. Franklin, and co-authors use queuing theory to model streaming applications by constructing a Jackson queuing network model of a streaming implementation of BLAST, modeling Mercury BLAST [2], [3], an accelerated BLAST that combines general-purpose processors and FPGAs.
Applying Queuing and Probability Theory to Predict Organizational Behaviors
Massachusetts at Amherst, University of
Bryan Horling and colleagues apply queuing and probability theory to predict organizational behaviors: a model capturing these and other characteristics is created using techniques from queuing and probability theory, and the system has been modeled in the ODML framework.
An application of queuing theory to waterfowl migration
Sojda, Richard S.; Cornely, John E.; Fredrickson, Leigh H.
2002-01-01
There has always been great interest in the migration of waterfowl and other birds. We have applied queuing theory to modelling waterfowl migration, beginning with a prototype system for the Rocky Mountain Population of trumpeter swans (Cygnus buccinator) in Western North America. The queuing model can be classified as a D/BB/28 system, and we describe the input sources, service mechanism, and network configuration of queues and servers. The intrinsic nature of queuing theory is to represent the spatial and temporal characteristics of entities and how they move, are placed in queues, and are serviced. The service mechanism in our system is an algorithm representing how swans move through the flyway based on seasonal life cycle events. The system uses an observed number of swans at each of 27 areas for a breeding season as input and simulates their distribution through four seasonal steps. The result is a simulated distribution of birds for the subsequent year's breeding season. The model was built as a multiagent system with one agent handling movement algorithms, with one facilitating user interface, and with one to seven agents representing specific geographic areas for which swan management interventions can be implemented. The many parallels in queuing model servers and service mechanisms with waterfowl management areas and annual life cycle events made the transfer of the theory to practical application straightforward.
Modeling Toll Plaza Behavior Using Queuing Theory (Team #32)
Morrow, James A.
February 7, 2005. Abstract: models toll collection and merging, applying queuing theory to each stage and modeling each stage as a queuing system.
CDA6530: Performance Models of Computers and Networks Chapter 4: Elementary Queuing Theory
Zou, Cliff C.
Defines a queuing system as a buffer (waiting room) together with a service facility (one or more servers), and introduces Internet queuing delay.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models that can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models omit fine detail about network traffic rates, traffic patterns, and the hardware used to implement the networks, they make it easy to assess the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A, and appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels; intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
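In their simplest form, spreadsheet formulas of the kind this report describes reduce to the M/M/1 response-time expression, with the service rate given by channel capacity over mean message length. A minimal sketch with invented numbers (the capacity, message size, and arrival rate below are assumptions, not figures from the report):

```python
# M/M/1 view of one network channel: service rate = capacity / message length.

def mm1_response_time(arrival_rate, capacity_bps, mean_msg_bits):
    """Mean time in system (queueing + transmission) for an M/M/1 channel."""
    mu = capacity_bps / mean_msg_bits    # messages/second the channel can serve
    assert arrival_rate < mu, "offered load exceeds channel capacity"
    return 1.0 / (mu - arrival_rate)

# Assumed example: 10 Mb/s link, 8000-bit messages, 800 messages/s offered
t = mm1_response_time(800.0, 10e6, 8000.0)
print(f"mean response time: {t*1000:.2f} ms")
```

The same formula, entered per channel, is enough to show how response time blows up as the offered load approaches capacity.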
Queuing Theory: An Alternative Approach to Educational Research.
ERIC Educational Resources Information Center
Huyvaert, Sarah H.
Queuing theory is examined in this paper in order to determine whether the theory could be applied in educational settings. It is defined as a form of operations research that uses mathematical formulas and/or computer simulation to study waiting and congestion in a system and, through the study of these visible phenomena, to discover malfunctions within…
A queuing-theory model of word frequency distributions Robert Munro
Robert Munro, University of Sydney (rmunro@it.usyd.edu.au). Abstract: this paper describes a novel model for term frequency distributions that is derived from queuing theory; observations there and in (Church …) provoked the exploration of a queuing-theory model of word frequency distributions.
Application of queuing theory in production-inventory optimization
NASA Astrophysics Data System (ADS)
Rashid, Reza; Hoseini, Seyed Farzad; Gholamian, M. R.; Feizabadi, Mohammad
2015-07-01
This paper presents a mathematical model for an inventory control system in which customers' demands and suppliers' service time are considered as stochastic parameters. The proposed problem is solved through queuing theory for a single item. In this case, transitional probabilities are calculated in steady state. Afterward, the model is extended to the case of multi-item inventory systems. Then, to deal with the complexity of this problem, a new heuristic algorithm is developed. Finally, the presented bi-level inventory-queuing model is implemented as a case study in Electroestil Company.
Application of queuing theory in inventory systems with substitution flexibility
NASA Astrophysics Data System (ADS)
Seyedhoseini, S. M.; Rashid, Reza; Kamalpour, Iman; Zangeneh, Erfan
2015-01-01
Considering the competition in today's business environment, tactical planning of a supply chain becomes more complex than before. In many multi-product inventory systems, substitution flexibility can improve profits. This paper aims to prepare a comprehensive substitution inventory model, considering an inventory system with two substitute products with negligible lead time, and examines the effects of simultaneous ordering. The demands of customers for both products are regarded as stochastic parameters, and queuing theory is used to construct a mathematical model. The model has been coded in C++ and analyzed on a real example; the results indicate the efficiency of the proposed model.
On the application of queuing theory for analysis of twin data.
Kuravsky, L S; Malykh, S B
2000-06-01
A mathematical model based on queuing theory is used to study the dynamics of environmental influence on twin pairs. The model takes into consideration genetic factors and effects of nonshared environment. Histograms are exploited as base analysed characteristics, with the method of minimum chi-square yielding estimated characteristics. The proposed technique was applied to analysis of longitudinal data for MZ and DZ twins. It was shown that the same environment impact may yield different contributions to final variances of the IQ parameters under investigation. Magnitudes of these contributions depend on the genetic factor represented by distributions of an analysed parameter at the point of birth. PMID:10918622
Queuing Network Modeling of Transcription Typing
Wu, Changxu (Sean)
Based on the queuing network theory of human performance [Liu 1996; 1997] and current discoveries in cognitive and neural science, this article by Changxu Wu and Yili Liu (University of Michigan) extends and applies the Queuing Network-Model Human Processor (QN-MHP) to transcription typing.
Liu, Yili
Queuing Network Modeling of a Real-Time Psychophysiological Index of Mental Workload (P300), September 2008: a queuing network approach based on the queuing network theory of human performance. Index terms: P300, queuing network.
Vahle, M.O.
1982-03-01
Queuing theory is applied to the problem of assigning computer ports within a terminal switching network to maximize the likelihood of instant connect. A brief background of the network is included to focus on the statement of the problem.
Queuing theory to guide the implementation of a heart failure inpatient registry program.
Zai, Adrian H; Farr, Kit M; Grant, Richard W; Mort, Elizabeth; Ferris, Timothy G; Chueh, Henry C
2009-01-01
OBJECTIVE The authors previously implemented an electronic heart failure registry at a large academic hospital to identify heart failure patients and to connect these patients with appropriate discharge services. Despite significant improvements in patient identification and connection rates, time to connection remained high, with an average delay of 3.2 days from the time patients were admitted to the time connections were made. Our objective for this current study was to determine the most effective solution to minimize time to connection. DESIGN We used a queuing theory model to simulate 3 different potential solutions to decrease the delay from patient identification to connection with discharge services. MEASUREMENTS The measures included average rate at which patients were being connected to the post discharge heart failure services program, average number of patients in line, and average patient waiting time. RESULTS Using queuing theory model simulations, we were able to estimate for our current system the minimum rate at which patients need to be connected (262 patients/mo), the ideal patient arrival rate (174 patients/mo) and the maximal patient arrival rate that could be achieved by adding 1 extra nurse (348 patients/mo). CONCLUSIONS Our modeling approach was instrumental in helping us characterize key process parameters and estimate the impact of adding staff on the time between identifying patients with heart failure and connecting them with appropriate discharge services. PMID:19390108
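The staffing estimates in this abstract are the sort of quantity an M/M/c model yields. As a hedged sketch (not the authors' actual model; the per-nurse service rate below is an assumption), one can ask how the mean wait before connection responds to adding one nurse:

```python
# M/M/c waiting time via the Erlang C formula; rates in patients/day.
import math

def erlang_c(c, a):
    """Probability an arrival must wait in an M/M/c queue, offered load a = lam/mu."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - a / c))
    return last / (s + last)

def mmc_wait(lam, mu, c):
    """Mean time in queue for an M/M/c queue."""
    a = lam / mu
    assert a < c, "system is unstable"
    return erlang_c(c, a) / (c * mu - lam)

lam = 174.0 / 30   # patients/day (the abstract's ideal arrival rate)
mu = 262.0 / 30    # connections/day one nurse can make (assumed, from the minimum rate)
print(f"1 nurse:  mean wait {mmc_wait(lam, mu, 1):.3f} days")
print(f"2 nurses: mean wait {mmc_wait(lam, mu, 2):.3f} days")
```

Under these assumed rates, the second nurse shrinks the mean wait by roughly an order of magnitude, consistent with the abstract's finding that one extra nurse nearly doubles the sustainable arrival rate.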
Spreadsheet Analysis Of Queuing In A Computer Network
NASA Technical Reports Server (NTRS)
Galant, David C.
1992-01-01
Method of analyzing responses of computer network based on simple queuing-theory mathematical models via spreadsheet program. Effects of variations in traffic, capacities of channels, and message protocols assessed.
Wedagedera, Janak R.
T-Cell Activation: A Queuing Theory Analysis at Low Agonist Density. J. R. Wedagedera and N. J. Burroughs, University of Warwick, Coventry, United Kingdom. Abstract: the authors analyze a simple linear triggering model of the T-cell receptor; a robustness analysis shows that these properties are degraded when the queue parameters are subject to variation, for example under stochasticity in the ligand number in the cell-cell interface.
Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A
2004-10-01
In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks. PMID:15751471
KalmanQueue: An Adaptive Approach to Virtual Queuing
Morrow, James A.
February 10, 2004. Abstract: QuickPass is a virtual queuing system that allows some theme park customers to significantly cut down their waiting time; KalmanQueue is an adaptive approach to virtual queuing. Sections include "Properties of a Good Model" and "Basic Queuing Theory: Is It Useful?".
Effects of Diversity and Procrastination in Priority Queuing Theory: the Different Power Law Regimes
Saichev, A
2009-01-01
Empirical analyses show that, after the update of a browser, the publication of the vulnerability of a software, or the discovery of a cyber worm, the fraction of computers still using the older version, or being not yet patched, or exhibiting worm activity decays as power laws $\sim 1/t^{\alpha}$ with $0 < \alpha \leq 1$ over time scales of years. We present a simple model for this persistence phenomenon framed within the standard priority queuing theory, of a target task which has the lowest priority compared with all other tasks that flow on the computer of an individual. We identify a "time deficit" control parameter $\beta$ and a bifurcation to a regime where there is a non-zero probability for the target task to never be completed. The distribution of waiting time ${\cal T}$ till the completion of the target task has the power law tail $\sim 1/t^{1/2}$, resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population o…
T-cell activation: A queuing theory analysis at low agonist density.
Wedagedera, J R; Burroughs, N J
2006-09-01
We analyze a simple linear triggering model of the T-cell receptor (TCR) within the framework of queuing theory, in which TCRs enter the queue upon full activation and exit by downregulation. We fit our model to four experimentally characterized threshold activation criteria and analyze their specificity and sensitivity: the initial calcium spike, cytotoxicity, immunological synapse formation, and cytokine secretion. Specificity characteristics improve as the time window for detection increases, saturating for time periods on the timescale of downregulation; thus, the calcium spike (30 s) has low specificity but a sensitivity to single-peptide MHC ligands, while the cytokine threshold (1 h) can distinguish ligands with a 30% variation in the complex lifetime. However, a robustness analysis shows that these properties are degraded when the queue parameters are subject to variation-for example, under stochasticity in the ligand number in the cell-cell interface and population variation in the cellular threshold. A time integration of the queue over a period of hours is shown to be able to control parameter noise efficiently for realistic parameter values when integrated over sufficiently long time periods (hours), the discrimination characteristics being determined by the TCR signal cascade kinetics (a kinetic proofreading scheme). Therefore, through a combination of thresholds and signal integration, a T cell can be responsive to low ligand density and specific to agonist quality. We suggest that multiple threshold mechanisms are employed to establish the conditions for efficient signal integration, i.e., coordinate the formation of a stable contact interface. PMID:16766611
Effects of diversity and procrastination in priority queuing theory: The different power law regimes
NASA Astrophysics Data System (ADS)
Saichev, A.; Sornette, D.
2010-01-01
Empirical analyses show that after the update of a browser, or the publication of the vulnerability of a software, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, or not yet patched, or exhibiting worm activity decays as a power law ~1/t^α with 0 < α ≤ 1 over time scales of years. We present a simple model for this persistence phenomenon, framed within the standard priority queuing theory, of a target task which has the lowest priority compared to all other tasks that flow on the computer of an individual. We identify a "time deficit" control parameter β and a bifurcation to a regime where there is a nonzero probability for the target task to never be completed. The distribution of waiting time T until the completion of the target task has the power law tail ~1/t^{1/2}, resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population of individuals, the power law tail is changed into 1/t^α, with α ∈ (0.5, ∞), including the well-known case 1/t. We also study the effect of "procrastination," defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.
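The ~1/t^{1/2} tail cited in this abstract is the standard first-passage property of a driftless Wiener process. As a reminder (not part of the record), for a Wiener process started at x_0 > 0, the first-passage time T to the origin has density and tail

```latex
f(t) \;=\; \frac{x_0}{\sqrt{2\pi t^{3}}}\, e^{-x_0^{2}/(2t)},
\qquad
P(T > t) \;=\; \operatorname{erf}\!\left(\frac{x_0}{\sqrt{2t}}\right)
\;\sim\; x_0 \sqrt{\frac{2}{\pi t}} \quad (t \to \infty),
```

so the survival function decays as t^{-1/2}, matching the quoted exponent.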
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.
2015-03-10
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving, there is no need to operate many servers under light loads, and they are switched off; on the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server threshold-based infinite-capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
Fast Queuing Policies for Multimedia Applications Duong Nguyen-Huu
Nguyen, Thinh
Since the invention of packet-switched networks in the early 1960s, queuing theory has been a critical part of performance analysis; many current wireless transmission protocols can be analyzed in the language of queuing theory [1]. Both theory and simulation results are presented to verify the framework. Index terms: QoS, queuing.
Improving queuing service at McDonald's
NASA Astrophysics Data System (ADS)
Koh, Hock Lye; Teh, Su Yean; Wong, Chin Keat; Lim, Hooi Kie; Migin, Melissa W.
2014-07-01
Fast food restaurants are popular among price-sensitive youths and working adults who value the conducive environment and convenient services. McDonald's chains of restaurants promote their sales during lunch hours by offering package meals which are perceived to be inexpensive. These promotional lunch meals attract good response, resulting in occasional long queues and inconvenient waiting times. A study is conducted to monitor the distribution of waiting time, queue length, customer arrival and departure patterns at a McDonald's restaurant located in Kuala Lumpur. A customer survey is conducted to gauge customers' satisfaction regarding waiting time and queue length. An Android app named Que is developed to perform onsite queuing analysis and report key performance indices. The queuing analysis in Que is based upon the Poisson distribution. In this paper, Que is utilized to perform queuing analysis at this McDonald's restaurant with the aim of improving customer service, with particular reference to reducing queuing time and shortening queue length. Some results will be presented.
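The Poisson-based analysis an app like Que performs can be mimicked with a small simulation. A hedged sketch (all rates invented; this is not the Que implementation): lunch-hour arrivals as a Poisson process at a single counter, with waits tracked by the Lindley recursion:

```python
# Simulate an M/M/1 counter: exponential interarrival and service times,
# waiting times via the Lindley recursion W' = max(0, W + S - A).
import random

def simulate_waits(lam, mu, n, seed=1):
    rng = random.Random(seed)      # fixed seed for reproducibility
    wait, waits = 0.0, []
    for _ in range(n):
        inter = rng.expovariate(lam)      # gap to the next arrival
        service = rng.expovariate(mu)     # service time of the current customer
        wait = max(0.0, wait + service - inter)
        waits.append(wait)
    return waits

waits = simulate_waits(lam=1.5, mu=2.0, n=50_000)   # customers/min (assumed)
avg = sum(waits) / len(waits)
theory = 1.5 / (2.0 * (2.0 - 1.5))                  # M/M/1 mean wait in queue
print(f"simulated mean wait: {avg:.2f} min (M/M/1 theory: {theory:.2f} min)")
```

Collecting the same interarrival and service samples from observed timestamps, instead of a random generator, turns this into the empirical analysis the study describes.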
Dynamic Scaling and Growth Behavior of Queuing Network Normalization Constants
Lam, Simon S.
Simon S. Lam studies the dynamic scaling and growth behavior of normalization constants of closed product-form queuing networks. General terms: performance, theory. Additional key words and phrases: queuing networks, product form.
Adversarial Queuing on the Multiple Access Channel Bogdan S. Chlebus
Chlebus, Bogdan
Following work in adversarial queuing theory for store-and-forward packet networks, the authors study stability of deterministic protocols on the multiple access channel, where packets are injected continuously; quality of service is considered in the framework of adversarial queuing.
On the Analysis of Exponential Queuing Systems with Randomly Changing Arrival
Kleinrock, Leonard
Necessary and sufficient conditions are derived for the stability (ergodicity) of the queuing process. Such queuing systems arise in the modelling and performance evaluation of broadband integrated telecommunication networks.
Takahashi, Taiki
2006-01-01
Intertemporal and probabilistic decision-making has been studied in psychiatry, ecology, and neuroeconomics. Because drug addicts and psychopaths often make risky decisions (e.g., drug misuse and aggression), investigations into types of impulsivity in intertemporal and probabilistic choices (delay and probability discounting) are important for psychiatric treatments. Studies in behavioral ecology proposed that delay and probability discounting are mediated by the same psychological process, because a decrease in probability of winning corresponds to an increase in delay until winning. According to this view, odds-against winning (=1/p-1) in probabilistic choice corresponds to delay in intertemporal choice. This hypothesis predicts that preference for gambling (a low degree of probability discounting) may be associated with patience, rather than impulsivity or impatience, in intertemporal choice (a low degree of delay discounting). However, recent empirical evidence in psychiatric research employing pathological gamblers indicates that pathological gamblers are impulsive in intertemporal choice (high degrees of delay discounting). Nevertheless, a hyperbolic discounting function (usually adopted to explain intertemporal choice) with odds-against (instead of delay) explains experimental data in probabilistic choice dramatically well, so an alternative explanation is required for the hypothetical equivalence of odds-against to delay. We propose that queuing theory (often adopted for analyzing computer network traffic) under a competitive social foraging condition may explain the equivalence. Our hypothesis may help in understanding the impulsivity of psychiatric patients in social behavior (e.g., aggression and antisocial behavior) in addition to non-social impulsivity in reward-seeking (e.g., substance misuse). PMID:16574335
NEW STABILITY RESULTS FOR ADVERSARIAL QUEUING ZVI LOTKER, BOAZ PATT-SHAMIR, AND ADI ROSEN
Rosén, Adi
The authors consider the model of "adversarial queuing theory" for packet networks introduced by Borodin et al. Key words: adversarial queuing theory, network protocols, stability, lower bounds.
Analytical approach and verification of a DiffServ-based priority service
The approach is based on a series of well-known results of queuing theory. Key to the analysis made here is the validity of the M/D/1 queuing model in the analysis of queues serving aggregates, together with a mechanism based on the operation of TCP aiming at preserving queue occupancy (and thus queuing delays).
He, Xinhua
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
Queuing Theoretic and Information Theoretic Capacity of Energy Harvesting Sensor Nodes
Sharma, Vinod
Vinod Sharma and co-authors combine information theory and queuing theory to provide a unified view of the capacity of an energy harvesting sensor node; the information theoretic and queuing theoretic approaches are combined to obtain the capacity of such a system.
Miller, Nicholas; Zavadil, Robert; Ellis, Abraham; Muljadi, Eduard; Camm, Ernst; Kirby, Brendan J
2007-01-01
The knowledge base of the electric power system engineering community continues to grow with installed capacity of wind generation in North America. While this process has certainly occurred at other times in the industry with other technologies, the relatively explosive growth, the compressed time frames from project conception to commissioning, and the unconventional characteristics of wind generation make this period in the industry somewhat unique. Large wind generation facilities are necessarily evolving to look more and more like conventional generating plants in terms of their ability to interact with the transmission network in a way that does not compromise performance or system reliability. Such an evolution has only been possible through the cumulative contributions of an ever-growing number of power system engineers who have delved into the unique technologies and technical challenges presented by wind generation. The industry is still only part of the way up the learning curve, however. Numerous technical challenges remain, and as has been found, each new wind generation facility has the potential to generate some new questions. With the IEEE PES expanding its presence and activities in this increasingly significant commercial arena, the prospects for staying "ahead of the curve" are brightened.
A discrete-time Markov modulated queuing system with batched arrivals
Clegg, Richard G.
Richard G. Clegg presents a discrete-time queuing system with applications to telecommunications traffic; the arrival process is Markov modulated with batched arrivals. Section 2 relates the model to existing work in queuing theory, and in section 3 the model is solved to obtain equations for the expected queue length.
Self-Organizing Relays in LTE networks: Queuing analysis and algorithms
Richard Combes and Zwi Altman study self-organizing relays in LTE networks through queuing analysis and algorithms; load-sharing mechanisms are investigated, and in the static case a queuing model is provided and convergence to an optimum is proven. Index terms: relay, queuing theory, stability, OFDMA, load balancing, self-organization.
Optimizing Automated Call Routing by Integrating Spoken Dialog Models with Queuing Models
Horvitz, Eric
Tim Paek and Eric Horvitz optimize automated call routing by integrating spoken dialog models with queuing models, drawing on modeling techniques from decision analysis and queuing theory to determine when callers should be routed, and predicting when a call is likely to fail using spoken dialog features together with queuing models of call center volume.
Human Factors of Queuing: A Library Circulation Model.
ERIC Educational Resources Information Center
Mansfield, Jerry W.
1981-01-01
Classical queuing theories and their accompanying service facilities totally disregard the human factors in the name of efficiency. As library managers we need to be more responsive to human needs in the design of service points and make every effort to minimize queuing and queue frustration. Five references are listed. (Author/RAA)
Mayhew, L; Smith, D
2008-03-01
This paper uses a queuing model to evaluate completion times in Accident and Emergency (A&E) departments in the light of the Government target of completing and discharging 98% of patients inside 4 h. It illustrates how flows through an A&E can be accurately represented as a queuing process, how outputs can be used to visualise and interpret the 4-h Government target in a simple way, and how the model can be used to assess the practical achievability of A&E targets in the future. The paper finds that A&E targets have resulted in significant improvements in completion times and thus deal with a major source of complaint by users of the National Health Service in the U.K. It suggests that whilst some of this improvement is attributable to better management, some is also due to the way some patients in A&E are designated and therefore counted through the system. It finds, for example, that the current target would not have been possible without some form of patient re-designation or re-labelling taking place. Further, it finds that the current target is so demanding that the integrity of reported performance is open to question. Related incentives and demand management issues resulting from the target are also briefly discussed. PMID:18390164
A queuing model for road traffic simulation
Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.
2015-03-10
We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme.
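The M/G/c/c state-dependent idea in this abstract can be illustrated with a generic birth-death sketch (not the authors' formulation; the linear slowdown law below is an assumption): the service capacity of a road cell falls as it fills, and the stationary distribution yields a blocking probability for arriving vehicles.

```python
# Birth-death queue on {0..c}: arrivals at rate lam, total discharge rate
# falling with density (vehicles slow down as the cell gets crowded).

def stationary_dist(lam, mu1, c):
    """Detailed balance: pi[n] = pi[n-1] * lam / rate(n), then normalize."""
    def rate(n):                          # total discharge rate with n vehicles
        speed_factor = 1 - (n - 1) / c    # assumed linear slowdown
        return n * mu1 * max(speed_factor, 1e-9)
    pi = [1.0]
    for n in range(1, c + 1):
        pi.append(pi[-1] * lam / rate(n))
    z = sum(pi)
    return [p / z for p in pi]

pi = stationary_dist(lam=0.5, mu1=1.0, c=20)   # rates invented for illustration
blocking = pi[-1]                              # probability the cell is full
print(f"blocking probability: {blocking:.4g}")
```

The blocking probability plays the role of refused downstream demand; chaining such cells, with each cell's output feeding the next, mirrors the section-concatenation idea in the abstract.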
Source/channel coding, queuing theory, router design, network architectures (IntServ, DiffServ, MPLS), QoS-aware communications for IP networks; buffer management, scheduling policies, fairness, and queuing principles such as RSVP (ABET Outcomes: a, c, k, l, m, n). 3. Explain queuing concepts, buffer management, and scheduling.
Yousefi'zadeh, Homayoun
Neural Network Estimation of Packet Arrival Rate in Self-Similar Queuing Systems. Estimating the average latency of queuing systems is one of the most challenging tasks; Alkhatib et al. [1] used chaos theory to characterize traffic patterns in an analytical study of queuing systems.
Wu, Changxu (Sean)
Modeling Psychological Refractory Period (PRP) and Practice Effect on PRP with Queuing Networks. This article describes a queuing network model of PRP that integrates queuing network theory (Liu, 1996, 1997) and reinforcement learning to model information processing in dual-task situations.
NASA Astrophysics Data System (ADS)
Santoshkumar; Udaykumar, R. Y.
2015-04-01
Electric vehicles (EVs) can be connected to the grid for power transaction. Vehicle-to-grid (V2G) operation supports the grid requirements and helps in maintaining the load demands. The grid control center (GCC), the aggregator and the EV are the three key entities in V2G communication. The GCC sends information about power requirements to the aggregator, which forwards it to the EVs. Based on this information, interested EV owners participate in power transaction with the grid. The aggregator facilitates the EVs by providing parking and charging slots. This paper proposes a queuing model for EVs connected to the grid and the development of a wireless infrastructure for EV to Smart Meter communication. The queuing model is developed and simulated. The path loss models for WiMAX are analyzed and compared. Also, the physical layer of the WiMAX protocol is modeled and simulated for EV to Smart Meter communication in V2G.
DOI: 10.1007/s00224-006-1251-9 Theory Comput. Systems 39, 875-901 (2006)
Tirthapura, Srikanta
2006-01-01
ETH Zurich, CH-8092 Zürich, Switzerland (wattenhofer@tik.ee.ethz.ch). Abstract: We present distributed queuing based on path reversal on a pre-selected spanning tree of the network.
Queuing Models with Two Types of Service: Applications for Dependability Planning of Complex Systems
Sztrik, János
These models represent open and closed queuing systems for two maintenance operations (replacements), drawing on reliability theory and survivability theory [1-3]. This problem plays a key role in the dependability planning of complex systems.
Oscillations and Buffer Overflows in Video Streaming under Non-Negligible Queuing Delay
Loguinov, Dmitri
The approach deviates from the ideal from the control-theoretic point of view and leads to amplified oscillations when queuing delay is non-negligible. Keywords: buffer overflows, delay, stability, video streaming.
Abujudeh, Hani; Vuong, Bill; Baker, Stephen R
2005-07-01
The objective of this study was to evaluate the operation of the portable X-ray machine in relation to examinations ordered by the Emergency Department at the University of Medicine and Dentistry of New Jersey, as well as to identify any bottlenecks hindering the performance of the system. To do so, the activity of the portable X-ray was monitored in the period from 8 June 2004 to 24 June 2004, as well as from 6 July 2004 to 12 July 2004, yielding 11 days of data and 116 individual X-ray examinations. During observation, times were noted at various checkpoints in the procedure. Using the data gathered, the average input, output, processing times, and variance were calculated. In turn, these values were used to calculate the response times for the Ordering Phase (5.502 min), traveling (2.483 min), Examination Phase (4.453 min), returning (3.855 min), Order Processing Phase (2.962 min), and the Development Phase (3.437 min). These phases were combined for a total of 22.721 min from the time the examination was placed to the time the X-ray films were uploaded to the PACS computer network. Based on these calculations, the Ordering Phase was determined to be the single largest bottleneck in the portable X-ray system. The Examination Phase represented the second largest bottleneck, for a combined total of 44% of the total response time. PMID:16133619
Capacity Utilization Study for Aviation Security Cargo Inspection Queuing System
Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E; Brumback, Daryl L
2010-01-01
In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The queuing model employed in our study is based on discrete-event simulation and processes various types of cargo simultaneously. Onsite measurements are collected in an airport facility to validate the queuing model. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, throughput, capacity utilization, subscribed capacity utilization, resources capacity utilization, subscribed resources capacity utilization, and number of cargo pieces (or pallets) in the different queues. These metrics are performance indicators of the system's ability to service current needs and response capacity to additional requests. We studied and analyzed different scenarios by changing various model parameters such as number of pieces per pallet, number of TSA inspectors and ATS personnel, number of forklifts, number of explosives trace detection (ETD) and explosives detection system (EDS) inspection machines, inspection modality distribution, alarm rate, and cargo closeout time. The increased physical understanding resulting from execution of the queuing model utilizing these vetted performance measures should reduce the overall cost and shipping delays associated with new inspection requirements.
Application of queuing model in Dubai's busiest megaplex
NASA Astrophysics Data System (ADS)
Bhagchandani, Maneesha; Bajpai, Priti
2013-09-01
This paper provides a study and analysis of the extremely busy booking counters at a megaplex in Dubai using a queuing model and simulation. Dubai is an emirate in the UAE with a multicultural population; the majority of the population in Dubai is foreign born. Cinema is one of the major forms of entertainment. There are more than 13 megaplexes, each with a number of screens ranging from 3 to 22, screening movies in English, Arabic, Hindi and other languages. It has been observed that during the weekends megaplexes attract large crowds, resulting in long queues at the booking counters. One of the busiest megaplexes was selected for the study. The queuing model fits well when tested against the real-time situation. The concepts of arrival rate, service rate, utilization rate, waiting time in the system and average number of people in the queue, using Little's Theorem and the M/M/s queuing model along with simulation software, have been used to suggest an empirical solution. The aim of the paper is twofold: to assess the present situation at the megaplex and to give recommendations to optimize the use of booking counters.
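The quantities this abstract names (arrival rate, service rate, utilization, waiting time, Little's Theorem, M/M/s) fit together as in the following sketch; the counter numbers and rates are illustrative assumptions, not data from the study:

```python
import math

def mms_metrics(lam, mu, s):
    """Steady-state metrics for an M/M/s queue: lam = arrival rate,
    mu = service rate per counter, s = number of counters."""
    rho = lam / (s * mu)                      # utilization rate
    assert rho < 1, "unstable: arrivals exceed total service capacity"
    a = lam / mu                              # offered load in Erlangs
    # Erlang C formula: probability that an arriving customer must wait
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(s))
                + a**s / (math.factorial(s) * (1 - rho)))
    pw = a**s / (math.factorial(s) * (1 - rho)) * p0
    lq = pw * rho / (1 - rho)                 # average number waiting in queue
    wq = lq / lam                             # Little's Theorem: Lq = lam * Wq
    return {"utilization": rho, "P(wait)": pw, "Lq": lq, "Wq": wq}

# hypothetical counter setup: 50 customers/hour, 2 min per booking, 2 counters
m = mms_metrics(lam=50 / 60, mu=1 / 2, s=2)
```

For s = 1 the Erlang C waiting probability reduces to the utilization itself, which gives a quick sanity check on the formula.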
Queuing register uses fluid logic elements
NASA Technical Reports Server (NTRS)
1966-01-01
Queuing register /a multistage bit-shifting device/ uses a series of pure fluid elements to perform the required logic operations. The register has several stages of three-state pure fluid elements combined with two-input NOR gates.
Queuing Models of Tertiary Storage
NASA Technical Reports Server (NTRS)
Johnson, Theodore
1996-01-01
Large scale scientific projects generate and use large amounts of data. For example, the NASA Earth Observation System Data and Information System (EOSDIS) project is expected to archive one petabyte per year of raw satellite data. This data is made automatically available for processing into higher level data products and for dissemination to the scientific community. Such large volumes of data can only be stored in robotic storage libraries (RSL's) for near-line access. A characteristic of RSL's is the use of a robot arm that transfers media between a storage rack and the read/write drives, thus multiplying the capacity of the system. The performance of the RSL's can be a critical limiting factor for the performance of the archive system. However, the many interacting components of an RSL make a performance analysis difficult. In addition, different RSL components can have widely varying performance characteristics. This paper describes our work to develop performance models of an RSL in isolation. Next we show how the RSL model can be incorporated into a queuing network model. We use the models to make some example performance studies of archive systems. The models described in this paper, developed for the NASA EOSDIS project, are implemented in C with a well defined interface. The source code, accompanying documentation, and also sample Java applets are available at: http://www.cis.ufl.edu/ted/
Predicting age-related differences in visual information processing using a two-stage queuing model.
Ellis, R D; Goldberg, J H; Detweiler, M C
1996-05-01
Recent work on age-related differences in some types of visual information processing has qualitatively stated that younger adults are able to develop parallel processing capability, while older adults remain serial processors. A mathematical model based on queuing theory was used to quantitatively predict and parameterize age-related differences in the perceptual encoding and central decision-making aspects of a multiple-frame search task. Statistical results indicated main effects for frame duration, display load, age group, and session of practice. Comparison of the full model and a restricted model indicated an efficient contribution of the encoding speed parameter. The best-fitting parameter set indicated that (1) younger participants processed task information with a two-channel parallel system, while older participants were serial processors; and (2) perceptual encoding had a large impact on age-related differences in task performance. Results are discussed with implications for human factors design principles. PMID:8620355
Bremer, H; Ehrenberg, M
1995-05-17
A recently reported comparison of stable RNA (rRNA, tRNA) and mRNA synthesis rates in ppGpp-synthesizing and ppGpp-deficient (delta relA delta spoT) bacteria has suggested that ppGpp inhibits transcription initiation from stable RNA promoters, as well as synthesis of (bulk) mRNA. Inhibition of stable RNA synthesis occurs mainly during slow growth of bacteria when cytoplasmic levels of ppGpp are high. In contrast, inhibition of mRNA occurs mainly during fast growth when ppGpp levels are low, and it is associated with a partial inactivation of RNA polymerase. To explain these observations it has been proposed that ppGpp causes transcriptional pausing and queuing during the synthesis of mRNA. Polymerase queuing requires high rates of transcription initiation in addition to polymerase pausing, and therefore high concentrations of free RNA polymerase. These conditions are found in fast growing bacteria. Furthermore, the RNA polymerase queues lead to a promoter blocking when RNA polymerase molecules stack up from the pause site back to the (mRNA) promoter. This occurs most frequently at pause sites close to the promoter. Blocking of mRNA promoters diverts RNA polymerase to stable RNA promoters. In this manner ppGpp could indirectly stimulate synthesis of stable RNA at high growth rates. In the present work a mathematical analysis, based on the theory of queuing, is presented and applied to the global control of transcription in bacteria. This model predicts the in vivo distribution of RNA polymerase over stable RNA and mRNA genes for both ppGpp-synthesizing and ppGpp-deficient bacteria in response to different environmental conditions. It also shows how small changes in basal ppGpp concentrations can produce large changes in the rate of stable RNA synthesis. PMID:7539631
Priority Queuing On A Parallel Data Bus
NASA Technical Reports Server (NTRS)
Wallis, D. E.
1985-01-01
Queuing strategy for communications along shared data bus minimizes number of data lines while always assuring user of highest priority given access to bus. New system handles up to 32 user demands on 17 data lines that previously serviced only 17 demands.
Theory-Based Stakeholder Evaluation
ERIC Educational Resources Information Center
Hansen, Morten Balle; Vedung, Evert
2010-01-01
This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…
Frame-Scheduling for Input-Queued Switches with Energy Reconfiguration Costs
Electronic switching fabrics are based on CMOS technology. We consider a slotted input-queued switch with a crossbar-like switching fabric. In each time-slot, a centralized scheduler determines a switching fabric configuration to transfer packets, and we consider the energy costs of reconfiguration.
Regular Variation, Subexponentiality Their Applications in Probability Theory
Mikosch, Thomas
Regular Variation, Subexponentiality and Their Applications in Probability Theory. T. Mikosch. Alternatively, one could have taken renewal theory, branching, or queuing, where one often faces the same problems.
Some queuing network models of computer systems
NASA Technical Reports Server (NTRS)
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
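The exact MVA recursion that such calculator programs implement is compact enough to sketch in a few lines; the device demands, population, and think time below are made-up illustrative values, not figures from the report:

```python
def mva(demands, customers, think_time=0.0):
    """Exact Mean Value Analysis for a closed, single-class queuing
    network of load-independent queuing centers.
    demands[k] = total service demand at device k (visits x service time)."""
    q = [0.0] * len(demands)           # mean queue lengths with n-1 customers
    for n in range(1, customers + 1):
        # arrival theorem: an arriving customer sees the (n-1)-customer queue
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / (think_time + sum(r))  # system throughput
        q = [x * rk for rk in r]       # Little's law applied at each device
    return x, r, q

# hypothetical system: two devices (0.2 s and 0.1 s demand), 5 terminals, 1 s think time
x, r, q = mva([0.2, 0.1], 5, think_time=1.0)
```

A load-dependent server, as mentioned in the abstract, needs the more general recursion over the full queue-length distribution; this sketch covers only the load-independent case.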
Case study: applying management policies to manage distributed queuing systems
NASA Astrophysics Data System (ADS)
Neumair, Bernhard; Wies, René
1996-06-01
The increasing deployment of workstations and high performance endsystems, in addition to the operation of mainframe computers, leads to a situation where many companies can no longer afford to let their expensive workstations run idle for long hours during the night or with little load during daytime. Distributed queuing systems and batch systems (DQSs) provide an efficient basis for making use of these unexploited resources and allow corporations to replace expensive supercomputers with clustered workstations running DQSs. To employ these innovative DQSs on a large scale, the management policies for scheduling jobs, configuring queues, etc. must be integrated in the overall management process for the IT infrastructure. For this purpose, the concepts of application management and management policies are introduced and discussed. The definition, automatic transformation, and implementation of policies on management platforms to effectively manage DQSs will show that policy-based application management is already possible using the existing management functionality found in today's systems.
Queuing network approach for building evacuation planning
NASA Astrophysics Data System (ADS)
Ishak, Nurhanis; Khalid, Ruzelan; Baten, Md. Azizul; Nawawi, Mohd. Kamal Mohd.
2014-12-01
The complex behavior of pedestrians in a limited space layout can explicitly be modeled using an M/G/C/C state-dependent queuing network. This paper implements the approach to study pedestrian flows through various corridors in a topological network. The best arrival rates and their impacts on the corridors' performance in terms of throughput, blocking probability, expected number of occupants in the system and expected travel time were first measured using the M/G/C/C analytical model. These best arrival rates were then fed to its Network Flow Programming model to find the best arrival rates to source corridors and routes optimizing the network's total throughput. The analytical results were then validated using a simulation model. The results of this study can be used to support current Standard Operating Procedures (SOP) to efficiently and safely evacuate people in emergencies.
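A minimal version of the M/G/C/C state-dependent calculation (in the style of corridor models such as Yuhaski and Smith's) can be sketched as follows; the linear speed-decay function and the corridor numbers are assumptions for illustration, not values from this paper:

```python
import math

def mgcc_state_dependent(lam, t1, c, f):
    """M/G/c/c state-dependent queue: lam = arrival rate, t1 = traversal
    time of a lone occupant, c = corridor capacity, f(n) = relative
    service rate with n occupants (f(1) == 1).  Returns the occupancy
    distribution, blocking probability, and throughput."""
    weights, prod_f = [1.0], 1.0
    for n in range(1, c + 1):
        prod_f *= f(n)
        weights.append((lam * t1) ** n / (math.factorial(n) * prod_f))
    norm = sum(weights)
    p = [w / norm for w in weights]
    blocking = p[c]                     # arrivals are lost when the corridor is full
    throughput = lam * (1 - blocking)
    return p, blocking, throughput

# hypothetical corridor: capacity 40, walking speed decays linearly as it fills
p, blocking, throughput = mgcc_state_dependent(
    lam=2.0, t1=5.0, c=40, f=lambda n: (41 - n) / 40)
```

With f(n) = 1 for all n the model collapses to the classical Erlang loss system, which provides a convenient correctness check.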
NQS - NETWORK QUEUING SYSTEM, VERSION 2.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Walter, H.
1994-01-01
The Network Queuing System, NQS, is a versatile batch and device queuing facility for a single Unix computer or a group of networked computers. With the Unix operating system as a common interface, the user can invoke the NQS collection of user-space programs to move batch and device jobs freely around the different computer hardware tied into the network. NQS provides facilities for remote queuing, request routing, remote status, queue status controls, batch request resource quota limits, and remote output return. This program was developed as part of an effort aimed at tying together diverse UNIX based machines into NASA's Numerical Aerodynamic Simulator Processing System Network. This revision of NQS allows for creating, deleting, adding and setting of complexes that aid in limiting the number of requests to be handled at one time. It also has improved device-oriented queues along with some revision of the displays. NQS was designed to meet the following goals: 1) Provide for the full support of both batch and device requests. 2) Support all of the resource quotas enforceable by the underlying UNIX kernel implementation that are relevant to any particular batch request and its corresponding batch queue. 3) Support remote queuing and routing of batch and device requests throughout the NQS network. 4) Support queue access restrictions through user and group access lists for all queues. 5) Enable networked output return of both output and error files to possibly remote machines. 6) Allow mapping of accounts across machine boundaries. 7) Provide friendly configuration and modification mechanisms for each installation. 8) Support status operations across the network, without requiring a user to log in on remote target machines. 9) Provide for file staging or copying of files for movement to the actual execution machine. To support batch and device requests, NQS v.2 implements three queue types--batch, device and pipe. 
Batch queues hold and prioritize batch requests; device queues hold and prioritize device requests; pipe queues transport both batch and device requests to other batch, device, or pipe queues at local or remote machines. Unique to batch queues are resource quota limits that restrict the amounts of different resources that a batch request can consume during execution. Unique to each device queue is a set of one or more devices, such as a line printer, to which requests can be sent for execution. Pipe queues have associated destinations to which they route and deliver requests. If the proper destination machine is down or unreachable, pipe queues are able to requeue the request and deliver it later when the destination is available. All NQS network conversations are performed using the Berkeley socket mechanism as ported into the respective vendor kernels. NQS is written in C language. The generic UNIX version (ARC-13179) has been successfully implemented on a variety of UNIX platforms, including Sun3 and Sun4 series computers, SGI IRIS computers running IRIX 3.3, DEC computers running ULTRIX 4.1, AMDAHL computers running UTS 1.3 and 2.1, platforms running BSD 4.3 UNIX. The IBM RS/6000 AIX version (COS-10042) is a vendor port. NQS 2.0 will also communicate with the Cray Research, Inc. and Convex, Inc. versions of NQS. The standard distribution medium for either machine version of NQS 2.0 is a 60Mb, QIC-24, .25 inch streaming magnetic tape cartridge in UNIX tar format. Upon request the generic UNIX version (ARC-13179) can be provided in UNIX tar format on alternate media. Please contact COSMIC to discuss the availability and cost of media to meet your specific needs. An electronic copy of the NQS 2.0 documentation is included on the program media. NQS 2.0 was released in 1991. The IBM RS/6000 port of NQS was developed in 1992. IRIX is a trademark of Silicon Graphics Inc. IRIS is a registered trademark of Silicon Graphics Inc. 
UNIX is a registered trademark of UNIX System Laboratories Inc. Sun3 and Sun4 are trademarks of Sun Microsystems Inc. DEC and ULTRIX are trademarks of Digital Equipment Corporation.
An application of a queuing model for sea states
NASA Astrophysics Data System (ADS)
Loffredo, L.; Monbaliu, J.; Anderson, C.
2012-04-01
Unimodal approaches in design practice have shown inconsistencies in terms of directionality and limitations for accurate sea-state description. Spectral multimodality needs to be included in the description of the wave climate; it can provide information about the coexistence of different wave systems originating from different meteorological events, such as locally generated wind waves and swell systems from distant storms. A 20-year dataset (1989-2008) for a location in the North Sea (K13, 53.2°N 3.2°E) has been retrieved from the ECMWF ERA-Interim re-analysis data archive, providing a consistent and homogeneous dataset. The work focuses on the joint and conditional probability distributions of wind sea and swell systems. For marine operations and design applications, critical combinations of wave systems may exist. We define a critical sea state on the basis of a set of thresholds, which need not be extreme; the emphasis is on dangerous combinations of different wave systems for certain operations (e.g. small-vessel navigation, dredging). The distribution of non-operability windows is described by a point process model with random and independent events, whose occurrences and lengths can be described only probabilistically. These characteristics allow us to treat the emerging patterns as part of a queuing system. According to this theory, generally adopted for several applications including traffic flows and waiting lines, the input process describes the sequence of requests for a service, and the service mechanism the length of time that these requests will occupy the facilities. For weather-driven processes at sea, an alternating renewal process appears to be a suitable model. It consists of a sequence of critical events (periods of inoperability), each of random duration, separated by calms, also of random duration. Inoperability periods and calms are assumed independent.
In this model no more than one critical event can occur at a time. The analysis is carried out taking into account threshold selection and seasonality.
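The alternating renewal structure described above is straightforward to simulate; the exponential period lengths and the specific means below are assumptions of this sketch, made for illustration:

```python
import random

def inoperability_fraction(calm_mean, busy_mean, horizon, seed=1):
    """Simulate an alternating renewal process in which independent calm
    periods and inoperability periods alternate (exponential lengths are
    an assumption of this sketch, not of the paper).
    Returns the fraction of time spent in the inoperable state."""
    rng = random.Random(seed)
    t, busy_time, busy = 0.0, 0.0, False
    while t < horizon:
        length = rng.expovariate(1.0 / (busy_mean if busy else calm_mean))
        length = min(length, horizon - t)   # truncate the last period
        if busy:
            busy_time += length
        t += length
        busy = not busy                     # calm and inoperable states alternate
    return busy_time / horizon

# long-run fraction should approach busy_mean / (calm_mean + busy_mean)
frac = inoperability_fraction(calm_mean=30.0, busy_mean=10.0, horizon=1e6)
```

The renewal-reward theorem gives the limiting value E[busy] / (E[calm] + E[busy]) regardless of the period distributions, so the exponential choice here is only a convenience.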
A queueing theory based model for business continuity in hospitals.
Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R
2013-01-01
Clinical activities can be seen as the result of a precise and defined succession of events, where every single phase is characterized by a waiting time that includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load alone is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used for evaluating the possible interventions and for protecting the whole system from technology failures. The following paper reports a case study on the application of the proposed integrated model, including a risk analysis approach and a queuing theory model, for defining the proper number of devices essential to guarantee medical activity and comply with business continuity management requirements in hospitals. PMID:24109839
FIFO Queuing of Constant Length Fully Synchronous Jobs
Louchard, Guy
Vandy Berten, Raymond Devillers and Guy Louchard. In a (computational) Grid, clients submit their jobs to a job broker, who sends them to well-chosen computing elements. Fully synchronous parallel jobs are submitted; in order to simplify the analysis, we assume constant-length jobs.
Shared Memory Parallel Regenerative Queuing Network Simulation
Katsaros, Panagiotis
Panajotis Katsaros, Constantine Lazos. The simulation results are statistically processed by applying the classical regenerative method. In this sense, the regenerative method provides a safe way of obtaining statistically valid results.
Modified weighted fair queuing for packet scheduling in mobile WiMAX networks
NASA Astrophysics Data System (ADS)
Satrya, Gandeva B.; Brotoharsono, Tri
2013-03-01
The increase of user mobility and the need for data access anytime also increase the interest in broadband wireless access (BWA). The best available quality of experience for mobile data service users is assured for IEEE 802.16e based users. The main problem in assuring a high QoS value is how to allocate available resources among users in order to meet the QoS requirements for criteria such as delay, throughput, packet loss and fairness. There is no specific standard scheduling mechanism stated by the IEEE standards, which leaves it open for implementer differentiation. There are five QoS service classes defined by IEEE 802.16: Unsolicited Grant Scheme (UGS), Extended Real Time Polling Service (ertPS), Real Time Polling Service (rtPS), Non Real Time Polling Service (nrtPS) and Best Effort Service (BE). Each class has different QoS parameter requirements for throughput and delay/jitter constraints. This paper proposes a Modified Weighted Fair Queuing (MWFQ) scheduling scenario based on Weighted Round Robin (WRR) and Weighted Fair Queuing (WFQ). The performance of MWFQ was assessed using the above five QoS criteria. The simulation shows that using the concept of total packet size calculation improves the network's performance.
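The core of WFQ-style ordering is the virtual finish time: each packet of flow i finishes size/weight[i] units after the flow's previous packet, and packets are served in ascending finish order. A simplified sketch, assuming all flows are continuously backlogged (which sidesteps full system-virtual-time bookkeeping):

```python
def wfq_schedule(flows, weights):
    """flows[i]: list of packet sizes of flow i; weights[i]: its WFQ weight.
    Returns (flow, packet-index) pairs in service order."""
    entries = []
    for i, packets in flows.items():
        finish = 0.0
        for seq, size in enumerate(packets):
            finish += size / weights[i]   # higher weight -> earlier virtual finish
            entries.append((finish, i, seq))
    entries.sort()                        # serve by ascending virtual finish time
    return [(i, seq) for _, i, seq in entries]

# flow 0 has twice the weight of flow 1, so its packets finish (virtually) sooner
order = wfq_schedule({0: [100, 100], 1: [100, 100]}, {0: 2, 1: 1})
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

This is a sketch of the WFQ idea only; the paper's MWFQ combines WFQ with WRR in a way not reproduced here.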
Using multi-class queuing network to solve performance models of e-business sites.
Zheng, Xiao-ying; Chen, De-ren
2004-01-01
Due to e-business's variety of customers with different navigational patterns and demands, a multi-class queuing network is a natural performance model for it. Open multi-class queuing network (QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has huge time and space requirements. As mixed multi-class QN models include some open and some closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed model algorithm can be applied. Corresponding examples are given to show how to apply the algorithms mentioned in this article. These examples indicate that a multi-class QN is a reasonably accurate model of e-business and can be solved efficiently. PMID:14663849
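The open multi-class formulas the abstract refers to (utilization, residence time, queue length, response time) can be sketched as follows; the two-class example values are invented for illustration, not taken from the article:

```python
def open_multiclass_qn(arrival_rates, demands):
    """Open multi-class product-form queuing network of queuing centers.
    arrival_rates[c]: arrival rate of class c;
    demands[c][k]: service demand of class c at center k."""
    C, K = len(arrival_rates), len(demands[0])
    # total utilization of each center, summed over classes
    util = [sum(arrival_rates[c] * demands[c][k] for c in range(C))
            for k in range(K)]
    assert all(u < 1 for u in util), "a service center is saturated"
    # per-class residence time at each center, response time, queue length
    residence = [[demands[c][k] / (1 - util[k]) for k in range(K)]
                 for c in range(C)]
    response = [sum(residence[c]) for c in range(C)]
    qlen = [[arrival_rates[c] * residence[c][k] for k in range(K)]
            for c in range(C)]
    return util, residence, response, qlen

# hypothetical site: two classes (browsers, buyers) over two centers (CPU, disk)
util, residence, response, qlen = open_multiclass_qn(
    [0.3, 0.1], [[1.0, 0.5], [2.0, 1.0]])
```

The assertion mirrors the abstract's stability assumption that no center is saturated by the combined class loads.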
Time-varying priority queuing models for human dynamics
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Pan, Raj Kumar; Kaski, Kimmo
2012-06-01
Queuing models provide insight into the temporal inhomogeneity of human dynamics, characterized by the broad distribution of waiting times of individuals performing tasks. We theoretically study the queuing model of an agent trying to execute a task of interest, the priority of which may vary with time due to the agent's “state of mind.” However, its execution is disrupted by other tasks of random priorities. By considering the priority of the task of interest either decreasing or increasing algebraically in time, we analytically obtain and numerically confirm the bimodal and unimodal waiting time distributions with power-law decaying tails, respectively. These results are also compared to the updating time distribution of papers in arXiv.org and the processing time distribution of papers in Physical Review journals. Our analysis helps to understand human task execution in a more realistic scenario.
Priority Queuing Models for Hospital Intensive Care Units and Impacts to Severe Case Patients
Hagen, Matthew S.; Jopling, Jeffrey K; Buchman, Timothy G; Lee, Eva K.
2013-01-01
This paper examines several different queuing models for intensive care units (ICU) and the effects on wait times, utilization, return rates, mortalities, and number of patients served. Five separate intensive care units at an urban hospital are analyzed and distributions are fitted for arrivals and service durations. A system-based simulation model is built to capture all possible cases of patient flow after ICU admission. These include mortalities and returns before and after hospital exits. Patients are grouped into 9 different classes that are categorized by severity and length of stay (LOS). Each queuing model varies by the policies that are permitted and by the order the patients are admitted. The first set of models does not prioritize patients, but examines the advantages of smoothing the operating schedule for elective surgeries. The second set analyzes the differences between prioritizing admissions by expected LOS or patient severity. The last set permits early ICU discharges and conservative and aggressive bumping policies are contrasted. It was found that prioritizing patients by severity considerably reduced delays for critical cases, but also increased the average waiting time for all patients. Aggressive bumping significantly raised the return and mortality rates, but more conservative methods balance quality and efficiency with lowered wait times without serious consequences. PMID:24551379
Wang, Jie; Cui, Kai; Zhou, Kuanjiu; Yu, Yanshuo
2014-01-01
Due to the limited resources of wireless sensor networks, the low efficiency of real-time communication scheduling, poor safety defects, and so forth, a queuing performance evaluation approach based on regular expression matching is proposed; the method consists of a matching preprocessing phase, a validation phase, and a queuing-model performance evaluation phase. Firstly, the subset of related sequences is generated in the preprocessing phase, guiding distributed matching in the validation phase. Secondly, in the validation phase, subsets of features are clustered, making the compressed matching table more convenient for distributed parallel matching. Finally, based on the queuing model, the dynamic performance of task scheduling in the sensor network is evaluated. Experiments show that our approach ensures accurate matching and computational efficiency of more than 70%; it not only effectively detects data packets and access control, but also uses the queuing method to determine the parameters of task scheduling in wireless sensor networks. The method has good applicability for medium- or large-scale distributed wireless nodes. PMID:25401151
Efficient Buffering and Scheduling for a Single-Chip Crosspoint-Queued Switch
Panwar, Shivendra S.
Zizhong Cao. The single-chip crosspoint-queued (CQ) switch is a self-sufficient switching architecture enabled by state-of-the-art… …Routers. General Terms: Algorithms, Design. Keywords: Single-Chip, Crossbar, Load Balancing, Deflection Routing.
SENSITIVITY ANALYSIS ON A DYNAMIC PRICING PROBLEM OF AN M/M/c QUEUING SYSTEM
Örmeci, E. Lerzan
Eren Başar Çil, Fikri… In recent research, Ziya et al. (2002) study the effect of the customer willingness to pay, the system… The purpose of this article is to analyze the effects of changes in the system parameters on an M/M/c queuing system in which…
Prabhakar, Balaji
IEEE Transactions on Information Theory, vol. 52, no. 11, November 2006. …results from queuing theory for continuous-time networks. Index Terms: product-form equilibrium, queuing…
NAS Requirements Checklist for Job Queuing/Scheduling Software
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.
Duality Between Prefetching and Queued Writing with Parallel Disks
Hutchinson, David A.; Sanders, Peter; Vitter, Jeffrey Scott
2005-01-01
Society for Industrial and Applied Mathematics, Vol. 34, No. 6, pp. 1443–1463. Duality Between Prefetching and Queued Writing with Parallel Disks. David A. Hutchinson, Peter Sanders, and Jeffrey Scott Vitter. Abstract: Parallel disks promise to be a cost-effective… [Fig. 1.1: A sequence of 16 blocks allocated to 4 disks (1 = white, 2 = light grey, 3 = dark grey, 4 = black) using different allocation strategies.] …in a simple, round-robin manner. Simple randomized (SR): For each…
NASA Astrophysics Data System (ADS)
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices, SRAM modules, and integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time.
The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 µm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and the MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed are given in this dissertation.
Basing quantum theory on information processing
NASA Astrophysics Data System (ADS)
Barnum, Howard
2008-03-01
I consider information-based derivations of the quantum formalism, in a framework encompassing quantum and classical theory and a broad spectrum of theories serving as foils to them. The most ambitious hope for such a derivation is a role analogous to Einstein's development of the dynamics and kinetics of macroscopic bodies, and later of their gravitational interactions, on the basis of simple principles with clear operational meanings and experimental consequences. Short of this, it could still provide a principled understanding of the features of quantum mechanics that account for its greater-than-classical information-processing power, helping guide the search for new quantum algorithms and protocols. I summarize the convex operational framework for theories, and discuss information-processing in theories therein. Results include the fact that information that can be obtained without disturbance is inherently classical, generalized no-cloning and no-broadcasting theorems, exponentially secure bit commitment in all non-classical theories without entanglement, properties of theories that allow teleportation, and properties of theories that allow ``remote steering'' of ensembles using entanglement. Joint work with collaborators including Jonathan Barrett, Matthew Leifer, Alexander Wilce, Oscar Dahlsten, and Ben Toner.
Jigsaw Cooperative Learning: Acid-Base Theories
ERIC Educational Resources Information Center
Tarhan, Leman; Sesen, Burcin Acar
2012-01-01
This study focused on investigating the effectiveness of jigsaw cooperative learning instruction on first-year undergraduates' understanding of acid-base theories. Undergraduates' opinions about jigsaw cooperative learning instruction were also investigated. The participants of this study were 38 first-year undergraduates in chemistry education…
Evaluation of Job Queuing/Scheduling Software: Phase I Report
NASA Technical Reports Server (NTRS)
Jones, James Patton
1996-01-01
The recent proliferation of high performance work stations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the national Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process, and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMS's. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.
Second Evaluation of Job Queuing/Scheduling Software. Phase 1
NASA Technical Reports Server (NTRS)
Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)
1997-01-01
The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation, and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking, however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.
Non-Equilibrium Statistical Physics of Currents in Network Queuing
NASA Astrophysics Data System (ADS)
Chertkov, Michael; Chernyak, Vladimir; Goldberg, David; Turitsyn, Konstantin
2010-03-01
We present a framework for studying large deviations of currents in a queuing network viewed as a non-equilibrium system of interacting particles. The network is completely specified by its underlying graphical structure, the number of servers (type of interaction) at each node, and the Poisson transition rates between nodes/stations. We focus on analyzing the statistics of currents over the network for the class of stable (statistically steady) networks. Some of our results are general (and surprising) explicit statements and some are conjectures, validated on a network with feedback which allows an independent spectral analysis. In particular, we show that for sufficiently strong atypical currents the system experiences a dynamical transition into a ``congested" regime, characterized by the saturation of certain servers in the network. We also discuss possible applications of these results for the analysis and control of traffic flows in transportation networks and of scheduling power flows in electric grids.
Thermal Control for Crossbar-based Input-Queued Switches
…is proportional to f³ [1]. In an N × N single-chip crossbar with N² crosspoints, each implemented… …through control of a single-chip crossbar, analyzing the tradeoff between throughput (i.e., performance…
The Scope of Usage-Based Theory
Ibbotson, Paul
2013-01-01
Usage-based approaches typically draw on a relatively small set of cognitive processes, such as categorization, analogy, and chunking to explain language structure and function. The goal of this paper is to first review the extent to which the “cognitive commitment” of usage-based theory has had success in explaining empirical findings across domains, including language acquisition, processing, and typology. We then look at the overall strengths and weaknesses of usage-based theory and highlight where there are significant debates. Finally, we draw special attention to a set of culturally generated structural patterns that seem to lie beyond the explanation of core usage-based cognitive processes. In this context we draw a distinction between cognition permitting language structure vs. cognition entailing language structure. As well as addressing the need for greater clarity on the mechanisms of generalizations and the fundamental units of grammar, we suggest that integrating culturally generated structures within existing cognitive models of use will generate tighter predictions about how language works. PMID:23658552
Harmonic-Oscillator-Based Effective Theory
W. C. Haxton
2006-08-06
I describe harmonic-oscillator-based effective theory (HOBET) and explore the extent to which the effects of excluded higher-energy oscillator shells can be represented by a contact-gradient expansion in next-to-next-to-leading order (NNLO). I find the expansion can be very successful provided the energy dependence of the effective interaction, connected with missing long-wavelength physics associated with low-energy breakup channels, is taken into account. I discuss a modification that removes operator mixing from HOBET, simplifying the task of determining the parameters of an NNLO interaction.
Application of queuing models to electronic toll collection
NASA Astrophysics Data System (ADS)
Zarrillo, Marguerite L.; Radwan, A. E.; Al-Deek, H. M.
1998-01-01
Electronic Toll Collection (ETC) via Automatic Vehicle Identification (AVI) technology has significantly altered traffic operations during toll collection. In particular, the average processing rate of a lane providing both ETC and traditional service fluctuates over the rush hour between the average processing rate of the traditional service and the capacity of the ETC service. This study develops a queuing model to address the changing processing rates of the different mixed lanes. The model is applied to the westbound 9-lane portion of the Holland East Plaza in Orlando, Florida. Data are evaluated for 6 different rush hours covering 3 different configuration patterns implemented over a period of 3 years. In the first configuration, only traditional toll collection services are provided. In another configuration, all traditional lanes become mixed lanes that include ETC, except for the center lane, which is dedicated solely to ETC service. In the final configuration, two lanes are dedicated to ETC service.
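The fluctuation of a mixed lane's rate between the traditional rate and the ETC capacity can be illustrated with a simple flow argument (a sketch, not the paper's calibrated model; the function name and the numeric rates are assumptions): the mean service time is the usage-weighted mean of the two service times, so the effective rate is a weighted harmonic mean of the two rates.

```python
def mixed_lane_rate(p_etc, rate_etc, rate_manual):
    """Effective processing rate (vehicles/hour) of a mixed lane when a
    fraction p_etc of its vehicles use ETC.  The mean service time is the
    usage-weighted mean of the two service times, so the effective rate
    is a weighted harmonic mean of the two rates."""
    mean_service_time = p_etc / rate_etc + (1.0 - p_etc) / rate_manual
    return 1.0 / mean_service_time

# illustrative (made-up) rates: manual 350 veh/h, ETC capacity 1200 veh/h
half_and_half = mixed_lane_rate(0.5, 1200.0, 350.0)
```

Because the mean is harmonic rather than arithmetic, a 50/50 traffic mix yields a rate well below the midpoint of the two service rates.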
Spectrally queued feature selection for robotic visual odometry
NASA Astrophysics Data System (ADS)
Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike
2011-01-01
Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the biological sciences. Researchers have been investigating mechanical aspects of various species to improve the intrinsic air and ground mobility of UVs; exploring computational aspects of the brain for the development of pattern recognition and decision algorithms; and studying the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US Army Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired, spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis of multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of the data sets required to enable human-like behavior in these systems has yet to be defined.
Modelling Pedestrian Travel Time and the Design of Facilities: A Queuing Approach
Rahman, Khalidur; Abdul Ghani, Noraida; Abdulbasah Kamil, Anton; Mustafa, Adli; Kabir Chowdhury, Md. Ahmed
2013-01-01
Pedestrian movements are the consequence of several complex and stochastic factors. Modelling pedestrian movements and the ability to predict travel time are useful for evaluating the performance of a pedestrian facility. However, only a few studies incorporate the design of the facility, local pedestrian body dimensions, the delay experienced by pedestrians, and level of service into models of pedestrian movement. In this paper, a queuing-based analytical model is developed as a function of relevant determinants and functional factors to predict the travel time on pedestrian facilities. The model can be used to assess the overall serving rate or performance of a facility layout and correlate it to the level of service it is possible to provide to pedestrians. It also gives clear guidance on the design and sizing of pedestrian facilities. The model is empirically validated and found to be a robust tool for understanding how well a particular walking facility enables comfortable and convenient pedestrian movements. A sensitivity analysis is also performed to assess the impact of some crucial parameters of the developed model on the performance of pedestrian facilities. PMID:23691055
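The paper's analytical model is not reproduced in the abstract; as a generic illustration of queuing-based travel-time prediction, the steady-state M/M/1 formulas relate a facility's arrival rate and serving rate to the mean time a pedestrian spends in it (the rates below are made-up numbers, and `mm1_metrics` is not the authors' model):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics: utilization rho, mean number in the
    system L, and mean time in the system W.  Little's law ties them
    together: L = lam * W."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu
    L = rho / (1.0 - rho)        # mean number in system
    W = 1.0 / (mu - lam)         # mean time in system (wait + service)
    return rho, L, W

# e.g. 0.8 pedestrians/s arriving at a gate that serves 1.0 pedestrian/s
rho, L, W = mm1_metrics(0.8, 1.0)
```

The design implication is the usual one: travel time W blows up as utilization approaches 1, which is why sizing a facility to its expected flow matters.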
Doubly Special Relativity theories as different bases of κ-Poincaré algebra
J. Kowalski-Glikman; S. Nowak
2002-03-05
Doubly Special Relativity (DSR) theory is a theory with two observer-independent scales, of velocity and mass (or length). Such a theory has been proposed by Amelino-Camelia as a kinematic structure which may underlie a quantum theory of relativity. Recently another theory of this kind has been proposed by Magueijo and Smolin. In this paper we show that both these theories can be understood as particular bases of the κ-Poincaré theory based on quantum (Hopf) algebra. This observation makes it possible to construct the space-time sector of the Magueijo and Smolin DSR. We also show how this construction can be extended to the whole class of DSRs. It turns out that for all such theories the structure of space-time commutators is the same. These results lead us to the claim that physical predictions of a properly defined DSR theory should be independent of the choice of basis.
MODELING HUMAN TRANSCRIPTION TYPING WITH QUEUING NETWORK-MODEL HUMAN PROCESSOR (QN-MHP)
Wu, Changxu (Sean)
…of the 31 behavioral phenomena in transcription typing (Salthouse, 1986, 1987; Gentner, 1983). However, … …in transcription typing, including 5 typing-error phenomena, 3 eye-movement phenomena, and 2 brain-imaging phenomena.
A Queuing Network Model with Blocking: Analysis of Congested Patient Flows in Mental Health Systems
Smith, Tony E.
…mentally ill patients with more adequate care by replacing the state hospital function with a continuum… …for Mental Health Policy Services Research, Department of Psychiatry, School of Medicine, University…
Hajek, Bruce
…bandwidth of bursty data sources, deterministic constraints on data streams, queuing theory, and switching. IEEE Transactions on Information Theory, vol. 44, no. 6, October 1998 (Invited Paper). Abstract: Information theory has not yet had a direct impact on networking, although…
Feature-Based Binding and Phase Theory
ERIC Educational Resources Information Center
Antonenko, Andrei
2012-01-01
Current theories of binding cannot provide a uniform account for many facts associated with the distribution of anaphors, such as long-distance binding effects and the subject-orientation of monomorphemic anaphors. Further, traditional binding theory is incompatible with minimalist assumptions. In this dissertation I propose an analysis of…
Theoretical description of metabolism using queueing theory.
Evstigneev, Vladyslav P; Holyavka, Marina G; Khrapatiy, Sergii V; Evstigneev, Maxim P
2014-09-01
A theoretical description of the process of metabolism has been developed on the basis of the Pachinko model (see Nicholson and Wilson in Nat Rev Drug Discov 2:668-676, 2003) and queueing theory. The suggested approach relies on the probabilistic nature of metabolic events and the Poisson distribution of the incoming flow of substrate molecules. The main focus of the work is the output flow of metabolites, i.e. the effectiveness of the metabolism process. The two simplest models have been analyzed: short- and long-living complexes of the source molecules with a metabolizing point (Hole), without queuing. It has been concluded that the approach based on queueing theory enables a very broad range of metabolic events to be described theoretically from a single probabilistic point of view. PMID:25142745
Scheme-Based Synthesis of Inductive Theories
Montano-Rivas, O.; McCasland, R.; Dixon, L.; Bundy, Alan
2010-01-01
We describe an approach to automatically invent/explore new mathematical theories, with the goal of producing results comparable to those produced by humans, as represented, for example, in the libraries of the Isabelle ...
Bull, Susan
Theory-based Support for Mobile Language Learning: Noticing and Recording. doi:10.3991/ijim.v3i2.740. A. Kukulska-Hulme and S. Bull, United Kingdom. Abstract: This paper considers the issue of 'noticing' in second language acquisition…
Environmental Quality Indices Based on Evolutionary Theory
ERIC Educational Resources Information Center
Calabrese, Edward J.
1976-01-01
This paper proposes that any depth of understanding of environmental indices (and therefore, their determination) can be achieved only through the establishment of the evolutionary paradigm as the fundamental basis of all the environmental-biological problems Man encounters. Evolutionary theory could be used in the understanding of other parametal…
Maximum entropy principle based estimation of performance distribution in queueing theory.
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In research on queuing systems, it is common practice to assume, in order to determine the system state, that the system is stable and that the distributions of the customer arrival rate and service rate are known. In this study, the queuing system is treated as a black box: no assumptions are made about the distributions of the arrival and service rates, and only the assumption of stability is kept. By applying the principle of maximum entropy, the performance distribution of a queuing system is derived from easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions derived. Using the chi-square goodness-of-fit test, the accuracy and practical generality of the maximum entropy approach are demonstrated. PMID:25207992
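A sketch of the maximum-entropy idea (the `maxent_queue_dist` helper and its bisection scheme are my assumptions; the paper's estimator may use different constraints): maximizing entropy over queue lengths 0..N subject only to a known mean yields a truncated-geometric form p_n ∝ x^n, with x solved numerically since the implied mean is monotone increasing in x.

```python
def maxent_queue_dist(capacity, mean_n):
    """Maximum-entropy distribution of the number in system over
    {0, ..., capacity} given only its mean: p_n = x**n / Z (a truncated
    geometric form).  x is found by bisection on the implied mean."""
    def implied_mean(x):
        if abs(x - 1.0) < 1e-15:         # uniform limit
            return capacity / 2.0
        weights = [x ** n for n in range(capacity + 1)]
        z = sum(weights)
        return sum(n * w for n, w in enumerate(weights)) / z

    lo, hi = 1e-9, 1e9                   # bracket for the geometric ratio x
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if implied_mean(mid) < mean_n:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2.0
    z = sum(x ** n for n in range(capacity + 1))
    return [x ** n / z for n in range(capacity + 1)]
```

With a mean of half the capacity the result approaches the uniform distribution (x → 1); with a smaller mean the distribution decays geometrically.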
MODELING AND PERFORMANCE EVALUATION FOR AVIATION SECURITY CARGO INSPECTION QUEUING SYSTEM
Allgood, Glenn O; Olama, Mohammed M; Rose, Terri A; Brumback, Daryl L
2009-01-01
Beginning in 2010, the U.S. will require that all cargo loaded on passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we conduct a performance evaluation study of an aviation security cargo inspection queuing system for material flow and accountability. The overall performance of the aviation security cargo inspection system is computed, analyzed, and optimized for the different system dynamics. Various performance measures are considered, such as system capacity, residual capacity, and throughput. These metrics are indicators of the system's ability to service current needs and its response capacity to additional requests. The increased physical understanding resulting from execution of the queuing model with these vetted performance measures will reduce the overall cost and shipping delays associated with the new inspection requirements.
Discrete-time Queuing Analysis of Opportunistic Spectrum Access: Single User Case
NASA Astrophysics Data System (ADS)
Wang, Jin-long; Xu, Yu-hua; Gao, Zhan; Wu, Qi-hui
2011-11-01
This article studies the discrete-time queuing dynamics of opportunistic spectrum access (OSA) systems, in which the secondary user seeks spectrum vacancies between bursty transmissions of the primary user to communicate. Since spectrum sensing and data transmission cannot be performed simultaneously, the secondary user employs a sensing-then-transmission strategy to detect the presence of the primary user before accessing the licensed channel. Consequently, the transmission of the secondary user is periodically suspended for spectrum sensing. To capture the discontinuous transmission nature of the secondary user, we introduce a discrete-time queuing model subject to bursty preemption to describe the behavior of the secondary user. Specifically, we derive some important metrics of the secondary user, including secondary spectrum utilization ratio, buffer length, packet delay, and packet dropping ratio. Finally, simulation results validate the proposed theoretical model and reveal that the theoretical results fit the simulated results well.
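A minimal discrete-time sketch of the sensing-then-transmission behavior (hypothetical: the frame structure, rates, and `simulate_osa` helper are assumptions, not the paper's model) shows how periodic sensing suspensions inflate queue length and packet delay:

```python
import random

def simulate_osa(p_arrival, frame_len, sense_slots, n_frames, seed=1):
    """Toy discrete-time queue for a sensing-then-transmission secondary
    user: each frame starts with `sense_slots` sensing slots (transmission
    suspended) followed by transmission slots serving one packet each.
    Packets arrive Bernoulli(p_arrival) per slot.  Returns (mean queue
    length, mean packet delay in slots)."""
    rng = random.Random(seed)
    queue = []                           # arrival slot of each buffered packet
    delays, qlen_sum, slot = [], 0, 0
    for _ in range(n_frames):
        for s in range(frame_len):
            if rng.random() < p_arrival:
                queue.append(slot)
            if s >= sense_slots and queue:   # transmission sub-slot
                delays.append(slot - queue.pop(0))
            qlen_sum += len(queue)
            slot += 1
    return qlen_sum / slot, (sum(delays) / len(delays)) if delays else 0.0
```

Running the same arrival stream with and without sensing suspensions isolates the cost of the periodic preemption.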
Theory of friction based on brittle fracture
Byerlee, J.D.
1967-01-01
A theory of friction is presented that may be more applicable to geologic materials than the classic Bowden and Tabor theory. In the model, surfaces touch at the peaks of asperities and sliding occurs when the asperities fail by brittle fracture. The coefficient of friction, μ, was calculated from the strength of asperities of certain ideal shapes; for cone-shaped asperities, μ is about 0.1 and for wedge-shaped asperities, μ is about 0.15. For actual situations which seem close to the ideal model, observed μ was found to be very close to 0.1, even for materials such as quartz and calcite with widely differing strengths. If surface forces are present, the theory predicts that μ should decrease with load and that it should be higher in a vacuum than in air. In the presence of a fluid film between sliding surfaces, μ should depend on the area of the surfaces in contact. Both effects are observed. The character of wear particles produced during sliding and the way in which μ depends on normal load, roughness, and environment lend further support to the model of friction presented here. © 1967 The American Institute of Physics.
Task-Based Language Teaching and Expansive Learning Theory
ERIC Educational Resources Information Center
Robertson, Margaret
2014-01-01
Task-Based Language Teaching (TBLT) has become increasingly recognized as an effective pedagogy, but its location in generalized sociocultural theories of learning has led to misunderstandings and criticism. The purpose of this article is to explain the congruence between TBLT and Expansive Learning Theory and the benefits of doing so. The merit…
Monotonicity Reasoning in Formal Semantics Based on Modern Type Theories
Luo, Zhaohui
…monotonicity reasoning can be dealt with in MTTs, so that it can contribute to NLI based on the MTT-types… …in the Montagovian setting of simple type theory. In MTT-semantics, richer type constructors involving dependent… …in Modern Type Theories (MTTs), or MTT-semantics for short, was first studied by Ranta in Martin-Löf's type…
A Declarative Agent Programming Language Based on Action Theories
Thielscher, Michael
…practical agent programming languages on the one hand and elaborate action theories on the other hand. Conrad Drescher, Stephan Schiffel… (inf.tu-dresden.de). Abstract: We discuss a new concept of agent programs that combines logic programming…
Theory-Based Approaches to the Concept of Life
ERIC Educational Resources Information Center
El-Hani, Charbel Nino
2008-01-01
In this paper, I argue that characterisations of life through lists of properties have several shortcomings and should be replaced by theory-based accounts that explain the coexistence of a set of properties in living beings. The concept of life should acquire its meaning from its relationships with other concepts inside a theory. I illustrate…
Spectrum Investment with Uncertainty Based on Prospect Theory
Huang, Jianwei
Junlin Yu, Man Hon Cheung, and Jianwei Huang. Abstract: We study a secondary wireless operator's spectrum investment problem under spectrum supply uncertainty using prospect theory. In order to meet the demands of its users, the secondary…
Modeling Crowd Behavior Based on Social Comparison Theory: Extended Abstract
Kaminka, Gal A.
…model of crowd behavior, based on Festinger's Social Comparison Theory, a social psychology theory known… …isting models, in a variety of fields, leave many open challenges. In social sciences and psychology… …according to forces of attraction and repulsion. Pedestrians react both to obstacles and to other…
A Multiple Constraint Queuing Model for Predicting Current and Future Terminal Area Capacities
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2004-01-01
A new queuing model is being developed to evaluate the capacity benefits of several new concepts for terminal airspace operations. The major innovation is the ability to support a wide variety of multiple constraints for modeling the scheduling logic of several concepts. Among the constraints modeled are in-trail separation, separation between aircraft landing on parallel runways, in-trail separation at terminal area entry points, and permissible terminal area flight times.
Assessing the Queuing Process Using Data Envelopment Analysis: an Application in Health Centres.
Safdar, Komal A; Emrouznejad, Ali; Dey, Prasanta K
2016-01-01
Queuing is one of the most important criteria for assessing the performance and efficiency of any service industry, including healthcare. Data Envelopment Analysis (DEA) is one of the most widely used techniques for performance measurement in healthcare. However, no queue management application has been reported in the health-related DEA literature. Most studies of patient flow systems have aimed to improve an already existing appointment system. The current study presents a novel application of DEA for assessing the queuing process at an outpatients' department of a large public hospital in a developing country where appointment systems do not exist. The main aim is to demonstrate the usefulness of DEA modelling in the evaluation of a queue system. The patient flow pathway considered in this study consists of two stages: consultation with a doctor and pharmacy. The DEA results indicated that waiting times and the other queuing variables considered require substantial reduction at both stages. PMID:26558394
NASA Astrophysics Data System (ADS)
Chowdhury, Prasun; Saha Misra, Iti
2014-10-01
With the increasing demand for Broadband Wireless Access (BWA) networks, guaranteed Quality of Service (QoS) is required to manage the seamless transmission of heterogeneous handoff calls. To this end, this paper proposes an improved Call Admission Control (CAC) mechanism with a prioritized handoff queuing scheme that aims to reduce the dropping probability of handoff calls. Handoff calls are queued when no bandwidth is available even after the allowable bandwidth degradation of ongoing calls, and a queued handoff call is admitted into the network, with priority over newly originated calls, when an ongoing call terminates. An analytical Markov model for the proposed CAC mechanism is developed to analyze various performance parameters. Analytical results show that the proposed CAC with handoff queuing prioritizes handoff calls effectively and reduces the dropping probability of the system by 78.57% for real-time traffic without increasing the number of failed new call attempts. This results in increased bandwidth utilization of the network.
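The prioritization idea behind such CAC schemes can be sketched as a classical guard-channel birth-death chain. This is an illustrative simplification, not the paper's model (which also includes bandwidth degradation and an explicit handoff queue); the function names and parameter values are assumptions:

```python
# Guard-channel Call Admission Control as a birth-death chain (illustrative sketch).
# C channels total; g channels reserved for handoff calls; exponential call
# holding times with rate mu; Poisson arrivals of new and handoff calls.

def guard_channel_probs(C, g, lam_new, lam_handoff, mu):
    """Stationary distribution of the number of busy channels."""
    p = [1.0]  # unnormalized probabilities via detailed balance
    for k in range(C):
        birth = lam_new + lam_handoff if k < C - g else lam_handoff
        p.append(p[-1] * birth / ((k + 1) * mu))
    total = sum(p)
    return [x / total for x in p]

def blocking_probabilities(C, g, lam_new, lam_handoff, mu):
    p = guard_channel_probs(C, g, lam_new, lam_handoff, mu)
    new_call_blocking = sum(p[C - g:])  # new calls rejected once C-g channels busy
    handoff_dropping = p[C]             # handoffs dropped only when all C are busy
    return new_call_blocking, handoff_dropping

b_new, b_ho = blocking_probabilities(C=10, g=2, lam_new=4.0, lam_handoff=1.0, mu=1.0)
print(b_new, b_ho)  # reserving channels makes handoff dropping far below new-call blocking
```

With g = 0 the two probabilities coincide (plain Erlang-B blocking of the combined traffic); increasing g trades higher new-call blocking for lower handoff dropping, which is the effect the abstract quantifies.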
A Noncompetitive Game Based on Community Theory: METROPROB.
ERIC Educational Resources Information Center
Allen, Irving
1979-01-01
Describes METROPROB, a noncompetitive teaching game based on the theory of metropolitan locality-relevant problems and stressing their interactions. Discusses potential for teaching systemic analysis of urban problems and helping students think sociologically. (Author/CK)
Game Theory based Peer Grading Mechanisms for MOOCs
Wu, William
An efficient peer grading mechanism is proposed for grading the multitude of assignments in online courses. This novel approach is based on game theory and mechanism design. A set of assumptions and a mathematical model ...
Unifying ecology and macroevolution with individual-based theory
Rosindell, James; Harmon, Luke J; Etienne, Rampal S
2015-01-01
A contemporary goal in both ecology and evolutionary biology is to develop theory that transcends the boundary between the two disciplines, to understand phenomena that cannot be explained by either field in isolation. This is challenging because macroevolution typically uses lineage-based models, whereas ecology often focuses on individual organisms. Here, we develop a new parsimonious individual-based theory by adding mild selection to the neutral theory of biodiversity. We show that this model generates realistic phylogenies showing a slowdown in diversification and also improves on the ecological predictions of neutral theory by explaining the occurrence of very common species. Moreover, we find the distribution of individual fitness changes over time, with average fitness increasing at a pace that depends positively on community size. Consequently, large communities tend to produce fitter species than smaller communities. These findings have broad implications beyond biodiversity theory, potentially impacting, for example, invasion biology and paleontology. PMID:25818618
Where Theory meets Practice: A Case for an Activity Theory based Methodology to guide Computer System Design
Mwanza, Daisy (Knowledge Media Institute)
Activity Theory (AT) has emerged as a suitable framework for analysing the social and cultural context of designing a computer system.
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 8, No 3
Borissova, Daniela
Two formalisms are considered: queuing theory [2] and hidden Markov models (HMM) [3]. Queuing theory also uses Markov chains and models, but here we address security baselining based on a simple flow analysis, utilizing flow measurements and queuing theory.
Ground reaction curve based upon block theory
Yow, J.L. Jr.; Goodman, R.E.
1985-09-01
Discontinuities in a rock mass can intersect an excavation surface to form discrete blocks (keyblocks) which can be unstable. Once a potentially unstable block is identified, the forces affecting it can be calculated to assess its stability. The normal and shear stresses on each block face before displacement are calculated using elastic theory and are modified in a nonlinear way by discontinuity deformations as the keyblock displaces. The stresses are summed into resultant forces to evaluate block stability. Since the resultant forces change with displacement, successive increments of block movement are examined to see whether the block ultimately becomes stable or fails. Two-dimensional (2D) and three-dimensional (3D) analytic models for the stability of simple pyramidal keyblocks were evaluated. Calculated stability is greater for 3D analyses than for 2D analyses. Calculated keyblock stability increases with larger in situ stress magnitudes, larger lateral stress ratios, and larger shear strengths. Discontinuity stiffness controls block displacement more strongly than it does stability itself. Large keyblocks are less stable than small ones, and stability increases as blocks become more slender.
Code of Federal Regulations, 2013 CFR
2013-04-01
...2013-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661...PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds?...
Code of Federal Regulations, 2010 CFR
2010-04-01
...2010-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661...PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds?...
Code of Federal Regulations, 2011 CFR
2011-04-01
...2011-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661...PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds?...
Code of Federal Regulations, 2014 CFR
2014-04-01
...2014-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661...PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds?...
Code of Federal Regulations, 2012 CFR
2012-04-01
...2012-04-01 false Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds? 661...PROGRAM § 661.43 Can other sources of funds be used to finance a queued project in advance of receipt of IRRBP funds?...
Combined Field Integral Equation Based Theory of Characteristic Mode
Qi I. Dai; Qin S. Liu; Hui Gan; Weng Cho Chew
2015-03-04
Conventional electric field integral equation based theory is susceptible to the spurious internal resonance problem when the characteristic modes of closed perfectly conducting objects are computed iteratively. In this paper, we present a combined field integral equation based theory to remove the difficulty of internal resonances in characteristic mode analysis. The electric and magnetic field integral operators are shown to share a common set of non-trivial characteristic pairs (values and modes), leading to a generalized eigenvalue problem which is immune to the internal resonance corruption. Numerical results are presented to validate the proposed formulation. This work may offer efficient solutions to characteristic mode analysis which involves electrically large closed surfaces.
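The generalized eigenvalue problem at the heart of characteristic mode analysis, X J_n = lambda_n R J_n, can be sketched numerically. The matrices below are random symmetric stand-ins, not actual method-of-moments operators, and the Cholesky reduction assumes R is positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = (A + A.T) / 2                    # symmetric "reactance" stand-in
B = rng.standard_normal((4, 4))
R = B @ B.T + 4.0 * np.eye(4)        # symmetric positive-definite "resistance"

# Reduce X J = lam R J to a standard symmetric problem via Cholesky R = L L^T:
# (L^{-1} X L^{-T}) y = lam y, with J = L^{-T} y.
L = np.linalg.cholesky(R)
M = np.linalg.solve(L, np.linalg.solve(L, X).T).T   # L^{-1} X L^{-T}
lam, Y = np.linalg.eigh(M)           # real eigenvalues, sorted ascending
J = np.linalg.solve(L.T, Y)          # back-transformed "modes"

print(np.allclose(X @ J, R @ J * lam))  # each column satisfies X J_n = lam_n R J_n
```

The combined field formulation of the paper replaces this pencil to avoid internal resonances; the reduction step above only illustrates the generalized eigenproblem structure.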
Aviation security cargo inspection queuing simulation model for material flow and accountability
Olama, Mohammed M; Allgood, Glenn O; Rose, Terri A; Brumback, Daryl L
2009-01-01
Beginning in 2010, the U.S. will require that all cargo loaded in passenger aircraft be inspected. This will require more efficient processing of cargo and will have a significant impact on the inspection protocols and business practices of government agencies and the airlines. In this paper, we develop an aviation security cargo inspection queuing simulation model for material flow and accountability that will allow cargo managers to conduct impact studies of current and proposed business practices as they relate to inspection procedures, material flow, and accountability.
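The kind of queuing simulation the abstract describes can be sketched as a tiny discrete-event model of parallel inspection stations. The rates, station count, and function name are invented for illustration; the real model would capture inspection protocols and accountability in far more detail:

```python
import heapq, random

def simulate_inspection(n_items=50_000, rate_in=1.8, rate_serve=1.0, stations=2, seed=7):
    """Average waiting time of cargo items at parallel FIFO inspection stations (M/M/c)."""
    rng = random.Random(seed)
    free_at = [0.0] * stations         # earliest time each station becomes free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_items):
        t += rng.expovariate(rate_in)          # Poisson arrivals
        station_free = heapq.heappop(free_at)  # earliest available station
        start = max(t, station_free)           # wait if all stations are busy
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(rate_serve))
    return total_wait / n_items

print(simulate_inspection())
```

At these assumed rates utilization is 90%, so mean waits are long; impact studies of the kind the paper describes would vary the arrival rate and station count and compare such outputs.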
Instanton moduli spaces and bases in coset conformal field theory
A. A. Belavin; M. A. Bershtein; B. L. Feigin; A. V. Litvinov; G. M. Tarnopolsky
2012-05-28
The recently proposed relation between conformal field theories in two dimensions and supersymmetric gauge theories in four dimensions predicts the existence of a distinguished basis in the space of local fields in the CFT. This basis has a number of remarkable properties; one of them is the complete factorization of the coefficients of the operator product expansion. We consider a particular case of the U(r) gauge theory on C^2/Z_p, which corresponds to a certain coset conformal field theory, and describe the properties of this basis. We argue that in the case p=2, r=2 there exist different bases. We give an explicit construction of one of them. For the other basis we propose a formula for the matrix elements.
ARTICLE Communicated by Misha Tsodyks Spike-Driven Synaptic Plasticity: Theory, Simulation, VLSI
Columbia University
The dynamics of the single synapse is studied analytically by extending the solution of a classic problem in queuing theory (the Takács process). Yet stochastic learning theory ensures that this does not affect the collective behavior of the network.
Modeling and Performance Analysis of Communication Networks
Xie,Jiang (Linda)
The underlying principles of computer systems analysis (which are based on queuing theory) will be studied, and analytical methods based on queuing theory will be used to study the behavior of communication networks. The following topics will be covered as time permits: probability theory refresher, overview of queuing systems, random processes.
Nami, Mohammad Rahim
2013-01-01
Summary In this article, a new higher order shear deformation theory based on trigonometric shear deformation theory is developed. In order to consider the size effects, the nonlocal elasticity theory is used. An analytical method is adopted to solve the governing equations for static analysis of simply supported nanoplates. In the present theory, the transverse shear stresses satisfy the traction free boundary conditions of the rectangular plates and these stresses can be calculated from the constitutive equations. The effects of different parameters such as nonlocal parameter and aspect ratio are investigated on both nondimensional deflections and deflection ratios. It may be important to mention that the present formulations are general and can be used for isotropic, orthotropic and anisotropic nanoplates. PMID:24455455
Autofocus Imaging: Image reconstruction based on inverse scattering theory
Snieder, Roel
Jyoti Behura, Kees Wapenaar, and Roel Snieder. Abstract: Conventional imaging algorithms assume single scattering and therefore cannot image multiply scattered waves correctly. The multiply scattered events in the data are im…
RAMANUJAN'S THEORIES OF ELLIPTIC FUNCTIONS TO ALTERNATIVE BASES
Garvan, Frank
Bruce C. Berndt, S. Bhargava, and Frank G. Garvan. Contents: 1. Introduction; 2. Ramanujan's Cubic Transformation, the Borweins' Cubic Theta…; Hypergeometric Transformations; 13. Concluding Remarks. From the Introduction: In his famous paper [Ramanujan1…
A Memory-Based Theory of Verbal Cognition
ERIC Educational Resources Information Center
Dennis, Simon
2005-01-01
The syntagmatic paradigmatic model is a distributed, memory-based account of verbal processing. Built on a Bayesian interpretation of string edit theory, it characterizes the control of verbal cognition as the retrieval of sets of syntagmatic and paradigmatic constraints from sequential and relational long-term memory and the resolution of these…
Toward an Instructionally Oriented Theory of Example-Based Learning
ERIC Educational Resources Information Center
Renkl, Alexander
2014-01-01
Learning from examples is a very effective means of initial cognitive skill acquisition. There is an enormous body of research on the specifics of this learning method. This article presents an instructionally oriented theory of example-based learning that integrates theoretical assumptions and findings from three research areas: learning from…
What Gets Recycled: An Information Theory Based Model for
Gutowski, Timothy
Jeffrey B. Dah… This work focuses on developing a concise representation of the material recycling potential for products at end of life, for the two different applications. Cost estimates for product recycling systems are developed using Shannon…
A Natural Teaching Method Based on Learning Theory.
ERIC Educational Resources Information Center
Smilkstein, Rita
1991-01-01
The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…
THEORY OF VALENCE TRANSITIONS IN YTTERBIUM-BASED COMPOUNDS
Freericks, Jim
V. Zlatić and J. K. Freericks. Introduction: The intermetallic compounds of the YbInCu4 family exhibit an isostructural transition from a high-temperature phase… The behavior of YbInCu4 and similar compounds is modeled by the exact solution of the spin one-half Falicov…
Theory-Based Considerations Influence the Interpretation of Generic Sentences
ERIC Educational Resources Information Center
Cimpian, Andrei; Gelman, Susan A.; Brandone, Amanda C.
2010-01-01
Under what circumstances do people agree that a kind-referring generic sentence (e.g., "Swans are beautiful") is true? We hypothesised that theory-based considerations are sufficient, independently of prevalence/frequency information, to lead to acceptance of a generic statement. To provide evidence for this general point, we focused on…
Chemical Organization Theory as a Theoretical Base for Chemical Computing
Dittrich, Peter
Naoki Matsumaru, Florian … (07743 Jena, Germany; http://www.minet.uni-jena.de/csb/). Submitted 14 November 2005. In chemical computing, when programming chemical systems, a theoretical method to cope with emergent behavior is desired.
Qualitative model-based diagnosis using possibility theory
NASA Technical Reports Server (NTRS)
Joslyn, Cliff
1994-01-01
The potential for the use of possibility in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.
Evolutionary game theory using agent-based methods
Adami, Christoph; Hintze, Arend
2014-01-01
Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic situations (finite populations, non-vanishing mutation rates, communication between agents, and spatial interactions) require agent-based methods, where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. Here we discuss the use of agent-based methods in evolutionary game theory and contrast standard results to those obtainable by a mathematical treatment. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread, but that mathematics is crucial...
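The contrast the authors draw can be illustrated with a minimal agent-based hawk-dove simulation. All parameters below are assumptions; the analytic mixed equilibrium predicts a hawk share of V/C, which the agent population should hover around:

```python
import random

def hawk_dove_abm(N=1000, V=2.0, C=4.0, generations=300, mu=0.01, seed=3):
    """Agent-based hawk-dove game; compare the result to the analytic ESS share V/C."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(N)]       # True = hawk, False = dove
    def payoff(me, other):
        if me and other: return (V - C) / 2            # hawk vs hawk: costly fight
        if me: return V                                # hawk vs dove: takes the resource
        if other: return 0.0                           # dove vs hawk: retreats
        return V / 2                                   # dove vs dove: shares
    for _ in range(generations):
        rng.shuffle(pop)
        scored = []
        for i in range(0, N, 2):                       # random pairwise contests
            a, b = pop[i], pop[i + 1]
            scored.append((a, payoff(a, b)))
            scored.append((b, payoff(b, a)))
        shift = 1.0 - min(f for _, f in scored)        # make all weights positive
        pop = rng.choices([s for s, _ in scored],
                          weights=[f + shift for _, f in scored], k=N)
        pop = [not s if rng.random() < mu else s for s in pop]   # mutation
    return sum(pop) / N

print(hawk_dove_abm())  # near the ESS prediction V/C = 0.5
```

Fitness-proportional reproduction with mutation stands in for the genetic encoding the abstract mentions; a replicator-equation treatment of the same payoffs yields the V/C equilibrium analytically.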
Biophysics of risk aversion based on neurotransmitter receptor theory
Takahashi, Taiki
2011-01-01
Decision under risk and uncertainty has been attracting attention in neuroeconomics and neuroendocrinology of decision-making. This paper demonstrated that the neurotransmitter receptor theory-based value (utility) function can account for human and animal risk-taking behavior. The theory predicts that (i) when dopaminergic neuronal response is efficiently coupled to the formation of ligand-receptor complex, subjects are risk-aversive (irrespective of their satisfaction level) and (ii) when the coupling is inefficient, subjects are risk-seeking at low satisfaction levels, consistent with risk-sensitive foraging theory in ecology. It is further suggested that some anomalies in decision under risk are due to inefficiency of the coupling between dopamine receptor activation and neuronal response. Future directions in the application of the model to studies in neuroeconomics of addiction and neuroendocrine modulation of risk-taking behavior are discussed.
Discrete-time system optimal dynamic traffic assignment (SO-DTA) with partial control
We consider the System Optimal Dynamic Traffic Assignment problem with Partial Control (SO-DTA-PC) for general networks with horizontal queuing, solved using the discrete adjoint method. The introduction gives a dynamic traffic assignment overview.
Using Discrete Event Simulation to Model Multi-Robot Multi-Operator Teamwork
Cummings, Mary "Missy"
The teamwork of multi-robot and multi-operator teams can be modeled based on queuing theory as well. Earlier work applied queuing theory to model the operator utilization of air traffic controllers, and Divita et al. (2004) modeled a team performing supervisory control of an air defense warfare system using queuing theory.
Reduce fluctuations in capacity to improve the accessibility of radiotherapy treatment
Boucherie, Richard J.
The goal is to find a cost-effective way to meet future throughput targets. Using a combination of queuing theory and discrete event simulation, the bottleneck turned out to be the outpatient department (OPD). Next, based on queuing theory, waiting times were improved. Keywords: discrete event simulation; queuing theory; radiotherapy department; variability; waiting lists. (P. E. Joustra et al.)
Enabling what-if explorations in ENO THERESKA
This work serves as a bridge between research in theory (e.g., queuing theory and statistical theory) and research in systems. Such behavior can be a show-stopper for models based on queuing analysis, yet this dissertation shows that queuing models can attribute resource demands to the correct workloads.
Infrared small target detection based on Danger Theory
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Yang, Xiao
2009-11-01
To solve the problem that traditional methods cannot detect small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model to detect infrared small targets is presented in this paper. First, by analogy with immunology, definitions are given for such terms as danger signal, antigen, APC, and antibody, and the matching rule between antigen and antibody is improved. Prior to training the detection model and detecting the targets, the IR images are processed with an adaptive smoothing filter to decrease stochastic noise. Then, during training, a deletion rule, a generation rule, a crossover rule, and a mutation rule are established after a large number of experiments in order to achieve fast convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect targets whose local SNR is only 1.5.
Ensemble method: Community detection based on game theory
NASA Astrophysics Data System (ADS)
Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.
2014-08-01
Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The choice of a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network: different algorithms have different advantages and depend on tuning specific parameters, so the chosen methodology can affect the outcome of the experiments. In this paper, we propose a community division model based on game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification. Experiments on standard datasets verify that our game theory-based community detection model is valid and performs better.
A Scalable Cloud-based Queueing Service with Improved Consistency Levels
Kim, Minkyong
Han Chen, Fan Ye, Minkyong Kim, and Hui Lei (IBM T.J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532). As cloud computing continues to gain traction, a number of vendors currently operate cloud-based shared queuing services.
Infrared Image Simulation Based On Statistical Learning Theory
NASA Astrophysics Data System (ADS)
Chaochao, Huang; Xiaodi, Wu; Wuqin, Tong
2007-12-01
A real-time simulation algorithm for infrared images based on statistical learning theory is presented. The method comprises three steps: acquiring the training sample, forecasting the scene temperature field with a statistical learning machine, and processing and analyzing the temperature field data. The simulation results show that this algorithm, based on ?-support vector regression, has better maneuverability and generalization than other methods, and that the simulation precision and real-time performance are satisfactory.
Gohm, Rolf
Quantum Control: Approach based on Scattering Theory for Non-commutative Markov Chains. The proposal applies scattering theory to problems in the rapidly developing interdisciplinary field of quantum control, seeking to strengthen the links between these fields and to combine them in a new way toward developing a systematic theory of quantum control.
Waiting patterns for a printer D. Merlini, R. Sprugnoli, M. C. Verri
Merlini, Donatella
Probabilistic reasoning is used to produce analyses based on queuing theory (see, e.g., [15]). Service can be realized by queuing the processes according to their arrival time and by using a FIFO algorithm.
Waiting patterns for a printer (Extended Abstract)
Merlini, Donatella
Probabilistic reasoning is used to produce analyses based on queuing theory (see, e.g., [11]). Service can be realised by queuing the processes according to their arrival time and by using a FIFO algorithm.
Modeling the emergency cardiac in-patient flow: an application of queuing theory.
de Bruin, Arnoud M; van Rossum, A C; Visser, M C; Koole, G M
2007-06-01
This study investigates the bottlenecks in the emergency care chain of cardiac in-patient flow. The primary goal is to determine the optimal bed allocation over the care chain given a maximum number of refused admissions. Another objective is to provide deeper insight into the relation between natural variation in arrivals and length of stay (LOS) and occupancy rates. Hospital management's strong focus on raising occupancy rates is unrealistic and counterproductive; economies of scale cannot be neglected. An important result is that refused admissions at the First Cardiac Aid (FCA) are primarily caused by unavailability of beds downstream in the care chain. Both variability in LOS and fluctuations in arrivals result in large workload variations. Techniques from operations research were successfully used to describe the complexity and dynamics of emergency in-patient flow. PMID:17608054
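The economies-of-scale point the authors make can be illustrated with the classical Erlang loss (M/M/c/c) model, where blocked arrivals correspond to refused admissions. The bed counts and the 80% occupancy target below are assumptions for illustration, not the paper's data:

```python
def erlang_b(beds, offered_load):
    """Erlang-B blocking probability: fraction of arrivals refused when all beds are full.
    Computed with the standard numerically stable recursion."""
    b = 1.0  # blocking with zero beds
    for c in range(1, beds + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Same 80%-of-capacity offered load, different unit sizes:
for beds in (10, 20, 40):
    load = 0.8 * beds  # offered load equal to 80% of bed capacity
    print(beds, round(erlang_b(beds, load), 3))
```

The printed blocking probabilities fall as the unit grows even though the relative load is fixed, which is why merging small wards (or fixing a downstream bottleneck) reduces refused admissions more than raising target occupancy does.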
Research on the Optimization Method of Maintenance Support Unit Configuration with Queuing Theory
NASA Astrophysics Data System (ADS)
Zhang, Bo; Xu, Ying; Dong, Yue; Hou, Na; Yu, Yongli
Beginning with the concept of a maintenance support unit, the maintenance support flow is analyzed so as to establish the relation between the mean time to repair of damaged equipment and the number of maintenance support units. On that basis, a maintenance support unit configuration optimization model is formulated to minimize the cost of maintenance support resources, and its solution is given. The process is illustrated at the end with an example.
Congestion at Card and Book Catalogs--A Queuing-Theory Approach
ERIC Educational Resources Information Center
Bookstein, Abraham
1972-01-01
This paper attempts to analyze the problem of congestion, using a mathematical model shown to be of value in other similar applications. Three criteria of congestion are considered, and it is found that the conclusion one can draw is sensitive to which of these criteria is paramount. (8 references) (Author/NH)
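The abstract does not spell out its three congestion criteria; as a hedged illustration, the standard M/M/1 measures (probability of waiting, mean queue length, mean waiting time) show the kind of quantities such an analysis of patrons at a catalog produces, and how each could be taken as "the" congestion criterion:

```python
def mm1_congestion(arrival_rate, service_rate):
    """Standard M/M/1 congestion measures (illustrative; not the paper's own criteria)."""
    rho = arrival_rate / service_rate  # utilization
    if rho >= 1:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return {
        "prob_wait": rho,                                  # chance an arriving patron must wait
        "mean_queue_length": rho ** 2 / (1 - rho),         # Lq: patrons queued at the catalog
        "mean_wait": rho / (service_rate - arrival_rate),  # Wq: time spent waiting
    }

# Patrons arriving every 2 minutes on average, each catalog use taking 1 minute:
print(mm1_congestion(arrival_rate=0.5, service_rate=1.0))
```

Because the three measures grow at different speeds as utilization rises, a conclusion about splitting a dictionary catalog into separate files can indeed depend on which criterion is taken as paramount.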
Congestion at Card and Book Catalogs--A Queuing Theory Approach.
ERIC Educational Resources Information Center
Bookstein, Abraham
The question of whether a library's catalog should consist of cards arranged in a single alphabetical order (the "dictionary" catalog) or be segregated into separate files is discussed. The development is extended to encompass related problems involved in the creation of a book catalog. A model to study the effects of congestion at the catalog is…
An application of queuing theory to SIS and SEIS epidemic models.
Hernandez-Suarez, Carlos M; Castillo-Chavez, Carlos; Lopez, Osval Montesinos; Hernandez-Cuevas, Karla
2010-10-01
In this work we consider every individual of a population to be a server whose state can be either busy (infected) or idle (susceptible). This server approach allows one to consider a general distribution for the duration of the infectious state, instead of being restricted to exponential distributions. To achieve this we first derive new approximations to the quasistationary distribution (QSD) of SIS (Susceptible-Infected-Susceptible) and SEIS (Susceptible-Latent-Infected-Susceptible) stochastic epidemic models. We give an expression that relates the basic reproductive number, R0, and the server utilization, p. PMID:21077709
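The server analogy can be sketched with a small Gillespie simulation of the SIS model, treating each infected individual as a busy server. Parameters are invented for illustration, and the time-averaged infected fraction is compared with the endemic-equilibrium "utilization" 1 - 1/R0 (for exponential infectious periods; the paper's point is that the server view generalizes beyond this case):

```python
import random

def sis_gillespie(N=500, beta=2.0, gamma=1.0, t_max=200.0, burn_in=50.0, seed=1):
    """Stochastic SIS epidemic; returns the time-averaged number infected after burn-in."""
    rng = random.Random(seed)
    I, t = 5, 0.0
    area, measured = 0.0, 0.0
    while t < t_max and I > 0:
        rate_inf = beta * I * (N - I) / N    # new infections ("service requests")
        rate_rec = gamma * I                 # recoveries ("service completions")
        total = rate_inf + rate_rec
        dt = rng.expovariate(total)
        if t > burn_in:                      # accumulate time-average past burn-in
            span = min(dt, t_max - t)
            area += I * span
            measured += span
        t += dt
        I += 1 if rng.random() * total < rate_inf else -1
    return area / measured if measured else 0.0

R0 = 2.0  # beta / gamma
print(sis_gillespie() / 500, 1 - 1 / R0)  # simulated infected fraction vs 1 - 1/R0
```

Here R0 plays the role of offered load per server, and the quasistationary infected fraction plays the role of server utilization, which is the correspondence the abstract refers to.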
An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory
Komashie, Alexander; Mousavi, Ali; Clarkson, P. John; Young, Terry
2015-02-06
and Peterborough NHS Foundation Trust. We also give special thanks to the following persons for their various contributions to this work: Justin Gore for being a co-supervisor of the project; Professors Lorraine De Souza of Brunel University and Janet Smart…
Symmetries and Conservation Laws in Histories-Based Theories
NASA Astrophysics Data System (ADS)
Dass, Tulsi; Joglekar, Yogesh N.
2001-02-01
Symmetries are defined in histories-based theories, paying special attention to the class of history theories admitting quasi-temporal structure (a generalization of the concept of "temporal sequences" of "events" using partial semigroups) and logic structure for "single-time histories." Symmetries are classified into orthochronous (those preserving the "temporal order" of events) and nonorthochronous. A straightforward criterion for the physical equivalence of histories is formulated in terms of orthochronous symmetries; this criterion covers various notions of physical equivalence of histories considered by Gell-Mann and Hartle (1990, in "Complexity, Entropy, and the Physics of Information" (W. Zurek, Ed.), SFI Studies in the Science of Complexity, Vol. 8, p. 425, Addison-Wesley, Reading, MA) as special cases. In familiar situations, a reciprocal relationship between traditional symmetries (Wigner symmetries in quantum mechanics and Borel-measurable transformations of phase space in classical mechanics) and symmetries defined in this work is established. In a restricted class of theories, definition of a conservation law is given in the history language which agrees with the standard ones in familiar situations; in a smaller subclass of theories, a Noether-type theorem (implying a connection between continuous symmetries of dynamics and conservation laws) is proved. The formalism evolved is applied to histories (of particles, fields, or more general objects) in general curved spacetimes. Sharpening the definition of symmetry so as to include a continuity requirement, it is shown that a symmetry in our formalism implies a conformal isometry of the spacetime metric.
Improved virtual queuing and dynamic EPD techniques for TCP over ATM
Wu, Y.; Siu, K.Y.; Ren, W.
1998-11-01
It is known that TCP throughput can degrade significantly over UBR service in a congested ATM network, and the early packet discard (EPD) technique has been proposed to improve the performance. However, recent studies show that EPD cannot ensure fairness among competing VCs in a congested network, but the degree of fairness can be improved using various forms of fair buffer allocation techniques. The authors propose an improved scheme that utilizes only a single shared FIFO queue for all VCs and admits simple implementation for high speed ATM networks. The scheme achieves nearly perfect fairness and throughput among multiple TCP connections, comparable to the expensive per-VC queuing technique. Analytical and simulation results are presented to show the validity of this new scheme and significant improvement in performance as compared with existing fair buffer allocation techniques for TCP over ATM.
NASA Technical Reports Server (NTRS)
Long, Dou; Lee, David; Johnson, Jesse; Gaier, Eric; Kostiuk, Peter
1999-01-01
This report describes an integrated model of air traffic management (ATM) tools under development in two National Aeronautics and Space Administration (NASA) programs: Terminal Area Productivity (TAP) and Advanced Air Transport Technologies (AATT). The model is made by adjusting parameters of LMINET, a queuing network model of the National Airspace System (NAS), which the Logistics Management Institute (LMI) developed for NASA. Operating LMINET with models of various combinations of TAP and AATT tools will give quantitative information about the effects of the tools on operations of the NAS. The costs of delays under different scenarios are calculated. An extension of the Air Carrier Investment Model (ACIM), developed under ASAC by the Institute for NASA, maps the technologies' impacts on NAS operations into cross-comparable benefits estimates for technologies and sets of technologies.
Control theory based airfoil design using the Euler equations
NASA Technical Reports Server (NTRS)
Jameson, Antony; Reuther, James
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two-dimensions by procedures that can readily be generalized to treat complex shapes in three-dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and drag minimization problem.
Quantum Game Theory Based on the Schmidt Decomposition
Tsubasa Ichikawa; Izumi Tsutsui; Taksu Cheon
2013-01-30
We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt is examined. We find that entanglement transforms these dilemmas into one another but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
Effects of gauge theory based number scaling on geometry
Paul Benioff
2013-06-19
Effects of local availability of mathematics (LAM) and space time dependent number scaling on physics and, especially, geometry are described. LAM assumes separate mathematical systems as structures at each space time point. Extension of gauge theories to include freedom of choice of scaling for number structures, and other structures based on numbers, results in a space time dependent scaling factor based on a scalar boson field. Scaling has no effect on comparison of experimental results with one another or with theory computations. With LAM all theory expressions are elements of mathematics at some reference point. Changing the reference point introduces (external) scaling. Theory expressions with integrals or derivatives over space or time include scaling factors (internal scaling) that cannot be removed by reference point change. Line elements and path lengths, as integrals over space and/or time, show the effect of scaling on geometry. In one example, the scaling factor goes to 0 as the time goes to 0, the big bang time. All path lengths, and values of physical quantities, are crushed to 0 as $t$ goes to 0. Other examples have spherically symmetric scaling factors about some point, $x.$ In one type, a black scaling hole, the scaling factor goes to infinity as the distance, $d$, between any point $y$ and $x$ goes to 0. For scaling white holes, the scaling factor goes to 0 as $d$ goes to 0. For black scaling holes, path lengths from a reference point, $z$, to $y$ become infinite as $y$ approaches $x.$ For white holes, path lengths approach a value much less than the unscaled distance from $z$ to $x.$
Wu, Hongyi
This paper focuses on the performance evaluation of DFT-MSN, with emphasis on its delay/fault tolerability. A queuing model is first introduced and analyzed by using Jackson network theory; while the queuing model is based on a few simplifying assumptions, it captures information gathering, analysis, queuing behavior, and transmission overhead.
Ramp Controls Analysis by Gap-Acceptance and Queuing Models
Tian, Zong Z.; Wu, Ning
Models based on gap-acceptance and queuing theory are proposed to address the impacts of the three types of ramp control on freeway operations.
Differential Pricing for Differentiated Services
Liu, Yuhong; Petr, David W.
Kansas, University of
Beyond queuing disciplines such as priority queuing, class-based queuing, and Weighted Fair Queuing (WFQ), the Internet also needs a pricing scheme. DaSilva et al. adopt a game-theoretic approach to analyze the existence and uniqueness of equilibria; following this line, we adopt a game theory approach to address the problem, modeling the system as a game among all users.
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, the land owners on each river bank tend to optimize their levees independently with risk-based analysis, resulting in a levee system design that is Pareto-inefficient from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk that guarantees improved conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
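The Pareto gap between the Nash equilibrium and the social planner's optimum described in this abstract can be illustrated with a toy best-response computation. The cost functions below (construction cost linear in levee height; expected damage falling with own height and rising with the opposite levee's height, reflecting transferred flood risk) are invented for illustration and are not the paper's model:

```python
import itertools

def total_cost(own_height, other_height, construction_rate=1.0, damage_base=10.0):
    """Illustrative annual expected total cost for one land owner:
    annualized construction cost plus expected annual damage."""
    construction = construction_rate * own_height
    expected_damage = damage_base / (1.0 + own_height) * (1.0 + 0.5 * other_height)
    return construction + expected_damage

def best_response(other_height, heights):
    return min(heights, key=lambda h: total_cost(h, other_height))

heights = [h / 10 for h in range(0, 51)]  # candidate levee heights

# Non-cooperative Nash equilibrium: iterate best responses until stable.
a, b = 0.0, 0.0
for _ in range(100):
    a2, b2 = best_response(b, heights), best_response(a, heights)
    if (a2, b2) == (a, b):
        break
    a, b = a2, b2

# Social planner: minimize the combined cost of both owners.
social = min(itertools.product(heights, repeat=2),
             key=lambda p: total_cost(p[0], p[1]) + total_cost(p[1], p[0]))

nash_total = total_cost(a, b) + total_cost(b, a)
social_total = total_cost(social[0], social[1]) + total_cost(social[1], social[0])
# nash_total >= social_total: independent optimization over-builds both levees.
```

With these made-up numbers the owners over-invest at equilibrium relative to the planner's symmetric optimum, which is the inefficiency the paper's cooperative-game treatment is meant to repair.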
Forewarning model for water pollution risk based on Bayes theory.
Zhao, Jun; Jin, Juliang; Guo, Qizhong; Chen, Yaqian; Lu, Mengxiong; Tinoco, Luis
2014-02-01
In order to reduce losses from water pollution, a forewarning model for water pollution risk based on Bayes theory was studied. The model is built upon risk indexes of the complex system, proceeding from the whole structure and its components. Principal components analysis is used to screen the index system, and a hydrological model is employed to simulate index values according to the prediction principle. Bayes theory is adopted to obtain the posterior distribution from the prior distribution and sample information, so that the samples' features better reflect and represent the totals. The forewarning level is judged by the maximum-probability rule, and management strategies are then proposed from local conditions so as to downgrade heavy warnings. This study takes the Taihu Basin as an example. After application and verification of the model for water pollution risk against actual and simulated data from 2000 to 2009, the forewarning level in 2010 is given as a severe warning, which coincides well with the logistic curve. The model is shown to be rigorous in theory, flexible in method, reasonable in result, and simple in structure, with strong logical superiority and regional adaptability, providing a new way to warn of water pollution risk. PMID:24194413
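The posterior-update and maximum-probability steps described in this abstract can be sketched in a few lines. All warning levels, priors, and likelihoods below are hypothetical numbers, not values from the study:

```python
# Hypothetical warning levels with a prior over levels and the likelihood
# of the observed risk-index value under each level.
levels = ["light", "moderate", "heavy", "severe"]
prior = {"light": 0.4, "moderate": 0.3, "heavy": 0.2, "severe": 0.1}
likelihood = {"light": 0.05, "moderate": 0.15, "heavy": 0.35, "severe": 0.45}

# Bayes rule: posterior ∝ prior × likelihood, normalized by the evidence.
evidence = sum(prior[l] * likelihood[l] for l in levels)
posterior = {l: prior[l] * likelihood[l] / evidence for l in levels}

# Maximum-probability rule: issue the most probable warning level.
warning = max(levels, key=posterior.get)  # → "heavy" for these numbers
```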
Feature selection with neighborhood entropy-based cooperative game theory.
Zeng, Kai; She, Kun; Niu, Xinzheng
2014-01-01
Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods ignore features that have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. The neighborhood entropy-based feature contribution is then proposed under the framework of cooperative games. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones. PMID:25276120
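The cooperative-game idea of crediting features that are weak alone but strong as a group can be illustrated with an exact Shapley-value computation. The paper's actual contribution measure is based on neighborhood entropy; the toy coalition value function below is made up purely to show the mechanism:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley value of each feature for a coalition value function
    `value`: set of features -> classification-ability score."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                s = set(coal)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy value function: A and B are individually weak but strong together.
def value(s):
    if {"A", "B"} <= s:
        return 1.0
    return 0.1 * len(s)

phi = shapley_values(["A", "B", "C"], value)
# phi credits A and B far more than C, even though no single feature
# is strong on its own; the values sum to the grand-coalition value.
```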
Risk Matrix Integrating Risk Attitudes Based on Utility Theory.
Ruan, Xin; Yin, Zhiyi; Frangopol, Dan M
2015-08-01
Recent studies indicate that the absence of consideration of decision-makers' risk attitudes in the risk matrix establishment process has become a major limitation. In order to evaluate risk in a more comprehensive manner, an approach to establish risk matrices that integrates risk attitudes based on utility theory is proposed. There are three main steps within this approach: (1) describing the risk attitudes of decision-makers by utility functions, (2) bridging the gap between utility functions and the risk matrix by utility indifference curves, and (3) discretizing the utility indifference curves. A complete risk matrix establishment process based on practical investigations is introduced. This process utilizes decision-makers' answers to questionnaires to formulate the boundary values required for risk matrix establishment and the utility functions that quantify their respective risk attitudes. PMID:25958890
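A minimal sketch of step (3): mapping a probability-consequence pair to a matrix level by discretizing curves of constant expected disutility. The power-law utility and the band boundaries here are assumptions for illustration, not the paper's elicited values:

```python
def utility(consequence, gamma=2.0):
    """Illustrative utility of a normalized consequence in [0, 1].
    gamma > 1 weights large consequences disproportionately (risk-averse)."""
    return consequence ** gamma

def risk_cell(p, consequence, boundaries=(0.05, 0.2, 0.5)):
    """Map (probability, consequence) to a matrix level by which band of
    expected disutility p * u(c) the pair falls into; the boundaries play
    the role of discretized utility indifference curves."""
    score = p * utility(consequence)
    level = sum(score > b for b in boundaries)  # 0 = Low ... 3 = Extreme
    return ["Low", "Medium", "High", "Extreme"][level]
```

With gamma > 1 the rating is asymmetric even when the plain product p × c is equal: risk_cell(0.9, 0.3) rates "Medium" while risk_cell(0.3, 0.9) rates "High", which is exactly the risk-averse behavior the utility function is meant to encode.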
Master equation based steady-state cluster perturbation theory
NASA Astrophysics Data System (ADS)
Nuss, Martin; Dorn, Gerhard; Dorda, Antonius; von der Linden, Wolfgang; Arrigoni, Enrico
2015-09-01
A simple and efficient approximation scheme to study electronic transport characteristics of strongly correlated nanodevices, molecular junctions, or heterostructures out of equilibrium is provided by steady-state cluster perturbation theory. In this work, we improve the starting point of this perturbative, nonequilibrium Green's function based method. Specifically, we employ an improved unperturbed (so-called reference) state ρ̂_S, constructed as the steady state of a quantum master equation within the Born-Markov approximation. This resulting hybrid method inherits beneficial aspects of both the quantum master equation as well as the nonequilibrium Green's function technique. We benchmark this scheme on two experimentally relevant systems in the single-electron transistor regime: an electron-electron interaction based quantum diode and a triple quantum dot ring junction, which both feature negative differential conductance. The results of this method improve significantly with respect to the plain quantum master equation treatment at modest additional computational cost.
SARA: A Text-Based and Reader-Based Theory of Signaling
ERIC Educational Resources Information Center
Lemarie, Julie; Lorch, Robert F., Jr.; Eyrolle, Helene; Virbel, Jacques
2008-01-01
We propose a two-component theory of text signaling devices. The first component is a text-based analysis that characterizes any signaling device along four dimensions: (a) the type of information it makes available, (b) its scope, (c) how it is realized in the text, and (d) its location with respect to the content it cues. The second component is…
Investigating the Learning-Theory Foundations of Game-Based Learning: A Meta-Analysis
ERIC Educational Resources Information Center
Wu, W-H.; Hsiao, H-C.; Wu, P-L.; Lin, C-H.; Huang, S-H.
2012-01-01
Past studies on the issue of learning-theory foundations in game-based learning stressed the importance of establishing learning-theory foundation and provided an exploratory examination of established learning theories. However, we found research seldom addressed the development of the use or failure to use learning-theory foundations and…
Cassandras, Christos G.
Traditional traffic models are largely based on queuing systems, and performance analysis is provided through analytical techniques from classical queuing theory. However, the huge traffic volume that networks are supporting today calls for models that serve either as approximations to queuing-based models or as primary models in their own right.
Keywords: network processors, queuing theory, optimal bandwidth allocation. The work develops an optimal bandwidth allocation strategy using a queuing network for NP-based architectures at the system level, encompassing a new formula based on the optimal capacity allocation concept.
A molecularly based theory for electron transfer reorganization energy.
Zhuang, Bilin; Wang, Zhen-Gang
2015-12-14
Using field-theoretic techniques, we develop a molecularly based dipolar self-consistent-field theory (DSCFT) for charge solvation in pure solvents under equilibrium and nonequilibrium conditions and apply it to the reorganization energy of electron transfer reactions. The DSCFT uses a set of molecular parameters, such as the solvent molecule's permanent dipole moment and polarizability, thus avoiding approximations that are inherent in treating the solvent as a linear dielectric medium. A simple, analytical expression for the free energy is obtained in terms of the equilibrium and nonequilibrium electrostatic potential profiles and electric susceptibilities, which are obtained by solving a set of self-consistent equations. With no adjustable parameters, the DSCFT predicts activation energies and reorganization energies in good agreement with previous experiments and calculations for the electron transfer between metallic ions. Because the DSCFT is able to describe the properties of the solvent in the immediate vicinity of the charges, it is unnecessary to distinguish between the inner-sphere and outer-sphere solvent molecules in the calculation of the reorganization energy as in previous work. Furthermore, examining the nonequilibrium free energy surfaces of electron transfer, we find that the nonequilibrium free energy is well approximated by a double parabola for self-exchange reactions, but the curvature of the nonequilibrium free energy surface depends on the charges of the electron-transferring species, contrary to the prediction by the linear dielectric theory. PMID:26671385
INTEGRATION OF BOUNDARY FINDING AND REGION BASED SEGMENTATION USING GAME THEORY
Duncan, James S.
This work integrates region-based segmentation and gradient-based boundary finding using game theory, in an effort to form a unified approach that is robust where either method alone is at best suboptimal. One of the biggest advantages of using game theory is that it can ...
Bertelsen, Olav W.
ATIT 2004: Proceedings of the First International Workshop on Activity Theory Based Practical Methods for IT Design. The aim of the workshop was to discuss and refine methods for IT design based on activity theory, and thereby stimulate their evolution. Each contribution comprises a hands-on description for practitioners and a short paper reflecting on the method, its basis in activity theory, its history, and its use.
Time-dependent density functional theory based Ehrenfest dynamics.
Wang, Fan; Yam, Chi Yung; Hu, LiHong; Chen, GuanHua
2011-07-28
Time-dependent density functional theory based Ehrenfest dynamics with atom-centered basis functions is developed in the present work. The equation of motion for electrons is formulated in terms of the first-order reduced density matrix, and an additional term arises due to the time dependence of the basis functions through their dependence on the nuclear coordinates. This time dependence of the basis functions, together with the imaginary part of the density matrix, leads to an additional term in the nuclear force. The effects of the two additional terms are examined by studying the dynamics of H2 and C2H4, and it is concluded that the inclusion of these two terms is essential for correct electronic and nuclear dynamics. PMID:21806109
Intelligent control based on fuzzy logic and neural net theory
NASA Technical Reports Server (NTRS)
Lee, Chuen-Chien
1991-01-01
In the conception and design of intelligent systems, one promising direction involves the use of fuzzy logic and neural network theory to enhance such systems' capability to learn from experience and adapt to changes in an environment of uncertainty and imprecision. Here, an intelligent control scheme is explored by integrating these multidisciplinary techniques. A self-learning system is proposed as an intelligent controller for dynamical processes, employing a control policy which evolves and improves automatically. One key component of the intelligent system is a fuzzy logic-based system which emulates human decision making behavior. It is shown that the system can solve a fairly difficult control learning problem. Simulation results demonstrate that improved learning performance can be achieved in relation to previously described systems employing bang-bang control. The proposed system is relatively insensitive to variations in the parameters of the system environment.
Correlated digital back propagation based on perturbation theory.
Liang, Xiaojun; Kumar, Shiva
2015-06-01
We studied a simplified digital back propagation (DBP) scheme by including the correlation between neighboring signal samples. An analytical expression for calculating the correlation coefficients is derived based on a perturbation theory. In each propagation step, nonlinear distortions due to phase-dependent terms in the perturbative expansion are ignored, which enhances the computational efficiency. The performance of the correlated DBP is evaluated by simulating a single-channel single-polarization fiber-optic system operating at 28 Gbaud, 32-quadrature amplitude modulation (32-QAM), and 40 × 80 km transmission distance. As compared to standard DBP, correlated DBP reduces the total number of propagation steps by a factor of 10 without performance penalty. Correlated DBP with only 2 steps per link provides about one dB improvement in Q-factor over linear compensation. PMID:26072825
Theory for a gas composition sensor based on acoustic properties
NASA Astrophysics Data System (ADS)
Phillips, Scott; Dain, Yefim; Lueptow, Richard M.
2003-01-01
Sound travelling through a gas propagates at different speeds and its intensity attenuates to different degrees depending upon the composition of the gas. Theoretically, a real-time gaseous composition sensor could be based on measuring the sound speed and the acoustic attenuation. To this end, the speed of sound was modelled using standard relations, and the acoustic attenuation was modelled using the theory for vibrational relaxation of gas molecules. The concept for a gas composition sensor is demonstrated theoretically for nitrogen-methane-water and hydrogen-oxygen-water mixtures. For a three-component gas mixture, the measured sound speed and acoustic attenuation each define separate lines in the composition plane of two of the gases. The intersection of the two lines defines the gas composition. It should also be possible to use the concept for mixtures of more than three components, if the nature of the gas composition is known to some extent.
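The intersection-of-two-lines idea for a three-component mixture reduces to a 2 × 2 linear solve: each measurement (sound speed, attenuation) constrains the two free mole fractions linearly near an operating point, and the composition is where the lines cross. The coefficients below are invented for illustration; in practice they would come from the sound-speed and vibrational-relaxation attenuation models:

```python
def solve_composition(a1, b1, r1, a2, b2, r2):
    """Solve the two measurement constraints
        a1*x1 + b1*x2 = r1   (from the sound-speed deviation)
        a2*x1 + b2*x2 = r2   (from the attenuation deviation)
    for the mole fractions (x1, x2) by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        # Parallel lines: the two measurements carry the same information
        # and cannot pin down the composition uniquely.
        raise ValueError("measurements do not intersect uniquely")
    x1 = (r1 * b2 - r2 * b1) / det
    x2 = (a1 * r2 - a2 * r1) / det
    return x1, x2

# Made-up coefficients; the intersection recovers x1 = 0.2, x2 = 0.3.
x1, x2 = solve_composition(2.0, -1.0, 0.1, 1.0, 3.0, 1.1)
```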
Xinjiang resources efficiency based on superior technical theory
NASA Astrophysics Data System (ADS)
Amut, Aniwaer; Li, Zeyuan
2005-09-01
A new concept of resource efficiency in Xinjiang is discussed in this study, based on advanced-technology theory from a policy-making perspective. The analysis focuses on resource advantages in development, resource pressure, resource efficiency, and technical approaches to resource efficiency. An idea of industrialized development centered on resource efficiency is presented, together with its control factors and a basic technical framework for its realization. The framework includes techniques for applying recycled materials; water-saving techniques for efficient use of water resources; biotechnology for higher yield and better quality of farm crops; comprehensive techniques for further processing of farm produce and agricultural industrialization; information technology for information support and an information-oriented society; techniques for transforming resources, including oil and natural gas, mineral products, and wind power; and techniques for desertification control and biological security.
Magri, Giorgio, 1975-
2009-01-01
Part I of this dissertation proposes an implicature-based theory of individual-level predicates. The idea is that we cannot say '#John is sometimes tall' because the sentence triggers the scalar implicature that the ...
A Lie based 4-dimensional higher Chern-Simons theory
Roberto Zucchini
2015-12-18
We present and study a model of 4-dimensional higher Chern-Simons theory, special Chern-Simons (SCS) theory, instances of which have appeared in the string literature, whose symmetry is encoded in a skeletal semistrict Lie 2-algebra constructed from a compact Lie group with non-discrete center. The field content of SCS theory consists of a Lie-valued 2-connection coupled to a background closed 3-form. SCS theory enjoys a large gauge and gauge-for-gauge symmetry organized in an infinite-dimensional strict Lie 2-group. The partition function of SCS theory is simply related to that of a topological gauge theory localizing on flat connections with degree 3 second characteristic class determined by the background 3-form. Finally, SCS theory is related to a 3-dimensional special gauge theory whose 2-connection space has a natural symplectic structure with respect to which the 1-gauge transformation action is Hamiltonian, the 2-curvature map acting as moment map.
Evaluating Theory-Based Evaluation: Information, Norms, and Adherence
ERIC Educational Resources Information Center
Jacobs, W. Jake; Sisco, Melissa; Hill, Dawn; Malter, Frederic; Figueredo, Aurelio Jose
2012-01-01
Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. A marriage of applicable psychological theory, appropriate program evaluation theory, and outcome of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social…
Image integrity authentication scheme based on fixed point theory.
Li, Xu; Sun, Xingming; Liu, Quansheng
2015-02-01
Based on the fixed point theory, this paper proposes a new scheme for image integrity authentication, which is very different from digital signature and fragile watermarking. In the new scheme, the sender transforms an original image into a fixed point image (very close to the original one) of a well-chosen transform and sends the fixed point image (instead of the original one) to the receiver; using the same transform, the receiver checks the integrity of the received image by testing whether it is a fixed point image and locates the tampered areas if the image has been modified during transmission. A realization of the new scheme is based on the Gaussian convolution and deconvolution (GCD) transform, for which an existence theorem of fixed points is proved. The semifragility is analyzed via commutativity of transforms, and three commutativity theorems are found for the GCD transform. Three iterative algorithms are presented for finding a fixed point image within a few iterations, and for the whole procedure of image integrity authentication, a fragile authentication system and a semifragile one are built separately. Experiments show that both systems have good performance in transparence, fragility, security, and tampering localization. In particular, the semifragile system can perfectly resist rotation by a multiple of 90°, flipping, and brightness attacks. PMID:25420259
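The overall protocol, transmitting a fixed point of a transform and re-checking the fixed-point property on receipt, can be sketched on a 1-D signal. The contractive smoothing transform below is a stand-in for illustration, not the paper's GCD transform:

```python
def transform(img):
    """Mild smoothing toward the mean; a contraction, so iterating it
    converges to a fixed point (here, the constant-mean signal)."""
    mean = sum(img) / len(img)
    return [0.9 * v + 0.1 * mean for v in img]

def to_fixed_point(img, tol=1e-9, max_iter=10000):
    """Sender side: iterate the transform until the signal stops changing."""
    for _ in range(max_iter):
        nxt = transform(img)
        if max(abs(a - b) for a, b in zip(img, nxt)) < tol:
            return nxt
        img = nxt
    return img

def is_authentic(img, tol=1e-6):
    """Receiver side: an intact transmission is (numerically) a fixed point."""
    nxt = transform(img)
    return max(abs(a - b) for a, b in zip(img, nxt)) < tol

original = [3.0, 1.0, 4.0, 1.0, 5.0]
sent = to_fixed_point(original)        # the sender transmits this image
authentic_ok = is_authentic(sent)      # intact copy passes the check

tampered = sent[:]
tampered[2] += 0.5                     # modification during transmission
tamper_detected = not is_authentic(tampered)
```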
Mending metacognitive illusions: a comparison of mnemonic-based and theory-based procedures.
Koriat, Asher; Bjork, Robert A
2006-09-01
Previous research indicated that learners experience an illusion of competence during learning (termed foresight bias) because judgments of learning (JOLs) are made in the presence of information that will be absent at test. The authors examined the following two procedures for alleviating foresight bias: enhancing learners' sensitivity to mnemonic cues pertaining to ease of retrieval, and inducing learners to resort to theory-based judgments as a basis for JOLs. Both procedures proved effective in mending metacognitive illusions, as reflected in JOLs and self-regulation of study time, but only theory-based debiasing yielded transfer to new items. The results support the notion that improved metacognition is one key to optimizing transfer but also that educating subjective experience does not guarantee generalization to new situations. PMID:16938051
Real-time, Adaptive Prediction of Incident Delay for Advanced Traffic Management Systems
Hellinga, Bruce
Delay could be estimated with a deterministic model, such as a deterministic queuing model or shock wave theory, if the future values were known. This paper presents a fuzzy queuing model that can be used to predict the possible delay, or interval of delay, that a vehicle will experience at an incident location, based on real-time information on current and future queuing conditions.
Some highlights on the work in probability theory in India during 1980-2008: A report
Giri, Ranjit K.
Topics covered include the study of birth and death processes, growth of cancer cells, queuing theory and congestion in networks, networks of queues, optimization problems in manufacturing and inventory control, and numerical methods such as matrix methods. Among the cited works are a John Wiley (New York, 2002) monograph and work of R B Lenin and P R Parthasarathy on birth and death models.
Density functional theory based generalized effective fragment potential method.
Nguyen, Kiet A; Pachter, Ruth; Day, Paul N
2014-06-28
We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAM-B3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAM-B3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes. PMID:24985612
IMMAN: free software for information theory-based chemometric analysis.
Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo
2015-05-01
The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated to the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. 
In addition, the use of IMMAN unsupervised feature selection methods is shown to improve the performance of both IMMAN and WEKA supervised algorithms. (Graphical abstract: Shannon entropy distributions for MD-calculating software.) PMID:25620721
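As a rough illustration of the equal-interval discretization plus information-gain ranking that this abstract describes, the following is an independent Python sketch (not IMMAN's actual code; the function names are ours):

```python
import math
from collections import Counter

def discretize(values, bins=10):
    """Equal-interval discretization: map each value to one of `bins` buckets."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature
    return [min(int((v - lo) / width), bins - 1) for v in values]

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, classes, bins=10):
    """H(class) - H(class | discretized feature)."""
    d = discretize(feature, bins)
    n = len(classes)
    cond = 0.0
    for b in set(d):
        sub = [c for x, c in zip(d, classes) if x == b]
        cond += len(sub) / n * entropy(sub)
    return entropy(classes) - cond
```

Ranking features then amounts to sorting them by `information_gain` against the class labels; gain ratio and symmetrical uncertainty are normalized variants of the same quantity.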
Density functional theory based generalized effective fragment potential method
Nguyen, Kiet A. E-mail: ruth.pachter@wpafb.af.mil; Pachter, Ruth E-mail: ruth.pachter@wpafb.af.mil; Day, Paul N.
2014-06-28
We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAM-B3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAM-B3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.
The Application of Carl Rogers' Person-Centered Learning Theory to Web-Based Instruction.
ERIC Educational Resources Information Center
Miller, Christopher T.
This paper provides a review of literature that relates research on Carl Rogers' person-centered learning theory to Web-based learning. Based on the review of the literature, a set of criteria is described that can be used to determine how closely a Web-based course matches the different components of Rogers' person-centered learning theory. Using…
ERIC Educational Resources Information Center
Han, Gang; Newell, Jay
2014-01-01
This study explores the adoption of the team-based learning (TBL) method in knowledge-based and theory-oriented journalism and mass communication (J&MC) courses. It first reviews the origin and concept of TBL, the relevant theories, and then introduces the TBL method and implementation, including procedures and assessments, employed in an…
LSST Telescope Alignment Plan Based on Nodal Aberration Theory
NASA Astrophysics Data System (ADS)
Sebag, J.; Gressler, W.; Schmid, T.; Rolland, J. P.; Thompson, K. P.
2012-04-01
The optical alignment of the Large Synoptic Survey Telescope (LSST) is potentially challenging, due to its fast three-mirror optical design and its large 3.5° field of view (FOV). It is highly advantageous to align the three-mirror optical system prior to the integration of the complex science camera on the telescope, which corrects the FOV via three refractive elements and includes the operational wavefront sensors. A telescope alignment method based on nodal aberration theory (NAT) is presented here to address this challenge. Without the science camera installed on the telescope, the on-axis imaging performance of the telescope is diffraction-limited, but the field of view is not corrected. The nodal properties of the three-mirror telescope design have been analyzed and an alignment approach has been developed using the intrinsically linear nodal behavior, which is linked via sensitivities to the misalignment parameters. Since mirror figure errors will exist in any real application, a methodology to introduce primary-mirror figure errors into the analysis has been developed and is also presented.
The Development of an Attribution-Based Theory of Motivation: A History of Ideas
ERIC Educational Resources Information Center
Weiner, Bernard
2010-01-01
The history of ideas guiding the development of an attribution-based theory of motivation is presented. These influences include the search for a "grand" theory of motivation (from drive and expectancy/value theory), an attempt to represent how the past may influence the present and the future (as Thorndike accomplished), and the incorporation of…
A Constructive Theory of Sampling for Image Synthesis using Reproducing Kernel Bases
Desbrun, Mathieu
In contrast to traditional formulations for image synthesis, we propose an alternative theory of sampling; quadrature rules derived using our theory of sampling provide substantially reduced visual noise and lower error.
NASA Astrophysics Data System (ADS)
Lu, Nianduan; Li, Ling; Banerjee, Writam; Sun, Pengxiao; Gao, Nan; Liu, Ming
2015-07-01
Charge carrier hopping transport is generally described by Miller-Abrahams or Marcus transition rates. Based on Miller-Abrahams theory and nearest-neighbour hopping theory, Apsley and Hughes developed a concise calculation method (the A-H method) for studying hopping conduction in disordered systems. Here, we improve the A-H method to investigate charge carrier hopping transport by introducing the polaron effect and an electric field, based on Marcus theory and variable-range hopping theory. The improved method describes well the contributions of the polaron effect, energetic disorder, carrier density, and electric field to charge transport in disordered organic semiconductors. In addition, the calculated results clearly show that the charge carrier mobility depends on the polaron activation energy and decreases with increasing electric field strength at large fields.
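For context, the Marcus transition rate underlying this kind of polaron-aware hopping model has a standard closed form; the sketch below is a generic textbook implementation, not the authors' A-H method, with all units and names chosen by us:

```python
import math

KB_EV = 8.617333262e-5     # Boltzmann constant, eV/K
HBAR_EV = 6.582119569e-16  # reduced Planck constant, eV*s

def marcus_rate(J, lam, dE, T=300.0):
    """Marcus transition rate (1/s) for a hop between two localized sites.
    J: transfer integral (eV), lam: polaron reorganization energy (eV),
    dE: site energy difference, target minus source (eV)."""
    kT = KB_EV * T
    prefactor = (2.0 * math.pi / HBAR_EV) * J**2 / math.sqrt(4.0 * math.pi * lam * kT)
    return prefactor * math.exp(-((dE + lam) ** 2) / (4.0 * lam * kT))
```

The rate peaks at dE = -lam (the activationless point) and falls off for uphill hops, which is how the polaron activation energy and an applied field (which tilts dE) enter a mobility calculation.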
Stochastic extension of cellular manufacturing systems: a queuing-based analysis
NASA Astrophysics Data System (ADS)
Fardis, Fatemeh; Zandi, Afagh; Ghezavati, Vahidreza
2013-07-01
Clustering parts and machines into part families and machine cells is a major decision in the design of cellular manufacturing systems, known as cell formation. This paper presents a non-linear mixed-integer programming model for designing cellular manufacturing systems which assumes that the arrival rate of parts into cells and the machine service rates are stochastic parameters described by exponential distributions. Uncertainty may create a queue behind each machine; we therefore consider the average waiting time of parts behind each machine in order to obtain an efficient system. The objective function minimizes the sum of the idleness cost of machines, the subcontracting cost for exceptional parts, the non-utilization cost of machines, and the holding cost of parts in the cells. Finally, the linearized model is solved with the CPLEX solver in GAMS, and a sensitivity analysis is performed to illustrate the effect of the parameters.
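Because arrivals and service are exponential, each machine behaves as an M/M/1 queue, so the idleness and waiting-time quantities entering such an objective follow from standard formulas; a minimal sketch (our own function name, not the paper's model):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 measures for one machine with Poisson arrivals
    (rate lam) and exponential service (rate mu); requires lam < mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu  # machine utilization
    return {
        "utilization": rho,
        "idle_prob": 1.0 - rho,        # drives the idleness cost term
        "avg_wait": rho / (mu - lam),  # Wq: mean time a part queues behind the machine
    }
```

For example, a machine serving 4 parts per hour that receives 2 parts per hour is idle half the time and queues each part for 0.25 hours on average.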
Dynamic Neural-Based Buffer Management for Queuing Systems with Self-Similar Characteristics
Yousefi'zadeh, Homayoun
Fairness is measured in terms of individual source packet loss. Complete partitioning (CP) of a buffer has been successfully utilized in dynamic allocation of bandwidth for Variable Bit Rate (VBR) video. Analysis of traffic data from other networks and services, such as VBR video [7] and ISDN traffic.
Optimisation of a honeybee-colony's energetics via social learning based on queuing delays
NASA Astrophysics Data System (ADS)
Thenius, Ronald; Schmickl, Thomas; Crailsheim, Karl
2008-06-01
Natural selection shaped the foraging-related processes of honeybees in such a way that a colony can react to changing environmental conditions optimally. To investigate this complex dynamic social system, we developed a multi-agent model of the nectar flow inside and outside of a honeybee colony. In a honeybee colony, a temporal caste collects nectar in the environment. These foragers bring their harvest into the colony, where they unload their nectar loads to one or more storer bees. Our model predicts that a cohort of foragers, collecting nectar from a single nectar source, is able to detect changes in quality in other food sources they have never visited, via the nectar processing system of the colony. We identified two novel pathways of forager-to-forager communication. Foragers can gain information about changes in the nectar flow in the environment via changes in their mean waiting time for unloadings and the number of experienced multiple unloadings. This way two distinct groups of foragers that forage on different nectar sources and that never communicate directly can share information via a third cohort of worker bees. We show that this noisy and loosely knit social network allows a colony to perform collective information processing, so that a single forager has all necessary information available to be able to 'tune' its social behaviour, like dancing or dance-following. In this way the net nectar gain of the colony is increased.
Tuset-Peiro, Pere; Vazquez-Gallego, Francisco; Alonso-Zarate, Jesus; Alonso, Luis; Vilajosana, Xavier
2014-01-01
Data collection is a key scenario for the Internet of Things because it enables gathering sensor data from distributed nodes that use low-power and long-range wireless technologies to communicate in a single-hop approach. In this kind of scenario, the network is composed of one coordinator that covers a particular area and a large number of nodes, typically hundreds or thousands, that transmit data to the coordinator upon request. Considering this scenario, in this paper we experimentally validate the energy consumption of two Medium Access Control (MAC) protocols, Frame Slotted ALOHA (FSA) and Distributed Queuing (DQ). We model both protocols as a state machine and conduct experiments to measure the average energy consumption in each state and the average number of times that a node has to be in each state in order to transmit a data packet to the coordinator. The results show that FSA is more energy efficient than DQ if the number of nodes is known a priori because the number of slots per frame can be adjusted accordingly. However, in such scenarios the number of nodes cannot be easily anticipated, leading to additional packet collisions and a higher energy consumption due to retransmissions. Contrarily, DQ does not require to know the number of nodes in advance because it is able to efficiently construct an ad hoc network schedule for each collection round. This kind of a schedule ensures that there are no packet collisions during data transmission, thus leading to an energy consumption reduction above 10% compared to FSA. PMID:25061839
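The energy model described in this abstract (mean energy per state weighted by mean visits per state) reduces to a weighted sum; a minimal sketch with hypothetical state names and values:

```python
def avg_energy_per_packet(visits, energy_per_visit):
    """Mean energy (joules) to deliver one packet: sum over MAC states of
    (mean visits to the state per packet) x (mean energy per visit)."""
    assert visits.keys() == energy_per_visit.keys(), "state sets must match"
    return sum(visits[s] * energy_per_visit[s] for s in visits)

# Hypothetical per-state averages for one protocol configuration (not measured data):
visits = {"listen": 3.0, "tx": 2.0, "rx": 1.0, "sleep": 1.0}
energy = {"listen": 0.5e-3, "tx": 1.5e-3, "rx": 0.8e-3, "sleep": 0.01e-3}
total = avg_energy_per_packet(visits, energy)
```

Retransmissions after collisions raise the mean visit counts for the transmit and listen states, which is exactly how the extra FSA energy cost shows up in this model.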
Multidimensional Computerized Adaptive Testing Based on Bayesian Theory
Desmarais, Michel C.
Index terms: assessment, e-learning, Bayesian theory, computerized adaptive testing (CAT), multidimensional item response theory (IRT, MIRT). Compared with traditional classroom instruction, efficient assessment of a learner's proficiency has always been a high priority for intelligent e-learning.
An Approach to Theory-Based Youth Programming
ERIC Educational Resources Information Center
Duerden, Mat D.; Gillard, Ann
2011-01-01
A key but often overlooked aspect of intentional, out-of-school-time programming is the integration of a guiding theoretical framework. The incorporation of theory in programming can provide practitioners valuable insights into essential processes and principles of successful programs. While numerous theories exist that relate to youth development…
Toward A Brain-Based Theory of Beauty
Ishizu, Tomohiro; Zeki, Semir
2011-01-01
We wanted to learn whether activity in the same area(s) of the brain correlate with the experience of beauty derived from different sources. 21 subjects took part in a brain-scanning experiment using functional magnetic resonance imaging. Prior to the experiment, they viewed pictures of paintings and listened to musical excerpts, both of which they rated on a scale of 1–9, with 9 being the most beautiful. This allowed us to select three sets of stimuli–beautiful, indifferent and ugly–which subjects viewed and heard in the scanner, and rated at the end of each presentation. The results of a conjunction analysis of brain activity showed that, of the several areas that were active with each type of stimulus, only one cortical area, located in the medial orbito-frontal cortex (mOFC), was active during the experience of musical and visual beauty, with the activity produced by the experience of beauty derived from either source overlapping almost completely within it. The strength of activation in this part of the mOFC was proportional to the strength of the declared intensity of the experience of beauty. We conclude that, as far as activity in the brain is concerned, there is a faculty of beauty that is not dependent on the modality through which it is conveyed but which can be activated by at least two sources–musical and visual–and probably by other sources as well. This has led us to formulate a brain-based theory of beauty. PMID:21755004
Development and Evaluation of a Theory-Based Physical Activity Guidebook for Breast Cancer Survivors
ERIC Educational Resources Information Center
Vallance, Jeffrey K.; Courneya, Kerry S.; Taylor, Lorian M.; Plotnikoff, Ronald C.; Mackey, John R.
2008-01-01
This study's objective was to develop and evaluate the suitability and appropriateness of a theory-based physical activity (PA) guidebook for breast cancer survivors. Guidebook content was constructed based on the theory of planned behavior (TPB) using salient exercise beliefs identified by breast cancer survivors in previous research. Expert…
How Is a Science Lesson Developed and Implemented Based on Multiple Intelligences Theory?
ERIC Educational Resources Information Center
Kaya, Osman Nafiz
2008-01-01
The purpose of this study is to present the whole process step-by-step of how a science lesson can be planned and implemented based on Multiple Intelligences (MI) theory. First, it provides the potential of the MI theory for science teaching and learning. Then an MI science lesson that was developed based on a modified model in the literature and…
Quantum-Based Theories of Condensed Matter Emily A. Carter
Simons, Jack
Topics include spin-dependent pseudopotential theory for open-shell and magnetic systems and materials applications (bulk, surface, interface, and defect electronic structure; mechanical and magnetic properties), citing "Stainless steel optimization from DFT", Vitos et al., Nature Materials, 2002.
RAMANUJAN'S THEORIES OF ELLIPTIC FUNCTIONS TO ALTERNATIVE BASES
Garvan, Frank
Contents: 1. Introduction; 2. Ramanujan's cubic transformation and the Borweins' cubic theta-function identity. In his famous paper [Ramanujan1], [RamanujanCP, pp. 23-39], Ramanujan offers theories of elliptic functions to alternative bases.
Quantum mechanical embedding theory based on a unique embedding potential
Chen Huang; Pavone, Michele; Carter, Emily A.
2011-04-21
We remove the nonuniqueness of the embedding potential that exists in most previous quantum mechanical embedding schemes by letting the environment and embedded region share a common embedding (interaction) potential. To efficiently solve for the embedding potential, an optimized effective potential method is derived. This embedding potential, which eschews use of approximate kinetic energy density functionals, is then used to describe the environment while a correlated wavefunction (CW) treatment of the embedded region is employed. We first demonstrate the accuracy of this new embedded CW (ECW) method by calculating the van der Waals binding energy curve between a hydrogen molecule and a hydrogen chain. We then examine the prototypical adsorption of CO on a metal surface, here the Cu(111) surface. In addition to obtaining proper site ordering (top site most stable) and binding energies within this theory, the ECW exhibits dramatic changes in the p-character of the CO 4{sigma} and 5{sigma} orbitals upon adsorption that agree very well with x-ray emission spectra, providing further validation of the theory. Finally, we generalize our embedding theory to spin-polarized quantum systems and discuss the connection between our theory and partition density functional theory.
CHIU, CHI-HSUN
2013-05-31
This study involved a comprehensive review of the literature on multimedia design to identify theory-based design principles applicable to online instruction. Seven theories were reviewed. They included Cultural Historical Activity Theory (CHAT...
Evaluating theory-based evaluation: information, norms, and adherence.
Jacobs, W Jake; Sisco, Melissa; Hill, Dawn; Malter, Frederic; Figueredo, Aurelio José
2012-08-01
Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. A marriage of applicable psychological theory, appropriate program evaluation theory, and outcome of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social interventions--the marriage permits us to advance knowledge by making use of both success and failures. We briefly review well-established principles within the field of program evaluation, well-established processes involved in changing social norms and social-norm adherence, the outcome of several program evaluations focusing on smoking prevention, pro-environmental behavior, and rape prevention and, using the principle of learning from our failures, examine why these programs often do not perform as expected. Finally, we discuss the promise of learning from our collective experiences to develop a cumulative science of program evaluation and to improve the performance of extant and future interventions. PMID:22277114
Videogames, Tools for Change: A Study Based on Activity Theory
ERIC Educational Resources Information Center
Méndez, Laura; Lacasa, Pilar
2015-01-01
Introduction: The purpose of this study is to provide a framework for analysis from which to interpret the transformations that take place, as perceived by the participants, when commercial video games are used in the classroom. We will show how Activity Theory (AT) is able to explain and interpret these changes. Method: Case studies are…
PDAs as Lifelong Learning Tools: An Activity Theory Based Analysis
ERIC Educational Resources Information Center
Waycott, Jenny; Jones, Ann; Scanlon, Eileen
2005-01-01
This paper describes the use of an activity theory (AT) framework to analyze the ways that distance part time learners and mobile workers adapted and appropriated mobile devices for their activities and in turn how their use of these new tools changed the ways that they carried out their learning or their work. It is argued that there are two key…
Quantum Game Theory Based on the Schmidt Decomposition (arXiv:quant-ph/0702167v1, 16 Feb 2007)
Tsutsui, Izumi
We present a formulation of quantum game theory based on the Schmidt decomposition. Quantum game theory, which is a theory of games with quantum strategies, has been...
A theory-based approach to teaching young children about health: A recipe for understanding
Nguyen, Simone P.; McCullough, Mary Beth; Noble, Ashley
2011-01-01
The theory-theory account of conceptual development posits that children’s concepts are integrated into theories. Concept learning studies have documented the central role that theories play in children’s learning of experimenter-defined categories, but have yet to extensively examine complex, real-world concepts such as health. The present study examined whether providing young children with coherent and causally-related information in a theory-based lesson would facilitate their learning about the concept of health. This study used a pre-test/lesson/post-test design, plus a five month follow-up. Children were randomly assigned to one of three conditions: theory (i.e., 20 children received a theory-based lesson); nontheory (i.e., 20 children received a nontheory-based lesson); and control (i.e., 20 children received no lesson). Overall, the results showed that children in the theory condition had a more accurate conception of health than children in the nontheory and control conditions, suggesting the importance of theories in children’s learning of complex, real-world concepts. PMID:21894237
Ground Movement Analysis Based on Stochastic Medium Theory
Fei, Meng; Li-chun, Wu; Jia-sheng, Zhang; Guo-dong, Deng; Zhi-hui, Ni
2014-01-01
In order to calculate the ground movement induced by displacement piles driven into horizontal layered strata, an axisymmetric model was built and the vertical and horizontal ground movement functions were then deduced using stochastic medium theory. Results show that the vertical ground movement obeys a normal distribution function, while the horizontal ground movement follows an exponential function. Using field-measured data, the parameters of these functions can be obtained by back analysis, and an example is employed to verify the model. The results show that stochastic medium theory is suitable for calculating ground movement during pile driving, and there is no need to consider the constitutive model of the soil or the contact between pile and soil. This method is applicable in practice. PMID:24701184
Fragment-Based Time-Dependent Density Functional Theory
NASA Astrophysics Data System (ADS)
Mosquera, Martín A.; Jensen, Daniel; Wasserman, Adam
2013-07-01
Using the Runge-Gross theorem that establishes the foundation of time-dependent density functional theory, we prove that for a given electronic Hamiltonian, choice of initial state, and choice of fragmentation, there is a unique single-particle potential (dubbed time-dependent partition potential) which, when added to each of the preselected fragment potentials, forces the fragment densities to evolve in such a way that their sum equals the exact molecular density at all times. This uniqueness theorem suggests new ways of computing the time-dependent properties of electronic systems via fragment-time-dependent density functional theory calculations. We derive a formally exact relationship between the partition potential and the total density, and illustrate our approach on a simple model system for binary fragmentation in a laser field.
Fragment-based time-dependent density functional theory.
Mosquera, Martín A; Jensen, Daniel; Wasserman, Adam
2013-07-12
Using the Runge-Gross theorem that establishes the foundation of time-dependent density functional theory, we prove that for a given electronic Hamiltonian, choice of initial state, and choice of fragmentation, there is a unique single-particle potential (dubbed time-dependent partition potential) which, when added to each of the preselected fragment potentials, forces the fragment densities to evolve in such a way that their sum equals the exact molecular density at all times. This uniqueness theorem suggests new ways of computing the time-dependent properties of electronic systems via fragment-time-dependent density functional theory calculations. We derive a formally exact relationship between the partition potential and the total density, and illustrate our approach on a simple model system for binary fragmentation in a laser field. PMID:23889390
Circuit theory and model-based inference for landscape connectivity
Hanks, Ephraim M.; Hooten, Mevin B.
2013-01-01
Circuit theory has seen extensive recent use in the field of ecology, where it is often applied to study functional connectivity. The landscape is typically represented by a network of nodes and resistors, with the resistance between nodes a function of landscape characteristics. The effective distance between two locations on a landscape is represented by the resistance distance between the nodes in the network. Circuit theory has been applied to many other scientific fields for exploratory analyses, but parametric models for circuits are not common in the scientific literature. To model circuits explicitly, we demonstrate a link between Gaussian Markov random fields and contemporary circuit theory using a covariance structure that induces the necessary resistance distance. This provides a parametric model for second-order observations from such a system. In the landscape ecology setting, the proposed model provides a simple framework where inference can be obtained for effects that landscape features have on functional connectivity. We illustrate the approach through a landscape genetics study linking gene flow in alpine chamois (Rupicapra rupicapra) to the underlying landscape.
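The resistance distance used here as the effective distance can be computed directly from the pseudoinverse of the graph Laplacian via the identity R_ij = L+_ii + L+_jj - 2 L+_ij; a minimal sketch (ours, not the authors' model):

```python
import numpy as np

def resistance_distance(conductance):
    """Pairwise effective resistance on a network, given a symmetric matrix of
    edge conductances (entry [i, j] = 1/resistance of edge i-j, 0 if no edge)."""
    W = np.asarray(conductance, dtype=float)
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    Lp = np.linalg.pinv(L)                      # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - Lp - Lp.T  # R_ij = Lp_ii + Lp_jj - 2*Lp_ij
```

On a three-node path with unit conductances, the two unit resistances add in series, so the end-to-end effective resistance is 2; in a landscape application, the conductance matrix would be parameterized by landscape covariates.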
Gravitational Cherenkov losses in theories based on modified Newtonian dynamics.
Milgrom, Mordehai
2011-03-18
Survival of high-energy cosmic rays (HECRs) against gravitational Cherenkov losses is shown not to cast strong constraints on modified Newtonian dynamics (MOND) theories that are compatible with general relativity (GR): theories that coincide with GR for accelerations ≫ a0 (a0 is the MOND constant). The energy-loss rate, Ė, is many orders smaller than those derived in the literature for theories with no extra scale. The modification to GR, which underlies Ė, enters only beyond the MOND radius of the particle: r_M = (Gp/ca0)^(1/2). The spectral cutoff, entering Ė quadratically, is thus r_M^(-1), not k_dB = p/ℏ. Thus, Ė is smaller than published rates, which use k_dB, by a factor ~(r_M k_dB)^2 ~ 10^39 (cp/3×10^11 GeV)^3. Losses are important only beyond D_loss ~ q ℓ_M, where q is a dimensionless factor and ℓ_M = c^2/a0 is the MOND length, which is ~2π times the Hubble distance. PMID:21469855
The Efficiency of a Homology Algorithm based on Discrete Morse Theory and Coreductions
Mrozek, Marian
(Extended abstract.) The paper analyzes implementations of homology algorithms in which the construction of a discrete Morse theory, as developed by R. Forman [10], is central. Let K be the collection of cells of a finite, regular CW complex.
Evaluations of certain theta functions in Ramanujan theory of alternative modular bases
N. D. Bagis
2015-11-11
We give evaluations of certain Borwein theta functions which appear in Ramanujan's theory of alternative elliptic modular bases. Most of this theory was developed by B.C. Berndt, S. Bhargava and F.G. Garvan. We also study the most general class of these theta functions and give evaluation conjectures.
A Model of Rater Behavior in Essay Grading Based on Signal Detection Theory
ERIC Educational Resources Information Center
DeCarlo, Lawrence T.
2005-01-01
An approach to essay grading based on signal detection theory (SDT) is presented. SDT offers a basis for understanding rater behavior with respect to the scoring of constructed responses, in that it provides a theory of the psychological processes underlying the raters' behavior. The approach also provides measures of the precision of the raters and the…
Cooperative Learning: Improving University Instruction by Basing Practice on Validated Theory
ERIC Educational Resources Information Center
Johnson, David W.; Johnson, Roger T.; Smith, Karl A.
2014-01-01
Cooperative learning is an example of how theory validated by research may be applied to instructional practice. The major theoretical base for cooperative learning is social interdependence theory. It provides clear definitions of cooperative, competitive, and individualistic learning. Hundreds of research studies have validated its basic…
ERIC Educational Resources Information Center
McKown, Clark
2005-01-01
Several school-based racial prejudice-reduction interventions have demonstrated some benefit. Ecological theory serves as a framework within which to understand the limits and to enhance the efficacy of prejudice-reduction interventions. Using ecological theory, this article examines three prejudice-reduction approaches, including social cognitive…
ERIC Educational Resources Information Center
O'Connor, Thomas G.; Matias, Carla; Futh, Annabel; Tantam, Grace; Scott, Stephen
2013-01-01
Parenting programs for school-aged children are typically based on behavioral principles as applied in social learning theory. It is not yet clear if the benefits of these interventions extend beyond aspects of the parent-child relationship quality conceptualized by social learning theory. The current study examined the extent to which a social…
The TEACH Method: An Interactive Approach for Teaching the Needs-Based Theories Of Motivation
ERIC Educational Resources Information Center
Moorer, Cleamon, Jr.
2014-01-01
This paper describes an interactive approach for explaining and teaching the Needs-Based Theories of Motivation. The acronym TEACH stands for Theory, Example, Application, Collaboration, and Having Discussion. This method can help business students to better understand and distinguish the implications of Maslow's Hierarchy of Needs,…
Minimum bases for equational theories of groups and rings: The work of Alfred Tarski
McNulty, George F.
Suppose that T is an equational theory of groups or of rings. If T is finitely axiomatizable, then there is a least number µ so that T can be axiomatized by µ equations. This µ can depend on the operation symbols.
Effects of a social cognitive theory-based hip fracture prevention web site for older adults.
Nahm, Eun-Shim; Barker, Bausell; Resnick, Barbara; Covington, Barbara; Magaziner, Jay; Brennan, Patricia Flatley
2010-01-01
The purposes of this study were to develop a Social Cognitive Theory-based, structured Hip Fracture Prevention Web site for older adults and conduct a preliminary evaluation of its effectiveness. The Theory-based, structured Hip Fracture Prevention Web site is composed of learning modules and a moderated discussion board. A total of 245 older adults recruited from two Web sites and a newspaper advertisement were randomized into the Theory-based, structured Hip Fracture Prevention Web site and the conventional Web sites groups. Outcomes included (1) knowledge (hip fractures and osteoporosis), (2) self-efficacy and outcome expectations, and (3) calcium intake and exercise and were assessed at baseline, end of treatment (2 weeks), and follow-up (3 months). Both groups showed significant improvement in most outcomes. For calcium intake, only the Theory-based, structured Hip Fracture Prevention Web site group showed improvement. None of the group and time interactions were significant. The Theory-based, structured Hip Fracture Prevention Web site group, however, was more satisfied with the intervention. The discussion board usage was significantly correlated with outcome gains. Despite several limitations, the findings showed some preliminary effectiveness of Web-based health interventions for older adults and the use of a Theory-based, structured Hip Fracture Prevention Web site as a sustainable Web structure for online health behavior change interventions. PMID:20978408
Everett, Louis J.
Performance Theory Based Outcome Measurement in Engineering Education and Training. William E. Dillon and George V. Kondraske. IEEE Transactions on Education, vol. 43, no. 2, May 2000, p. 92.
Joint inversion of receiver function and ambient noise based on Bayesian theory
van der Hilst, Robert D.
In this study, we present a method for the joint inversion of receiver function and ambient noise based on Bayesian inverse theory (Tarantola, 1987, 2005). The nonlinear inversion method of the complex spectrum ratio of ...
A reactive mobile robot based on a formal theory of action
Baral, C. Floriano, L.; Gabaldon, A.
1996-12-31
One item on the agenda behind research in reasoning about actions is to develop autonomous agents (robots) that can act in a dynamic world. Early attempts to use theories of reasoning about actions and planning to formulate a robot control architecture were not successful for several reasons: the early theories, based on STRIPS and its extensions, allowed only observations about the initial state. A robot control architecture using these theories was usually of the form: (i) make observations; (ii) use the action theory to construct a plan to achieve the goal; and (iii) execute the plan.
Wireless network traffic modeling based on extreme value theory
NASA Astrophysics Data System (ADS)
Liu, Chunfeng; Shu, Yantai; Yang, Oliver W. W.; Liu, Jiakun; Dong, Linfang
2006-10-01
In this paper, Extreme Value Theory (EVT) is applied to analyze wireless network traffic. EVT provides procedures that are scientifically and statistically rational for estimating the extreme behavior of random processes. There are two primary methods for studying extremes: the Block Maximum (BM) method and the Peaks Over Threshold (POT) method. Taking only the traffic data that exceed the threshold value, our experiments and analysis show that the wireless network traffic model obtained with EVT fits the empirical traffic distribution well, illustrating that EVT is a promising tool for the analysis of wireless network traffic.
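The POT approach described in this abstract can be sketched by fitting a generalized Pareto distribution to the exceedances above a chosen threshold. The sketch below uses synthetic heavy-tailed data as a stand-in for measured traffic; the threshold choice and all numbers are illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "traffic" samples (stand-in for measured rates)
traffic = rng.pareto(3.0, size=10_000) + 1.0

threshold = np.quantile(traffic, 0.95)            # POT: keep the top 5%
exceedances = traffic[traffic > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, _, scale = genpareto.fit(exceedances, floc=0.0)

# Tail quantile: the traffic level exceeded with probability 1e-4 overall
p_exceed = exceedances.size / traffic.size
level = threshold + genpareto.ppf(1 - 1e-4 / p_exceed, shape, loc=0.0, scale=scale)
print(shape, scale, level)
```

The fitted shape parameter governs tail heaviness; extrapolating quantiles beyond the data, as in the last line, is the main practical payoff of the POT method.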
Interactive image segmentation framework based on control theory
NASA Astrophysics Data System (ADS)
Zhu, Liangjia; Kolesov, Ivan; Ratner, Vadim; Karasev, Peter; Tannenbaum, Allen
2015-03-01
Segmentation of anatomical structures in medical imagery is a key step in a variety of clinical applications. Designing a generic, automated method that works for various structures and imaging modalities is a daunting task. Instead of proposing a new specific segmentation algorithm, in this paper, we present a general design principle on how to integrate user interactions from the perspective of control theory. In this formulation, Lyapunov stability analysis is employed to design an interactive segmentation system. The effectiveness and robustness of the proposed method are demonstrated.
Wajid, Bilal
2015-06-25
approach [3]. This method recognizes the individual bases of the DNA by flagging the bioluminescence produced by the bases as and when they attach themselves onto a primed DNA template. This scheme is referred to as ‘sequencing by synthesis’ (SS). SS takes...
NASA Technical Reports Server (NTRS)
Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander
2010-01-01
Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the lamina have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as the refined zigzag theory.
Capacity and delay estimation for roundabouts using conflict theory.
Qu, Zhaowei; Duan, Yuzhou; Hu, Hongyu; Song, Xianmin
2014-01-01
To estimate the capacity of roundabouts more accurately, the priority rank of each stream is determined through the classification technique given in the Highway Capacity Manual 2010 (HCM2010), which is based on macroscopic analysis of the relationship between entry flow and circulating flow. Then a conflict matrix is established using the additive conflict flow method and by considering the impacts of traffic characteristics and limited priority with high volume. Correspondingly, the conflict relationships of streams are built using probability theory. Furthermore, the entry capacity model of roundabouts is built, and sensitivity analysis is conducted on the model parameters. Finally, the entrance delay model is derived using queuing theory, and the proposed capacity model is compared with the model proposed by Wu and that in the HCM2010. The results show that the capacity calculated by the proposed model is lower than the others for an A-type roundabout, while it is basically consistent with the estimated values from HCM2010 for a B-type roundabout. PMID:24982982
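The authors' entrance delay model is not reproduced in this abstract. As a minimal illustration of a queuing-theoretic delay estimate, an entry stream facing a fixed entry capacity can be treated as an M/M/1 queue; this is a simplifying assumption for the sketch, not the paper's model.

```python
def mm1_delay(arrival_rate, capacity):
    """Mean time in system (hours/vehicle) for an entry modeled as M/M/1.

    arrival_rate: entry demand (veh/h); capacity: entry capacity (veh/h).
    Valid only for an undersaturated entry (arrival_rate < capacity).
    """
    if arrival_rate >= capacity:
        raise ValueError("entry is oversaturated; steady-state delay is unbounded")
    return 1.0 / (capacity - arrival_rate)

# 600 veh/h of demand against an 800 veh/h entry capacity:
print(mm1_delay(600, 800) * 3600)  # → 18.0 seconds per vehicle
```

The sketch also shows why such models break down near capacity: as demand approaches the capacity estimate, the predicted delay grows without bound, so the accuracy of the capacity model directly drives the accuracy of the delay estimate.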
Non-fragile H∞ synchronization of memristor-based neural networks using passivity theory.
Mathiyalagan, K; Anbuvithya, R; Sakthivel, R; Park, Ju H; Prakash, P
2016-02-01
In this paper, we formulate and investigate the mixed H∞ and passivity based synchronization criteria for memristor-based recurrent neural networks with time-varying delays. Some sufficient conditions are obtained to guarantee the synchronization of the considered neural network based on the master-slave concept, differential inclusions theory and Lyapunov-Krasovskii stability theory. Also, the memristive neural network is considered with two different types of memductance functions and two types of gain variations. The results for non-fragile observer-based synchronization are derived in terms of linear matrix inequalities (LMIs). Finally, the effectiveness of the proposed criterion is demonstrated through numerical examples. PMID:26655373
Determination of the Sediment Carrying Capacity Based on Perturbed Theory
Ni, Zhi-hui; Zeng, Qiang; Li-chun, Wu
2014-01-01
Building on previous studies of sediment carrying capacity, a new method based on perturbed theory was proposed. By taking into account the average water depth, average flow velocity, settling velocity, and other influencing factors, and introducing the median grain size as one main influencing factor, we established a new sediment carrying capacity formula. The coefficients were determined by dimensional analysis, multiple linear regression, and the least squares method. The new formula was then verified against measured data from natural rivers and flume tests, and the results were compared with those calculated by the Cao, Zhang, Li, Engelund-Hansen, Ackers-White, and Yang formulas. The comparison shows that the new method is of high accuracy and could be a useful reference for determining sediment carrying capacity. PMID:25136652
Content Based Image Retrieval and Information Theory: A General Approach.
ERIC Educational Resources Information Center
Zachary, John; Iyengar, S. S.; Barhen, Jacob
2001-01-01
Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
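The image-entropy idea can be sketched as the Shannon entropy of a channel's intensity histogram; the bin count and the per-channel treatment below are illustrative assumptions, since the abstract does not specify the paper's exact representation.

```python
import numpy as np

def image_entropy(channel, bins=256):
    """Shannon entropy (bits) of one image channel's intensity histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 (0·log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# A uniform random 8-bit channel approaches the maximum of 8 bits,
# while a constant channel has entropy exactly 0.
rng = np.random.default_rng(1)
print(image_entropy(rng.integers(0, 256, (64, 64))))
print(image_entropy(np.full((64, 64), 128)))  # → 0.0
```

A single scalar per channel is far cheaper to store and compare than a full color histogram, which is the trade-off the abstract weighs against the information a histogram retains.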
Learning Trajectory Based Instruction: Toward a Theory of Teaching
ERIC Educational Resources Information Center
Sztajn, Paola; Confrey, Jere; Wilson, P. Holt; Edgington, Cynthia
2012-01-01
In this article, we propose a theoretical connection between research on learning and research on teaching through recent research on students' learning trajectories (LTs). We define learning trajectory based instruction (LTBI) as teaching that uses students' LTs as the basis for instructional decisions. We use mathematics as the context for our…
Improved routing strategy based on gravitational field theory
NASA Astrophysics Data System (ADS)
Song, Hai-Quan; Guo, Jin
2015-10-01
Routing and path selection are crucial for many communication and logistics applications. We study the interaction between nodes and packets and establish a simple model, based on gravitational field theory, for the attraction of a node to a packet during transmission, considering both the real and the potential congestion of the nodes. On the basis of this model, we propose a gravitational field routing strategy that considers the attraction of every node on the travel path to the packet. To illustrate the efficiency of the proposed routing algorithm, we introduce an order parameter that measures network throughput via the critical value of the phase transition from a free-flow phase to a congested phase, and we study the distribution of betweenness centrality and traffic jams. Simulations show that, compared with the shortest-path routing strategy, the gravitational field routing strategy considerably enhances network throughput and balances the traffic load, with nearly all nodes used efficiently. Project supported by the Technology and Development Research Project of China Railway Corporation (Grant No. 2012X007-D) and the Key Program of Technology and Development Research Foundation of China Railway Corporation (Grant No. 2012X003-A).
Cryptography based on operator theory (I): quantum no-key protocols
Li Yang; Min Liang
2012-10-31
We study cryptography based on operator theory, propose quantum no-key (QNK) protocols from this perspective, and present a framework for QNK protocols. The framework is expressed in two forms: trace-preserving quantum operators and natural presentations. We then define the information-theoretic security of QNK protocols and the security of identification keys. Two kinds of QNK protocols are proposed: the first is constructed from a unitary transformation, the other from two multiplicative commutative sets.
A Proxy Signature Scheme Based on Coding Theory
NASA Astrophysics Data System (ADS)
Jannati, Hoda; Falahati, Abolfazl
Proxy signature helps the proxy signer to sign messages on behalf of the original signer. This signature is used when the original signer is not available to sign a specific document. In this paper, we introduce a new proxy signature scheme based on Stern's identification scheme whose security depends on syndrome decoding problem. The proposed scheme is the first code-based proxy signature and can be used in a quantum computer. In this scheme, the operations to perform are linear and very simple thus the signature is performed quickly and can be implemented using smart card in a quite efficient way. The proposed scheme also satisfies unforgeability, undeniability, non-transferability and distinguishability properties which are the security requirements for a proxy signature.
Scale-invariant entropy-based theory for dynamic ordering
Mahulikar, Shripad P. E-mail: spm@aero.iitb.ac.in; Kumari, Priti
2014-09-01
Dynamically Ordered self-organized dissipative structure exists in various forms and at different scales. This investigation first introduces the concept of an isolated embedding system, which embeds an open system, e.g., dissipative structure and its mass and/or energy exchange with its surroundings. Thereafter, scale-invariant theoretical analysis is presented using thermodynamic principles for Order creation, existence, and destruction. The sustainability criterion for Order existence based on its structured mass and/or energy interactions with the surroundings is mathematically defined. This criterion forms the basis for the interrelationship of physical parameters during sustained existence of dynamic Order. It is shown that the sufficient condition for dynamic Order existence is approached if its sustainability criterion is met, i.e., its destruction path is blocked. This scale-invariant approach has the potential to unify the physical understanding of universal dynamic ordering based on entropy considerations.
Imitation dynamics of vaccine decision-making behaviors based on the game theory
Martcheva, Maia
To study the imitation dynamics of vaccine uptake, an age-structured model is introduced, based on a model proposed from the endemic and vaccinator equilibrium. Our study shows that imitation behavior is one factor…
Lung Sound Recognition Using Model-Theory Based Feature Selection and Fusion
Kokar, Mieczyslaw M.
Applies a model-theory based recognition methodology to the recognition of lung sounds. Two main features of this methodology are feature selection using an entropy-based criterion and feature fusion. To evaluate the methodology we used both normal lung sounds…
Using Game Theory and Competition-Based Learning to Stimulate Student Motivation and Performance
ERIC Educational Resources Information Center
Burguillo, Juan C.
2010-01-01
This paper introduces a framework for using Game Theory tournaments as a base to implement Competition-based Learning (CnBL), together with other classical learning techniques, to motivate the students and increase their learning performance. The paper also presents a description of the learning activities performed along the past ten years of a…
An Instructional Design Theory for Interactions in Web-Based Learning Environments.
ERIC Educational Resources Information Center
Lee, Miyoung; Paulus, Trena
This study developed and formatively evaluated an instructional design theory to guide designers in selecting when and how to utilize interactions as instructional methods in a Web-based distance learning higher education environment. Research questions asked: What are the types and outcomes of interactions between participants in a Web-based…
Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment
ERIC Educational Resources Information Center
Chen, Jing
2012-01-01
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…
Research on e-learning services based on ontology theory
NASA Astrophysics Data System (ADS)
Liu, Rui
2013-07-01
E-learning services can realize network learning resource sharing and interoperability, but they cannot realize automatic discovery, implementation, and integration of services. This paper proposes an ontology-based framework for e-learning services: ontology technology is applied to the publication and discovery of e-learning services in order to realize their accurate and efficient retrieval and utilization.
Sensor-Based Collision Avoidance: Theory and Experiments
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Steele, Robert; Ivlev, Robert
1996-01-01
A new on-line control strategy for sensor-based collision avoidance of manipulators and supporting experimental results are presented in this article. This control strategy is based on nullification of virtual forces applied to the end-effector by a hypothetical spring-plus-damper attached to the object's surface. In the proposed approach, the real-time arm control software continuously monitors the object distance measured by the arm-mounted proximity sensors. When this distance is less than a preset threshold, the collision avoidance control action is initiated to inhibit motion toward the object and thus prevent collision. This is accomplished by employing an outer feedback loop to perturb the end-effector nominal motion trajectory in real-time based on the sensory data. The perturbation is generated by a proportional-plus-integral (PI) collision avoidance controller acting on the difference between the sensed distance and the preset threshold. This approach is computationally very fast, requires minimal modification to the existing manipulator positioning system, and provides the manipulator with an on-line collision avoidance capability to react autonomously and intelligently. A dexterous RRC robotic arm is instrumented with infrared proximity sensors and is operated under the proposed collision avoidance strategy. Experimental results are presented to demonstrate end-effector collision avoidance both with an approaching object and while reaching inside a constricted opening.
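The PI perturbation loop described above can be sketched as below. The gains, the preset threshold, and the scalar distance model are illustrative assumptions, not the authors' implementation.

```python
def pi_avoidance_step(distance, threshold, integral, kp=0.8, ki=0.2, dt=0.01):
    """One PI step perturbing the end-effector trajectory away from an object.

    The error is the amount by which the sensed distance falls below the
    preset threshold; outside the threshold the error is zero, so the
    nominal trajectory is followed unperturbed.
    Returns (perturbation, updated integral state).
    """
    error = max(0.0, threshold - distance)
    integral += error * dt
    return kp * error + ki * integral, integral

# Object approaching from 0.30 m down to 0.05 m, threshold preset at 0.10 m:
integral, perturb = 0.0, 0.0
for d in (0.30, 0.20, 0.12, 0.08, 0.05):
    perturb, integral = pi_avoidance_step(d, 0.10, integral)
print(perturb > 0.0)  # → True: the trajectory is pushed away from the object
```

The integral term matches the abstract's proportional-plus-integral controller: it accumulates persistent threshold violations so that the perturbation keeps growing until the object distance is restored above the threshold.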
ERIC Educational Resources Information Center
Willis, Jerry
2011-01-01
This article is the second in a series (see Willis, 2011) that looks at the current status of instructional design scholarship and theory. In this concluding article, the focus is on two cultures of ID work, one based on constructivist and interpretivist theory and the other based on critical theory and critical pedagogy. There are distinct…
Venture Capital Investment Based on Grey Relational Theory
NASA Astrophysics Data System (ADS)
Zhang, Xubo
This paper builds a venture capital investment project selection evaluation model based on risk-weighted investment return, using grey relational analysis. The risks and returns in the project selection process are analyzed; they are mainly concentrated in management ability, operation ability, market ability, exit prospects, and investment cost. Eighteen sub-indicators are the impact factors contributing to these five evaluation aspects. Grey relational analysis is used to evaluate venture capital investment selection and to obtain the optimal solution of the risk-weighted double-objective investment selection evaluation model. An example demonstrates the model.
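A minimal sketch of grey relational analysis for ranking candidate projects follows; the criteria matrix, equal weights, and benefit-type normalization are illustrative assumptions, not the paper's 18-indicator, risk-weighted model.

```python
import numpy as np

def grey_relational_grades(data, weights=None, rho=0.5):
    """Grey relational grade of each alternative (row) vs. the ideal sequence.

    data: (m alternatives x n benefit criteria), larger is better.
    rho: distinguishing coefficient, conventionally 0.5.
    """
    data = np.asarray(data, dtype=float)
    # Normalize each criterion to [0, 1] (benefit-type normalization)
    norm = (data - data.min(0)) / (data.max(0) - data.min(0))
    ref = norm.max(0)                          # ideal reference sequence
    delta = np.abs(norm - ref)                 # deviation from the ideal
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    w = np.full(data.shape[1], 1 / data.shape[1]) if weights is None else np.asarray(weights)
    return coef @ w                            # weighted mean relational coefficient

# Three hypothetical projects scored on three criteria:
scores = grey_relational_grades([[8, 7, 9], [6, 9, 5], [9, 6, 8]])
print(scores.argmax())  # index of the best-ranked project
```

Replacing the equal weights with risk-derived weights is where a risk-weighted variant like the paper's would plug in.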
Kinematics and dynamics of deployable structures with scissor-like-elements based on screw theory
NASA Astrophysics Data System (ADS)
Sun, Yuantao; Wang, Sanmin; Mills, James K.; Zhi, Changjian
2014-07-01
Deployable structures are complex multi-loop structures, and derivation methods that lead to simpler kinematic and dynamic equations of motion are an active subject of research. Accordingly, the kinematics and dynamics of deployable structures with scissor-like elements (SLEs) are presented based on screw theory and the principle of virtual work, respectively. Given the geometric characteristics of the deployable structure examined, the basic structural unit is the common SLE. First, a spatial deployable structure comprised of three SLEs is defined and its constraint topology graph is obtained; the equations of motion are then derived based on screw theory and the geometric nature of scissor elements. Second, to develop the dynamics of the whole deployable structure, the local coordinates of the SLEs and the Jacobian matrices of the center of mass of the deployable structure are derived. The equivalent forces are then assembled and added to the equations of motion based on the principle of virtual work. Finally, the dynamic behavior and unfolding process of the deployable structure are simulated, and curves of velocity, acceleration, and input torque are obtained from the simulation results. Screw theory not only provides an efficient solution formulation and theoretical guidance for complex multi-closed-loop deployable structures, but also extends to solving their dynamics. As an efficient mathematical tool, it yields simpler equations of motion.
Density functional theory investigations of graphene-based heterostructures
NASA Astrophysics Data System (ADS)
Ebnonnasir, Abbas
Graphene, a two-dimensional single crystal of carbon atoms arranged in a honeycomb lattice, is attractive for applications in nanoelectromechanical devices; in high-performance, low-power electronics, and as transparent electrodes. The present study employs Density Functional Theory (DFT) to identify the atomic and electronic structure of graphene (Gr) on three different types of substrates: transition metals (nickel, palladium), insulators (hBN) and semiconductors (MoS2). Our DFT calculations show that graphene layer on Ni(111) and Ni(110) becomes metallic owing to large binding energies and strong hybridization between nickel and carbon bands. Furthermore, in Gr/Gr/palladium systems, we find that the electrostatic dipoles at the Gr/palladium and Gr/Gr interfaces are oppositely oriented. This leads to a work function of bilayer graphene domains on palladium (111) higher than that of monolayer graphene; the strengths of these dipoles are sensitive to the relative orientation between the two graphene layers and between the graphene and palladium (111). Additionally, the binding energy of graphene on palladium (111) depends on its orientation. We elucidate the physical origin of the effect of growing graphene on hBN/Ni(111) on the binding of hBN to a Ni(111) substrate, and on the electronic properties of hBN. We find that hBN/Ni has two configurational minima, one chemisorbed and one physisorbed, whose properties are not altered when graphene is placed atop hBN. However, a switch from chemisorbed to physisorbed hBN on Ni can occur due to the processing conditions during graphene growth; this switch is solely responsible for changing the hBN layer from metallic to insulating, and not the interactions with graphene. Finally, we find that the relative orientation between graphene and MoS2 layers affects the value and the nature of the bandgap of MoS2, while keeping the electronic structure of graphene unaltered. 
This relative orientation does not affect the binding energy or the distance between graphene and MoS2 layers. However, it changes the registry between the two layers, which strongly influences the value and type of the bandgap in MoS2.
NASA Astrophysics Data System (ADS)
Oral, I.; Dogan, O.
2007-04-01
The aim of this study is to determine the effect of course materials based on Multiple Intelligence Theory on the intelligence groups' learning process. The results showed that materials prepared according to Multiple Intelligence Theory have a considerable effect on the students' learning process, particularly for the musical-rhythmic, verbal-linguistic, interpersonal-social, and naturalist intelligence groups.
[Training -- competency-based education -- learning theory and practice].
Breuer, Georg
2013-11-01
A lifelong learning process is necessarily the basis for specialization and expertise in the field of anesthesiology. Competency as a physician is thus a complex, multidimensional construction of knowledge, skills, and attitudes needed to solve the complex challenges of daily work in a flexible and responsible way. Experts therefore show flexible and intuitive capabilities in pursuing their profession, and modern competency-based learning objectives are very helpful. The DGAI Commission for "Further Education" already thought ahead in defining a competency-based curriculum for specialization in anesthesiology, which could be integrated into the frameworks of the German Medical Association. In addition to the curricular framework, elements of assessment are necessary; a single oral exam is not representative of different levels of competency. Besides the learners' responsibility for their own learning process, clinical teachers have a strong obligation to attend to the learning process and to ensure a positive learning atmosphere with scope for feedback. Some competencies could potentially be learned better in a "sheltered" simulation room outside the OR, for example to train for rare incidents or emergency procedures. In general, there should be an ongoing effort to enhance the process of expertise development, also in the context of patient safety and quality management. PMID:24343144
Safety models incorporating graph theory based transit indicators.
Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M
2013-01-01
There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as: connectivity, coverage, overlapping degree and the Local Index of Transit Availability. As well, the models showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, length of bus and 3+ priority lanes. PMID:22831497
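A negative binomial (NB2) collision model of the kind described can be sketched by maximizing the log-likelihood directly; the covariates, coefficients, and dispersion value below are synthetic stand-ins, not the study's data or estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)
n = 2000
# Hypothetical zonal covariates: a connectivity index and a bus-density measure
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 5, n)])
beta_true = np.array([0.5, 1.2, 0.3])
k = 2.0                                   # NB dispersion parameter (assumed known)
mu = np.exp(X @ beta_true)                # log link, as in a GLM
y = rng.negative_binomial(k, k / (k + mu))

def nb_negloglik(beta):
    """Negative log-likelihood of an NB2 regression with log link."""
    eta = X @ beta
    log_k_plus_mu = np.logaddexp(np.log(k), eta)   # stable log(k + mu)
    ll = (gammaln(y + k) - gammaln(k) - gammaln(y + 1)
          + k * (np.log(k) - log_k_plus_mu) + y * (eta - log_k_plus_mu))
    return -ll.sum()

fit = minimize(nb_negloglik, x0=np.zeros(3), method="BFGS")
print(fit.x)  # coefficient estimates should be near beta_true
```

The NB error structure lets the conditional variance exceed the mean, which is why it is preferred over Poisson regression for collision counts that are typically overdispersed.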
ERIC Educational Resources Information Center
Knox, A. Whitney; Miller, Bruce A.
1980-01-01
Describes a method for estimating the number of cathode ray tube terminals needed for public use of an online library catalog. Authors claim method could also be used to estimate needed numbers of microform readers for a computer output microform (COM) catalog. Formulae are included. (Author/JD)
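The article's formulae are not reproduced in this abstract. A standard way to size such a terminal pool is the M/M/c (Erlang C) mean-wait formula, sketched below with illustrative arrival and service rates rather than the authors' figures.

```python
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean queueing wait (same time unit as rates) in an M/M/c queue.

    Returns None when the queue is unstable (offered load >= servers).
    """
    a = arrival_rate / service_rate            # offered load (erlangs)
    if servers <= a:
        return None
    tail = a**servers / (factorial(servers) * (1 - a / servers))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(servers)) + tail)
    return p_wait / (servers * service_rate - arrival_rate)

# 30 patrons/hour, 4-minute sessions (15/hour): how many terminals keep the
# mean wait under one minute?
terminals = next(c for c in range(1, 20)
                 if (w := erlang_c_wait(30, 15, c)) is not None and w * 60 < 1)
print(terminals)  # → 4
```

The same calculation applies to the COM readers mentioned in the abstract: only the arrival and service rates change.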
Game Theory Based Trust Model for Cloud Environment.
Gokulnath, K; Uthariaraj, Rhymend
2015-01-01
The aim of this work is to propose a method to establish trust at bootload level in cloud computing environment. This work proposes a game theoretic based approach for achieving trust at bootload level of both resources and users perception. Nash equilibrium (NE) enhances the trust evaluation of the first-time users and providers. It also restricts the service providers and the users to violate service level agreement (SLA). Significantly, the problem of cold start and whitewashing issues are addressed by the proposed method. In addition appropriate mapping of cloud user's application to cloud service provider for segregating trust level is achieved as a part of mapping. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics like execution time, accuracy, error identification, and undecidability of the resources were considered. PMID:26380365
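The paper's game formulation is not given in this abstract. As a toy illustration of how a pure-strategy Nash equilibrium constrains provider and user behavior, consider a 2x2 bimatrix game; the payoff numbers are hypothetical.

```python
import numpy as np

def pure_nash(payoff_a, payoff_b):
    """Pure-strategy Nash equilibria of a two-player bimatrix game.

    A cell (i, j) is an equilibrium when neither player gains by
    unilaterally deviating: i is a best response to j, and vice versa.
    """
    A, B = np.asarray(payoff_a), np.asarray(payoff_b)
    return [(i, j)
            for i in range(A.shape[0]) for j in range(A.shape[1])
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]

# Toy provider/user game: rows = provider {honor SLA, violate SLA},
# cols = user {behave, whitewash}; both prefer mutual compliance.
provider = [[3, 1], [2, 0]]
user     = [[3, 2], [1, 0]]
print(pure_nash(provider, user))  # → [(0, 0)]: honor SLA / behave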
Game Theory Based Trust Model for Cloud Environment
Gokulnath, K.; Uthariaraj, Rhymend
2015-01-01
The aim of this work is to propose a method to establish trust at bootload level in cloud computing environment. This work proposes a game theoretic based approach for achieving trust at bootload level of both resources and users perception. Nash equilibrium (NE) enhances the trust evaluation of the first-time users and providers. It also restricts the service providers and the users to violate service level agreement (SLA). Significantly, the problem of cold start and whitewashing issues are addressed by the proposed method. In addition appropriate mapping of cloud user's application to cloud service provider for segregating trust level is achieved as a part of mapping. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics like execution time, accuracy, error identification, and undecidability of the resources were considered. PMID:26380365
[The Chinese urban metabolisms based on the emergy theory].
Song, Tao; Cai, Jian-Ming; Ni, Pan; Yang, Zhen-Shan
2014-04-01
By using emergy indices of urban metabolisms, this paper analyzed 31 Chinese urban metabolisms' systematic structures and characteristics in 2000 and 2010. The results showed that Chinese urban metabolisms were characterized as resource consumption and coastal external dependency. Non-renewable resource emergy accounted for a higher proportion of the total emergy in the inland cities' urban metabolisms. The emergy of imports and exports accounted for the vast majority of urban metabolic systems in metropolises and coastal cities such as Beijing and Shanghai, showing a significant externally-oriented metabolic characteristic. Based on that, the related policies were put forward: to develop the renewable resource and energy industry; to improve the non-renewable resource and energy utilization efficiencies; to optimize the import and export structure of services, cargo and fuel; and to establish the flexible management mechanism of urban metabolisms. PMID:25011303
Microfluidic, Bead-Based Assay: Theory and Experiments
Thompson, Jason A.; Bau, Haim H.
2009-01-01
Microbeads are frequently used as a solid support for biomolecules such as proteins and nucleic acids in heterogeneous microfluidic assays. However, relatively few studies investigate the binding kinetics on modified bead surfaces in a microfluidics context. In this study, a customized hot embossing technique is used to stamp microwells in a thin plastic substrate where streptavidin-coated agarose beads are selectively placed and subsequently immobilized within a conduit. Biotinylated quantum dots are used as a label to monitor target analyte binding to the bead's surface. Three-dimensional finite element simulations are carried out to model the binding kinetics on the bead's surface. The model accounts for surface exclusion effects resulting from a single quantum dot occluding multiple receptor sites. The theoretical predictions are compared and favorably agree with experimental observations. The theoretical simulations provide a useful tool to predict how varying parameters affect microbead reaction kinetics and sensor performance. This study enhances our understanding of bead-based microfluidic assays and provides a design tool for developers of point-of-care, lab-on-chip devices for medical diagnosis, food and water quality inspection, and environmental monitoring. PMID:19766545
Rotthoff, Thomas; Schneider, Matthias; Ritz-Timme, Stefanie; Windolf, Joachim
2015-01-01
Objective: Already during their studies, medical students should intensively train their clinical thinking and practice skills, enhancing their clinical expertise in theoretical and practical terms. Methods: Based on the findings of educational research, a new curriculum for clinical training was developed at Duesseldorf University, focussing on workplace-based teaching, learning and assessment. Results: For students in their 3rd, 4th and 5th year of study, our curriculum is based on learning with patient complaint items in regard to multidisciplinary areas of outpatient and inpatient care. For this educational format, 123 complaint items were defined and their compatibility with diseases from various disciplines was tested. Based on the complaint of a specific case, students locate the underlying disease pattern, the differential diagnostic and therapeutical procedures and thereby deepen the required knowledge in the basic subjects. Study books have been created by the clinical departments to support this process. Learning is integrated in competence-oriented and workplace-based learning and assessment, offering a close-knit contact between students and doctors. Conclusion: The concept allows the integration of theory into practice and the integration of knowledge from the basic, clinical-theoretical and clinical subjects into clinical thinking and action. PMID:25699107
Quantifying Uncertainty in Physics-Based Models with Measure Theory
NASA Astrophysics Data System (ADS)
Butler, T.
2014-12-01
The ultimate goal in scientific inference is to predict some unobserved behavior of a system often described by a specific set of quantities of interest (QoI) computed from the solution to a mathematical model. For example, given a contaminant transport model in a regional aquifer with specified porosity, initial concentrations, etc., we may analyze remediation strategies in order to achieve threshold tolerances in certain wells. Solution to this prediction problem is complicated by several sources of uncertainty. A primary source of uncertainty is in the specification of the input data and parameters that characterize the physical properties of a given state of the system. It is often impossible to experimentally observe the input data and parameters that characterize the physical properties of the modeled system, and any observable QoI are generally of the state of the system itself and may differ substantially from the QoI to be predicted. Thus, we must first solve an inverse problem using observable QoI to quantify uncertainties in the inputs to the model. The inverse problem is complicated by several issues. The solution of the model induces a "QoI map" from the space of model inputs to the observable QoI computed from the solution of the model. This QoI map is generally a "many-to-few" map that reduces the dimension implying that the deterministic inverse problem has set-valued solutions. Additionally, available data on the QoI are subject to natural variation and experimental/observational error modeled as probability measures implying that solutions of the inverse problem and forward prediction problem are given in terms of probability measures. We describe a novel measure-theoretic framework for the formulation and solution of the stochastic inverse problem for a deterministic physics-based model requiring minimal assumptions. A computational algorithm and error analysis are described accounting for both deterministic and stochastic sources of error.
An Approach for Leukemia Classification Based on Cooperative Game Theory
Torkaman, Atefeh; Charkari, Nasrollah Moghaddam; Aghaeipour, Mahnaz
2011-01-01
Hematological malignancies are cancers that affect the blood, bone marrow and lymph nodes. As these tissues are naturally connected through the immune system, a disease affecting one of them will often affect the others as well. The hematological malignancies include leukemia, lymphoma and multiple myeloma. Among them, leukemia is a serious malignancy that starts in blood tissues, especially the bone marrow, where the blood is made. Research shows that leukemia is one of the common cancers in the world, so emphasis on diagnostic techniques and best treatments can provide better prognosis and survival for patients. In this paper, an automatic diagnosis recommender system for classifying leukemia based on cooperative game theory is presented. Throughout this research, we analyze flow cytometry data toward the classification of leukemia into eight classes. We work on a real data set from different types of leukemia collected at the Iran Blood Transfusion Organization (IBTO). The data set contains 400 samples taken from human leukemic bone marrow. This study uses a cooperative game for classification according to different weights assigned to the markers. The proposed method is versatile, as there are no constraints on what the input or output represent; it can be used to classify a population according to their contributions and thus applies equally to other groups of data. The experimental results show an accuracy rate of 93.12% for classification, compared to 90.16% for a decision tree (C4.5). The results demonstrate that the cooperative game approach is very promising for direct classification of leukemia as part of an active medical decision support system for interpretation of flow cytometry readout.
This system could assist clinical hematologists to properly recognize different kinds of leukemia by preparing suggestions and this could improve the treatment of leukemic patients. PMID:21988887
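The marker weighting described above rests on cooperative game theory. As a hedged sketch (the characteristic function and marker names below are hypothetical illustrations, not the authors' actual game over flow cytometry markers), the classical Shapley value is one standard way to turn a coalition game into per-player weights:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player for a characteristic function v(coalition)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = frozenset(coal)
                # weight = |S|! * (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += w * (v(s | {p}) - v(s))
        phi[p] = total
    return phi

# Hypothetical additive game: each marker contributes a fixed amount on its own.
contrib = {"marker_A": 0.5, "marker_B": 0.3, "marker_C": 0.2}
v = lambda coalition: sum(contrib[m] for m in coalition)
weights = shapley_values(list(contrib), v)
```

For an additive game the Shapley value recovers each player's individual contribution, and by the efficiency property the weights sum to the value of the grand coalition.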
Theory of normal and superconducting properties of fullerene-based solids
Cohen, M.L.
1992-10-01
Recent experiments on the normal-state and superconducting properties of fullerene-based solids are used to constrain the proposed theories of the electronic nature of these materials. In general, models of superconductivity based on electron pairing induced by phonons are consistent with electronic band theory. The latter experiments also yield estimates of the parameters characterizing these type II superconductors. It is argued that, at this point, a "standard model" of phonons interacting with itinerant electrons may be a good first approximation for explaining the properties of the metallic fullerenes.
Enhancing quality of practice through theory of change-based evaluation: science or practice?
Julian, David A
2005-06-01
This paper describes the evaluation component of Partnerships for Success (PfS), a comprehensive community effort designed to address youth development issues. The evaluation component is referred to as "theory of change-based evaluation." The author considers the implications of applying community practice tools such as theory of change-based evaluation to the current conceptualization of community science. More specifically, the author argues that the current conceptualization of community science pays scant attention to community practice. This paper concludes by suggesting that the current conceptualization of community science be modified to recognize the importance of community practice as an equal aspiration for community psychologists. PMID:15909792
Studying thin film damping in a micro-beam resonator based on non-classical theories
NASA Astrophysics Data System (ADS)
Ghanbari, Mina; Hossainpour, Siamak; Rezazadeh, Ghader
2015-09-01
In this paper, a mathematical model is presented for studying thin film damping of the surrounding fluid in an in-plane oscillating micro-beam resonator. The model consists of a clamped-clamped micro-beam bound between two fixed layers, with the micro-gap between the micro-beam and the fixed layers filled with air. Because classical theories cannot properly predict either the size-dependent behavior of the micro-beam or the behavior of micro-scale fluid media, the equation of motion governing the longitudinal displacement of the micro-beam is derived based on non-local elasticity theory, and the fluid field is modeled based on micro-polar theory. These coupled equations are simplified using the Newton-Laplace and continuity equations. After transformation to non-dimensional form and linearization, the equations are discretized and solved simultaneously using a Galerkin-based reduced-order model. Considering slip boundary conditions and applying a complex frequency approach, the equivalent damping ratio and quality factor of the micro-beam resonator are obtained, and the quality factor values are compared to those based on classical theories: applying non-classical theories yields lower values of the quality factor than classical theories. The effects of geometrical parameters of the micro-beam and the micro-scale fluid field on the quality factor of the resonator are also investigated.
Tomimatsu, Hiroshi
Evolutionary developmental biology: Charles Darwin's theory of evolution is based on three principles products, morphogens, that act analogously to the external stimuli in bacteria. These discoveries drew
Tools and Experiments Supporting a Testing-based Theory of Component Composition
Hamlet, Richard
Tools and Experiments Supporting a Testing-based Theory of Component Composition. Dick Hamlet, Department of Computer Science, Portland State University, P.O. Box 751, Portland. Other engineering disciplines have been
Time-dependent self-consistent-field dynamics based on a reaction path Hamiltonian. I. Theory
Hammes-Schiffer, Sharon
Time-dependent self-consistent-field dynamics based on a reaction path Hamiltonian. I. Theory. Combines the time-dependent self-consistent-field (TDSCF) method with the reaction path Hamiltonian (RPH) for the calculation of the real-time quantum dynamics of chemical reactions involving polyatomic molecules. When both
Design of Secure Image Transmission In MANET using Number Theory Based Image Compression and
International Association for Cryptologic Research (IACR)
Design of Secure Image Transmission In MANET using Number Theory Based Image Compression. The algorithm has been implemented on color images. The image coding results, calculated from actual image size, address utilization and the encryption problems at the same time.
Glacier mapping based on rough set theory in the Manas River watershed
NASA Astrophysics Data System (ADS)
Yan, Lili; Wang, Jian; Hao, Xiaohua; Tang, Zhiguang
2014-04-01
Precise glacier information is important for assessing climate change in remote mountain areas. To obtain more accurate glacier mapping, rough set theory, which can deal with vague and uncertain information, was introduced to obtain optimal knowledge rules for glacier mapping. Optical images, thermal infrared band data, texture information and morphometric parameters were combined to build a decision table for the proposed rough set method. After discretizing the real-valued attributes, decision rules for glacier mapping were calculated with a decision rule generation algorithm. A decision classifier based on the generated rules classified the multispectral image into glacier and non-glacier areas. The result of maximum likelihood classification (MLC) was compared with the result of the rough set classification. A confusion matrix and visual interpretation were used to evaluate the overall accuracy of the two methods, yielding 94.15% for the rough set method and 93.88% for MLC. Comparing the glacier areas from the two methods against visual interpretation also showed a smaller area difference for the rough set method. The higher accuracy and smaller area difference demonstrate that the rough set method is effective and promising for glacier mapping.
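The rough set machinery referenced above is the lower/upper approximation of a target concept by attribute-induced equivalence classes. A minimal, self-contained sketch (the toy pixel partition below is illustrative, not the paper's Manas River decision table):

```python
def rough_approximations(partition, target):
    """Lower/upper approximations of `target` w.r.t. a partition of the
    universe into equivalence classes (items with identical attribute values)."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:   # class lies entirely inside the concept: certain
            lower |= block
        if block & target:    # class overlaps the concept: possible
            upper |= block
    return lower, upper

# Toy example: five pixels grouped by identical (band, texture) attributes.
partition = [{1, 2}, {3, 4}, {5}]
glacier = {1, 2, 3}           # pixels labeled "glacier" in training data
lower, upper = rough_approximations(partition, glacier)
boundary = upper - lower      # the genuinely ambiguous pixels
```

The boundary region (here pixels 3 and 4) is exactly the "vague" part of the concept; decision rules certain on the lower approximation can be generated directly, while boundary classes need further attributes.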
ERIC Educational Resources Information Center
Gabriel, Rachael
2011-01-01
In 1999, Ball and Cohen proposed a practice-based theory of professional education, which would end inadequate professional development efforts with a more comprehensive approach. Their work has been referenced over the past decade, yet there have been limited attempts to actualize their ideals and research their implications. In this article, I…
Portuguese Public University Student Satisfaction: A Stakeholder Theory-Based Approach
ERIC Educational Resources Information Center
Mainardes, Emerson; Alves, Helena; Raposo, Mario
2013-01-01
In accordance with the importance of the student stakeholder to universities, the objective of this research project was to evaluate student satisfaction at Portuguese public universities as regards their self-expressed core expectations. The research was based both on stakeholder theory itself and on previous studies of university stakeholders.…
Redox Potentials and Acidity Constants from Density Functional Theory Based Molecular Dynamics
Cheng, Jun; Liu, Xiandong; VandeVondele, Joost; Sulpizi, Marialore; Sprik, Michiel
2014-11-03
Implementation. Phys. Chem. Chem. Phys. 2008, 10, 5238–5249. 11 Cheng, J.; Sulpizi, M.; Sprik, M. Redox Potentials and pKa for Benzoquinone from Density Functional Theory Based Molecular Dynamics. J. Chem. Phys. 2009, 131, 154504. 12 Costanzo, F.; Della Valle, R...
The Idea of National HRD: An Analysis Based on Economics and Theory Development Methodology
ERIC Educational Resources Information Center
Wang, Greg G.; Swanson, Richard A.
2008-01-01
Recent human resource development (HRD) literature focuses attention on national HRD (NHRD) research and represents problems in both HRD identity and research methodology. Based on a review of development economics and international development literature, this study analyzes the existing NHRD literature with respect to the theory development…
Schema Theory-based Computational Approach to Support Children's Conceptual Understanding
Dimitrova, Vania
is based on schema theory that explains how meaning-making occurs and stresses the importance of prior on schema activation and modification is described. The architecture addresses three important issues system `Going to the Moon', as an integrated component in a reading session. An experimental study
Text Empirics-Based Mining of Biomolecular Interactions from Texts: Theory and Application
Berleant, Daniel
Text Empirics-Based Mining of Biomolecular Interactions from Texts: Theory and Application. Daniel Berleant; Iowa State University, Ames; Ohio State University Medical Center; Procter & Gamble Co., Miami, Ohio. Abstract: Vast quantities of biomedical data exist, not only in well-structured databases
ERIC Educational Resources Information Center
Nezhnov, Peter; Kardanova, Elena; Vasilyeva, Marina; Ludlow, Larry
2015-01-01
The present study tested the possibility of operationalizing levels of knowledge acquisition based on Vygotsky's theory of cognitive growth. An assessment tool (SAM-Math) was developed to capture a hypothesized hierarchical structure of mathematical knowledge consisting of procedural, conceptual, and functional levels. In Study 1, SAM-Math was…
Theory and Utility-Key Themes in Evidence-Based Assessment: Comment on the Special Section
ERIC Educational Resources Information Center
McFall, Richard M.
2005-01-01
This article focuses on two key themes in the four featured reviews on evidence-based assessment. The first theme is the essential role of theory in psychological assessment. An overview of this complex, multilayered role is presented. The second theme is the need for a common metric with which to gauge the utility of specific psychological tests…
A Context-Based Theory of Recency and Contiguity in Free Recall Per B. Sederberg
Howard, Marc
A Context-Based Theory of Recency and Contiguity in Free Recall. Per B. Sederberg, Princeton. Presents a new model of free recall on the basis of M. W. Howard and M. J. Kahana's (2002a) temporal context model, which gives rise to short-term and long-term contiguity effects. Recall decisions are controlled by a race between
Portfolio Theory-Based Resource Assignment in a Cloud Computing System
Pedram, Massoud
Portfolio Theory-Based Resource Assignment in a Cloud Computing System. Inkwon Hwang and Massoud Pedram. The focus of this paper is on energy-aware resource management in a cloud computing system. Keywords: cloud computing; portfolio effect; bin-packing; resource allocation.
English Textbooks Based on Research and Theory--A Possible Dream.
ERIC Educational Resources Information Center
Suhor, Charles
1984-01-01
Research based text materials will probably never dominate the textbook market. To begin with, translating theory and research into practice is a chancy business. There are also creative problems such as the inherent oversimplification involved in textbook writing. Every textbook writer who has been a classroom teacher will acknowledge that such…
Poverty Lines Based on Fuzzy Sets Theory and Its Application to Malaysian Data
ERIC Educational Resources Information Center
Abdullah, Lazim
2011-01-01
Defining the poverty line has been acknowledged as being highly variable by the majority of published literature. Despite long discussions and successes, poverty line has a number of problems due to its arbitrary nature. This paper proposes three measurements of poverty lines using membership functions based on fuzzy set theory. The three…
ERIC Educational Resources Information Center
Liaw, Shu-Sheng; Hatala, Marek; Huang, Hsiu-Mei
2010-01-01
Mobile devices could facilitate human interaction and access to knowledge resources anytime and anywhere. With respect to wide application possibilities of mobile learning, investigating learners' acceptance towards it is an essential issue. Based on activity theory approach, this research explores positive factors for the acceptance of m-learning…
Revisiting Transactional Distance Theory in a Context of Web-Based High-School Distance Education
ERIC Educational Resources Information Center
Murphy, Elizabeth Anne; Rodriguez-Manzanares, Maria Angeles
2008-01-01
The purpose of this paper is to report on a study that provided an opportunity to consider Transactional Distance Theory (TDT) in a current technology context of web-based learning in distance education (DE), high-school classrooms. Data collection relied on semi-structured interviews conducted with 22 e-teachers and managers in Newfoundland and…
From Theory to Practice: Concept-Based Inquiry in a High School Art Classroom
ERIC Educational Resources Information Center
Walker, Margaret A.
2014-01-01
This study examines what an emerging educational theory looks like when put into practice in an art classroom. It explores the teaching methodology of a high school art teacher who has utilized concept-based inquiry in the classroom to engage his students in artmaking and analyzes the influence this methodology has had on his adolescent students.…
ERIC Educational Resources Information Center
Li, Zhenying
2012-01-01
Based on Constructivism Theory, this paper aims to investigate the application of online multimedia courseware to college English teaching. By making experiments and students' feedback, some experience has been accumulated, and some problems are discovered and certain revelations are acquired as well in English teaching practice, which pave the…
ERIC Educational Resources Information Center
Henderson, Ronald W.; And Others
Theory-based prototype computer-video instructional modules were developed to serve as an instructional supplement for students experiencing difficulty in learning mathematics, with special consideration given to students underrepresented in mathematics (particularly women and minorities). Modules focused on concepts and operations for factors,…
Imitation dynamics of vaccine decision-making behaviours based on the game theory.
Yang, Junyuan; Martcheva, Maia; Chen, Yuming
2016-12-01
Based on game theory, we propose an age-structured model to investigate the imitation dynamics of vaccine uptake. We first obtain the existence and local stability of equilibria. We show that Hopf bifurcation can occur. We also establish the global stability of the boundary equilibria and persistence of the disease. The theoretical results are supported by numerical simulations. PMID:26536171
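A minimal, non-age-structured sketch of such imitation dynamics (the payoff form and all parameter values below are illustrative assumptions, not the paper's model): coverage grows when the perceived payoff of vaccinating exceeds that of not vaccinating.

```python
def imitation_uptake(x0=0.5, r=0.2, m=1.0, k=1.0, dt=0.01, steps=20000):
    """Euler integration of dx/dt = k * x * (1 - x) * (m*(1 - x) - r),
    where x is vaccine coverage, r a perceived vaccination cost, and
    m*(1 - x) a perceived infection risk that falls as coverage rises."""
    x = x0
    for _ in range(steps):
        x += dt * k * x * (1 - x) * (m * (1 - x) - r)
    return x
```

With these illustrative numbers the interior equilibrium sits at x* = 1 - r/m = 0.8, and the imitation dynamics settle there from any interior starting coverage.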
Eustice, Ryan
Magnetometer Bias Calibration Based on Relative Angular Position: Theory and Experimental Results. A method for estimating the sensor bias of three-axis magnetometers (or any other field sensor). Our approach employs relative angular position measurements to estimate the three-axis magnetometer measurement bias.
Electron-phonon coupling in low dimensional graphene-based systems: theory and numerical simulation
Botti, Silvana
Electron-phonon coupling in low dimensional graphene-based systems: theory and numerical simulation, Ecole Polytechnique. The design of new opto-electronic devices requires understanding of the properties of valence electrons. In nano-objects, these properties are unique due to the electronic confinement in low
Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model
ERIC Educational Resources Information Center
de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.
2011-01-01
Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…
ERIC Educational Resources Information Center
Fein, Lance; Jones, Don
2015-01-01
This study addresses the compromise skills that are taught to students diagnosed with autistic spectrum disorders (ASD) and related social and communication deficits. A private school in the southeastern United States implemented an emergence theory-based curriculum to address these skills, yet no formal analysis was conducted to determine its…
Evidence-Based Practice in Kinesiology: The Theory to Practice Gap Revisited
ERIC Educational Resources Information Center
Knudson, Duane
2005-01-01
As evidence-based practice sweeps the applied health professions, it is a good time to evaluate the generation of knowledge in Kinesiology and its transmission to professionals and the public. Knowledge transmission has been debated in the past from the perspectives of the theory-to-practice gap and the discipline versus profession emphasis.…
ERIC Educational Resources Information Center
Campbell, Chris; MacPherson, Seonaigh; Sawkins, Tanis
2014-01-01
This case study describes how sociocultural and activity theory were applied in the design of a publicly funded, Canadian Language Benchmark (CLB)-based English as a Second Language (ESL) credential program and curriculum for immigrant and international students in postsecondary institutions in British Columbia, Canada. The ESL Pathways Project…
Critically Evaluating Competing Theories: An Exercise Based on the Kitty Genovese Murder
ERIC Educational Resources Information Center
Sagarin, Brad J.; Lawler-Sagarin, Kimberly A.
2005-01-01
We describe an exercise based on the 1964 murder of Catherine Genovese--a murder observed by 38 witnesses, none of whom called the police. Students read a summary of the murder and worked in small groups to design an experiment to test the competing theories for the inaction of the witnesses (Americans' selfishness and insensitivity vs. diffusion…
Transdiagnostic Theory and Application of Family-Based Treatment for Youth with Eating Disorders
ERIC Educational Resources Information Center
Loeb, Katharine L.; Lock, James; Greif, Rebecca; le Grange, Daniel
2012-01-01
This paper describes the transdiagnostic theory and application of family-based treatment (FBT) for children and adolescents with eating disorders. We review the fundamentals of FBT, a transdiagnostic theoretical model of FBT and the literature supporting its clinical application, adaptations across developmental stages and the diagnostic spectrum…
Social Theory, Sacred Text, and Sing-Sing Prison: A Sociology of Community-Based Reconciliation.
ERIC Educational Resources Information Center
Erickson, Victoria Lee
2002-01-01
Examines the sociological component of the urban community-based professional education programs at New York Theological Seminary offered at Sing-Sing Prison. Explores the simultaneous use of social theory and sacred texts as teaching tools and intervention strategies in the educational and personal transformation processes of men incarcerated for…
Density functional theory based calculations of the vibrational properties of chlorophyll-a
Hastings, Gary
Density functional theory based calculations of the vibrational properties of chlorophyll-a Ruili in revised form 8 March 2007; accepted 13 March 2007 Available online 23 March 2007 Abstract Chlorophyll organisms, such as plants algae and cyanobacteria. To study the chlorophyll-a species at the heart
A limiter based on kinetic theory
Junk, Michael
A limiter based on kinetic theory. Mapundi K. Banda, Michael Junk, Axel Klar. Abstract: In the present paper the low Mach number limit of kinetic equations is used to develop a discretization for the incompressible Euler equation. The kinetic equation
Effects of Guided Writing Strategies on Students' Writing Attitudes Based on Media Richness Theory
ERIC Educational Resources Information Center
Lan, Yu-Feng; Hung, Chun-Ling; Hsu, Hung-Ju
2011-01-01
The purpose of this paper is to develop different guided writing strategies based on media richness theory and further evaluate the effects of these writing strategies on younger students' writing attitudes in terms of motivation, enjoyment and anxiety. A total of 66 sixth-grade elementary students with an average age of twelve were invited to…
A Theory-Based Hydrometeor Identification Algorithm for X-Band Polarimetric Radars
Collett Jr., Jeffrey L.
pose significant problems. This study seeks to develop a hydrometeor identification (HID) algorithm from scattering simulations that form the basis of the new X-band HID. The theory-based X-band HID is applied to a case from the S-band polarimetric Next Generation Weather Radar (NEXRAD) prototype radar, KOUN. The X-band HID shows promise
Predicting Study Abroad Intentions Based on the Theory of Planned Behavior
ERIC Educational Resources Information Center
Schnusenberg, Oliver; de Jong, Pieter; Goel, Lakshmi
2012-01-01
The emphasis on study abroad programs is growing in the academic context as U.S. based universities seek to incorporate a global perspective in education. Using a model that has underpinnings in the theory of planned behavior (TPB), we predict students' intention to participate in short-term study abroad program. We use TPB to identify behavioral,…
Aligning Theory and Web-Based Instructional Design Practice with Design Patterns.
ERIC Educational Resources Information Center
Frizell, Sherri S.; Hubscher, Roland
Designing instructionally sound Web courses is a difficult task for instructors who lack experience in interaction and Web-based instructional design. Learning theories and instructional strategies can provide course designers with principles and design guidelines associated with effective instruction that can be utilized in the design of…
Interpretation-Based Processing: A Unified Theory of Semantic Sentence Comprehension
ERIC Educational Resources Information Center
Budiu, Raluca; Anderson, John R.
2004-01-01
We present interpretation-based processing--a theory of sentence processing that builds a syntactic and a semantic representation for a sentence and assigns an interpretation to the sentence as soon as possible. That interpretation can further participate in comprehension and in lexical processing and is vital for relating the sentence to the…
Mixture theory-based poroelasticity as a model of interstitial tissue growth
Cowin, Stephen C.; Cardoso, Luis
2011-01-01
This contribution presents an alternative approach to mixture theory-based poroelasticity by transferring some poroelastic concepts developed by Maurice Biot to mixture theory. These concepts are a larger RVE and the subRVE-RVE velocity average tensor, which Biot called the micro-macro velocity average tensor. This velocity average tensor is assumed here to depend upon the pore structure fabric. The formulation of mixture theory presented is directed toward the modeling of interstitial growth, that is to say changing mass and changing density of an organism. Traditional mixture theory considers constituents to be open systems, but the entire mixture is a closed system. In this development the mixture is also considered to be an open system as an alternative method of modeling growth. Growth is slow and accelerations are neglected in the applications. The velocity of a solid constituent is employed as the main reference velocity in preference to the mean velocity concept from the original formulation of mixture theory. The standard development of statements of the conservation principles and entropy inequality employed in mixture theory are modified to account for these kinematic changes and to allow for supplies of mass, momentum and energy to each constituent and to the mixture as a whole. The objective is to establish a basis for the development of constitutive equations for growth of tissues. PMID:22184481
Cost performance satellite design using queueing theory. [via digital simulation
NASA Technical Reports Server (NTRS)
Hein, G. F.
1975-01-01
A modified Poisson arrival, infinite-server queuing model is used to determine the effects of limiting the number of broadcast channels (C) of a direct broadcast satellite used for public service purposes (remote health care, education, etc.). The model is based on the reproductive property of the Poisson distribution. A difference equation has been developed to describe the change in the Poisson parameter. When all initially delayed arrivals reenter the system, a (C + 1)-order polynomial must be solved to determine the effective value of the Poisson parameter. When less than 100% of the arrivals reenter the system, the effective value must be determined by solving a transcendental equation. The model was used to determine the minimum number of channels required for a disaster warning satellite without degradation in performance. Results predicted by the queuing model were compared with the results of digital simulation.
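The retrial effect described above can be illustrated with a toy fixed-point computation (a sketch only: the function names and the specific fixed-point form are illustrative assumptions, not the paper's actual (C + 1)-order polynomial):

```python
import math

def poisson_tail(lam, c):
    # P(N >= c) for N ~ Poisson(lam): probability demand reaches/exceeds c channels
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(c))

def effective_rate(lam, c, tol=1e-10, max_iter=1000):
    # hypothetical fixed point: delayed arrivals that reenter add to the offered load
    lam_eff = lam
    for _ in range(max_iter):
        nxt = lam + lam_eff * poisson_tail(lam_eff, c)
        if abs(nxt - lam_eff) < tol:
            break
        lam_eff = nxt
    return lam_eff
```

For a nominal rate of 2 requests per service interval and C = 6 channels, the effective rate settles only slightly above 2, which mirrors the abstract's finding that a modest channel count suffices.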
Learning control system design based on 2-D theory - An application to parallel link manipulator
NASA Technical Reports Server (NTRS)
Geng, Z.; Carroll, R. L.; Lee, J. D.; Haynes, L. H.
1990-01-01
An approach to iterative learning control system design based on two-dimensional system theory is presented. A two-dimensional model for the iterative learning control system which reveals the connections between learning control systems and two-dimensional system theory is established. A learning control algorithm is proposed, and the convergence of learning using this algorithm is guaranteed by two-dimensional stability. The learning algorithm is applied successfully to the trajectory tracking control problem for a parallel link robot manipulator. The excellent performance of this learning algorithm is demonstrated by the computer simulation results.
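The iterative learning idea can be sketched on a scalar first-order plant (a hypothetical example: the plant parameters, learning gain, and simple proportional update law are illustrative assumptions, not the paper's 2-D formulation):

```python
def simulate(u, a=0.5, b=1.0):
    # first-order plant x[t+1] = a*x[t] + b*u[t]; output y[t] = x[t+1]
    x, ys = 0.0, []
    for ut in u:
        x = a * x + b * ut
        ys.append(x)
    return ys

def ilc(ref, iters=100, gamma=0.5):
    # iterative learning control: repeat the task, correct the input with the error
    u = [0.0] * len(ref)
    for _ in range(iters):
        e = [r - y for r, y in zip(ref, simulate(u))]
        u = [ui + gamma * ei for ui, ei in zip(u, e)]  # learning update
    return u
```

The update contracts the tracking error from one trial to the next whenever |1 - gamma*b| < 1, which is the scalar analogue of the 2-D stability condition the paper uses to guarantee convergence.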
NASA Astrophysics Data System (ADS)
Xie, Li; Han, Kui; Ma, Yanping; Zhou, Jùn
2013-09-01
The electrification of sand grains lifting off from a sand bed is investigated experimentally. Sand grains were found to carry charges comparable in magnitude with previous experimental results, with the charge depending on grain size, soil pH, relative humidity, and the electric field. Based on the theory of the diffuse double layer (DDL) and Hertz contact theory, an electrification mechanism due to the breakup of the DDLs of sand grains is presented, and a formula that takes environmental conditions and grain parameters into account is obtained to calculate the charge-to-mass ratio of lift-off sand grains.
Free vibration of size-dependent magneto-electro-elastic nanobeams based on the nonlocal theory
NASA Astrophysics Data System (ADS)
Ke, Liao-Liang; Wang, Yue-Sheng
2014-09-01
This paper investigates the free vibration of magneto-electro-elastic (MEE) nanobeams based on the nonlocal theory and Timoshenko beam theory. The MEE nanobeam is subjected to the external electric potential, magnetic potential and uniform temperature rise. The governing equations and boundary conditions are derived by using the Hamilton principle and discretized by using the differential quadrature (DQ) method to determine the natural frequencies and mode shapes. A detailed parametric study is conducted to study the influences of the nonlocal parameter, temperature rise, external electric and magnetic potentials on the size-dependent vibration characteristics of MEE nanobeams.
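The nonlocal constitutive relation underlying such analyses is commonly written in Eringen's differential form with magneto-electro-elastic coupling (a standard statement of the theory, not taken verbatim from this abstract; symbols are the conventional ones):

```latex
% Eringen nonlocal relation: (e_0 a) is the nonlocal parameter, c, e, q the
% elastic, piezoelectric and piezomagnetic constants, E and H the fields.
\left(1 - (e_0 a)^2 \nabla^2\right)\sigma_{ij}
  = c_{ijkl}\,\varepsilon_{kl} - e_{kij}\,E_k - q_{kij}\,H_k
```

Setting e_0 a = 0 recovers the classical (local) magneto-electro-elastic constitutive law.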
Reference Frame Fields based on Quantum Theory Representations of Real and Complex Numbers
Paul Benioff
2008-03-05
A quantum theory representation of real (R) and complex (C) numbers is given that is based on states of single, finite strings of qukits for any base k > 1. Both unary representations and the possibility that qukits with k a prime number are elementary and the rest composite are discussed. Cauchy sequences of qukit string states are defined from the arithmetic properties. The representations of R and C, as equivalence classes of these sequences, differ from classical kit string state representations in two ways: the freedom of choice of basis states, and the fact that each quantum theory representation is part of a mathematical structure that is itself based on the real and complex numbers. These aspects enable the description of 3-dimensional frame fields labeled by different k values, different basis or gauge choices, and different iteration stages. The reference frames in the field are based on each R and C representation where each frame contains representations of all physical theories as mathematical structures based on the R and C representation. Approaches to integrating this with physics are described. It is observed that R and C values of physical quantities, matrix elements, etc. which are viewed in a frame as elementary and featureless, are seen in a parent frame as equivalence classes of Cauchy sequences of qukit string states.
Kerner, B S; Brakemeier, A
2007-01-01
A testbed for wireless vehicle communication based on a microscopic model in the framework of three-phase traffic theory is presented. In this testbed, vehicle motion in traffic flow and analyses of a vehicle communication channel access based on IEEE 802.11e mechanisms, radio propagation modeling, message reception characteristics as well as all other effects associated with ad-hoc networks are integrated into a three-phase traffic flow model. Based on simulations of this testbed, some statistical features of ad-hoc vehicle networks as well as the effect of C2C communication on increase in the efficiency and safety of traffic are studied.
Liu, Mengying; Sun, Peihua
2014-01-01
A typical model of a hypersonic vehicle has complicated dynamics such as unstable states, nonminimum phases, and strongly coupled input-output relations. As a result, designing a robust stabilization controller is essential to implement the anticipated tasks. This paper presents a robust stabilization controller based on the guardian maps theory for hypersonic vehicle. First, the guardian maps theories are provided to explain the constraint relations between the open subsets of the complex plane and the eigenvalues of the state matrix of the closed-loop control system. Then, a general control structure in relation to the guardian maps theories is proposed to achieve the expected design demands. Furthermore, the robust stabilization control law depending on the given general control structure is designed for the longitudinal model of hypersonic vehicle. Finally, a simulation example is provided to verify the effectiveness of the proposed methods. PMID:24795535
Failure analysis of a cracked plate based on endochronic plastic theory coupled with damage
NASA Astrophysics Data System (ADS)
Chow, C. L.; Chen, X. F.
1993-03-01
An anisotropic model of damage mechanics for ductile fracture incorporating the endochronic theory of plasticity is presented in order to take into account material deterioration during plastic deformation. An alternative form of endochronic (internal time) theory which is actually an elasto-plastic damage theory with isotropic-nonlinear kinematic hardening is developed for ease of numerical computation. Based on this new damage model, a finite element algorithm is formulated and then employed to characterize the fracture of thin aluminum plate containing a center crack. A new criterion termed as Y(sub R)-criterion is proposed to define both the crack initiation angle and load. Experiments have been conducted to verify the validity of the proposed damage model and it is found that the theoretical crack initiation loads correspond closely with the measured values.
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.; Chapman, Walter G.
2013-01-01
In the framework of Wertheim's theory, we develop the first classical density functional theory for patchy colloids where the patch can bond more than once. To test the theory we perform new Monte Carlo simulations for the model system of patchy colloids in a planar slit pore. The theory is shown to be in excellent agreement with simulation for the density profiles and bonding fractions. It is also shown that the theory obeys the wall contact rule by accurately predicting bulk pressures from the wall contact density.
On the Foundations of the Theory of a new Collatz Based Number System
Michael A. Idowu
2015-03-18
Set out here are some fundamental theories that may be regarded as newly discovered metamathematics of the odd integers in relation to the Collatz conjecture (also called the 3x+1 problem). Originally motivated by the requirement to invent a new optimised integer factorisation method, this foundational paper primarily focuses on the foundation, formalisation and presentation of a new theoretical framework (schema or blueprint) of a Collatz based number system. The proposed framework is based on metamathematical theories meticulously derived through iterative analyses and reverse engineering (i.e., by hand and mathematical computations) of many large subsets of integers. A collation of the fundamental results from these analytical attempts has led to the establishment of a completely deterministic model of a generalised Collatz based number system that is fundamentally and strangely associated with nonchaotic patterns. The proposed Collatz based number schema comprises both visual and theoretical representations of many hidden patterns in Collatz sequences yet to be reported in the literature. This novel theoretical approach may be viewed as a new method for contemporary Collatz conjecture research that may be connected to the proofs of many other mathematical theorems in number theory and discrete mathematics.
Membrane-Based Characterization of a Gas Component — A Transient Sensor Theory
Lazik, Detlef
2014-01-01
Based on a multi-gas solution-diffusion problem for a dense symmetrical membrane this paper presents a transient theory of a planar, membrane-based sensor cell for measuring gas from both initial conditions: dynamic and thermodynamic equilibrium. Using this theory, the ranges for which previously developed, simpler approaches are valid will be discussed; these approaches are of vital interest for membrane-based gas sensor applications. Finally, a new theoretical approach is introduced to identify varying gas components by arranging sensor cell pairs resulting in a concentration independent gas-specific critical time. Literature data for the N2, O2, Ar, CH4, CO2, H2 and C4H10 diffusion coefficients and solubilities for a polydimethylsiloxane membrane were used to simulate gas specific sensor responses. The results demonstrate the influence of (i) the operational mode; (ii) sensor geometry and (iii) gas matrices (air, Ar) on that critical time. Based on the developed theory the case-specific suitable membrane materials can be determined and both operation and design options for these sensors can be optimized for individual applications. The results of mixing experiments for different gases (O2, CO2) in a gas matrix of air confirmed the theoretical predictions. PMID:24608004
Personality and Psychopathology: a Theory-Based Revision of Eysenck’s PEN Model
van Kampen, Dirk
2009-01-01
The principal aim of this paper is to investigate whether it is possible to create a personality taxonomy of clinical relevance out of Eysenck’s original PEN model by repairing the various shortcomings that can be noted in Eysenck’s personality theory, particularly in relation to P or Psychoticism. Addressing three approaches that have been followed to answer the question ‘which personality factors are basic?’, arguments are listed to show that particularly the theory-informed approach, originally defended by Eysenck, may lead to scientific progress. However, also noting the many deficiencies in the nomological network surrounding P, the peculiar situation arises that we adhere to Eysenck’s theory-informed methodology, but criticize his theory. These arguments and criticisms led to the replacement of P by three orthogonal and theory-based factors, Insensitivity (S), Orderliness (G), and Absorption (A), that together with the dimensions E or Extraversion and N or Neuroticism, that were retained from Eysenck’s PEN model, appear to give a comprehensive account of the main vulnerability factors in schizophrenia and affective disorders, as well as in other psychopathological conditions. PMID:20498694
NASA Astrophysics Data System (ADS)
Rahmani, O.; Jandaghian, A. A.
2015-06-01
In this paper, a general third-order beam theory that accounts for nanostructure-dependent size effects and two-constituent material variation through the nanobeam thickness, i.e., functionally graded material (FGM) beam is presented. The material properties of FG nanobeams are assumed to vary through the thickness according to the power law. A detailed derivation of the equations of motion based on Eringen nonlocal theory using Hamilton's principle is presented, and a closed-form solution is derived for buckling behavior of the new model with various boundary conditions. The nonlocal elasticity theory includes a material length scale parameter that can capture the size effect in a functionally graded material. The proposed model is efficient in predicting the shear effect in FG nanobeams by applying third-order shear deformation theory. The proposed approach is validated by comparing the obtained results with benchmark results available in the literature. In the following, a parametric study is conducted to investigate the influences of the length scale parameter, gradient index, and length-to-thickness ratio on the buckling of FG nanobeams and the improvement on nonlocal third-order shear deformation theory comparing with the classical (local) beam model has been shown. It is found out that length scale parameter is crucial in studying the stability behavior of the nanobeams.
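The power-law through-thickness variation mentioned above is commonly written as follows (a sketch of the standard rule of mixtures; the symbols, and the ceramic/metal constituent labels, are conventional assumptions rather than quotations from the paper):

```latex
% effective property P (Young's modulus, density, ...) across thickness h,
% with gradient index k and constituents P_c (ceramic) and P_m (metal):
P(z) = \left(P_c - P_m\right)\left(\frac{z}{h} + \frac{1}{2}\right)^{k} + P_m ,
\qquad -\tfrac{h}{2} \le z \le \tfrac{h}{2}
```

At k = 0 the beam is fully ceramic; as k grows, the metal phase dominates, which is why the gradient index appears as a key parameter in the buckling study.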
Web-Based Learning Environment: A Theory-Based Design Process for Development and Evaluation
ERIC Educational Resources Information Center
Nam, Chang S.; Smith-Jackson, Tonya L.
2007-01-01
Web-based courses and programs have increasingly been developed by many academic institutions, organizations, and companies worldwide due to their benefits for both learners and educators. However, many of the developmental approaches lack two important considerations needed for implementing Web-based learning applications: (1) integration of the…
The Circuit Theory Behind Coupled-Mode Magnetic Resonance-Based Wireless Power Transmission.
Kiani, Mehdi; Ghovanloo, Maysam
2012-09-01
Inductive coupling is a viable scheme to wirelessly energize devices with a wide range of power requirements from nanowatts in radio frequency identification tags to milliwatts in implantable microelectronic devices, watts in mobile electronics, and kilowatts in electric cars. Several analytical methods for estimating the power transfer efficiency (PTE) across inductive power transmission links have been devised based on circuit and electromagnetic theories by electrical engineers and physicists, respectively. However, a direct side-by-side comparison between these two approaches is lacking. Here, we have analyzed the PTE of a pair of capacitively loaded inductors via reflected load theory (RLT) and compared it with a method known as coupled-mode theory (CMT). We have also derived PTE equations for multiple capacitively loaded inductors based on both RLT and CMT. We have proven that both methods basically result in the same set of equations in steady state and either method can be applied for short- or midrange coupling conditions. We have verified the accuracy of both methods through measurements, and also analyzed the transient response of a pair of capacitively loaded inductors. Our analysis shows that the CMT is only applicable to coils with high quality factor (Q) and large coupling distance. It simplifies the analysis by reducing the order of the differential equations by half compared to the circuit theory. PMID:24683368
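For a two-coil link, reflected load theory treatments in this literature commonly express the PTE in terms of the coupling coefficient and quality factors (a minimal sketch; the function and variable names are assumptions for illustration):

```python
def pte_two_coil(k, q1, q2, q_load):
    # two-coil power transfer efficiency via reflected load theory;
    # q2l is the loaded quality factor of the secondary coil
    q2l = q2 * q_load / (q2 + q_load)
    chi = k * k * q1 * q2l          # link figure of merit
    return (chi / (1.0 + chi)) * (q2l / q_load)
```

The first factor is the fraction of primary-side power reflected into the secondary; the second is the fraction of secondary-side power delivered to the load, so the PTE rises monotonically with coupling k.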
Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne
2015-01-01
Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support. PMID:25778674
In-medium η′ mass and η′N interaction based on chiral effective theory
NASA Astrophysics Data System (ADS)
Sakai, Shuntaro; Jido, Daisuke
2013-12-01
The in-medium η′ mass and the η′N interaction are investigated in an effective theory based on the linear realization of the SU(3) chiral symmetry. We find that a large part of the η′ mass is generated by the spontaneous breaking of chiral symmetry through the U_A(1) anomaly. As a consequence of this observation, the η′ mass is reduced in nuclear matter where chiral symmetry is partially restored. In our model, the mass reduction is found to be 80 MeV at the saturation density. Estimating the η′N interaction based on the same effective theory, we find that the η′N interaction in the scalar channel is sufficiently attractive to form a bound state in the η′N system with a binding energy of several MeV. We discuss the origin of the attraction by emphasizing the special role of the σ meson in the linear sigma model for the mass generation of η′ and N.
Finding theory- and evidence-based alternatives to fear appeals: Intervention Mapping
Kok, Gerjo; Bartholomew, L Kay; Parcel, Guy S; Gottlieb, Nell H; Fernández, María E
2014-01-01
Fear arousal—vividly showing people the negative health consequences of life-endangering behaviors—is popular as a method to raise awareness of risk behaviors and to change them into health-promoting behaviors. However, most data suggest that, under conditions of low efficacy, the resulting reaction will be defensive. Instead of applying fear appeals, health promoters should identify effective alternatives to fear arousal by carefully developing theory- and evidence-based programs. The Intervention Mapping (IM) protocol helps program planners to optimize chances for effectiveness. IM describes the intervention development process in six steps: (1) assessing the problem and community capacities, (2) specifying program objectives, (3) selecting theory-based intervention methods and practical applications, (4) designing and organizing the program, (5) planning, adoption, and implementation, and (6) developing an evaluation plan. Authors who used IM indicated that it helped in bringing the development of interventions to a higher level. PMID:24811880
Design of Flexure-based Precision Transmission Mechanisms using Screw Theory
Hopkins, J B; Panas, R M
2011-02-07
This paper enables the synthesis of flexure-based transmission mechanisms that possess multiple decoupled inputs and outputs of any type (e.g. rotations, translations, and/or screw motions), which are linked by designer-specified transmission ratios. A comprehensive library of geometric shapes is utilized from which every feasible concept that possesses the desired transmission characteristics may be rapidly conceptualized and compared before an optimal concept is selected. These geometric shapes represent the rigorous mathematics of screw theory and uniquely link a body's desired motions to the flexible constraints that enable those motions. This paper's impact is most significant to the design of nano-positioners, microscopy stages, optical mounts, and sensors. A flexure-based microscopy stage was designed, fabricated, and tested to demonstrate the utility of the theory.
A theory-based logic model for innovation policy and evaluation.
Jordan, Gretchen B.
2010-04-01
Current policy and program rationale, objectives, and evaluation use a fragmented picture of the innovation process. This presents a challenge since in the United States officials in both the executive and legislative branches of government see innovation, whether that be new products or processes or business models, as the solution to many of the problems the country faces. The logic model is a popular tool for developing and describing the rationale for a policy or program and its context. This article sets out to describe generic logic models of both the R&D process and the diffusion process, building on existing theory-based frameworks. Then a combined, theory-based logic model for the innovation process is presented. Examples of the elements of the logic, each a possible leverage point or intervention, are provided, along with a discussion of how this comprehensive but simple model might be useful for both evaluation and policy development.
Nesman, Teresa M; Batsche, Catherine; Hernandez, Mario
2007-08-01
Latino student access to higher education has received significant national attention in recent years. This article describes a theory-based evaluation approach used with ENLACE of Hillsborough, a 5-year project funded by the W.K. Kellogg Foundation for the purpose of increasing Latino student graduation from high school and college. Theory-based evaluation guided planning, implementation as well as evaluation through the process of developing consensus on the Latino population of focus, adoption of culturally appropriate principles and values to guide the project, and identification of strategies to reach, engage, and impact outcomes for Latino students and their families. The approach included interactive development of logic models that focused the scope of interventions and guided evaluation designs for addressing three stages of the initiative. Challenges and opportunities created by the approach are discussed, as well as ways in which the initiative impacted Latino students and collaborating educational institutions. PMID:17689332
[Research on ECG signal analysis based on the cloudy model theory].
Li, Xin; Hong, Wenxue; Wang, Xiuqing; Wang, Huini
2011-02-01
The characteristics of the electrocardiogram (ECG) signal are fuzzy and random, which makes automatic analysis and diagnosis difficult. To solve this problem, an uncertainty transformation model, the cloud model, which fuses qualitative and quantitative information, was used to analyze the ECG signal. The model combines fuzzy and random characteristics, making it well suited to automatic ECG analysis and diagnosis. Based on the theory of cloud transform and comprehensive cloud, clustering of the ECG signal was performed. Furthermore, clinical expert experience was summarized as classification rules based on the theory. The experimental data were from the MIT/BIH database. The experimental results were close to those of the experts' analysis, and the qualitative-quantitative description agreed well with expert judgment. It is concluded that the method is an effective ECG signal analysis method. PMID:21485177
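The forward normal cloud generator at the heart of such cloud-model analyses is commonly described as below (a minimal sketch assuming the standard (Ex, En, He) parameterization of expectation, entropy, and hyper-entropy; not taken from this paper's implementation):

```python
import math
import random

def forward_cloud(ex, en, he, n=1000, seed=1):
    # forward normal cloud generator: each "drop" is a sample x with a
    # certainty degree mu, capturing both fuzziness (mu) and randomness (x)
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he)                            # per-drop entropy
        x = rng.gauss(ex, abs(en_i))                        # drop position
        mu = math.exp(-((x - ex) ** 2) / (2 * en_i ** 2))   # certainty degree
        drops.append((x, mu))
    return drops
```

Randomizing the per-drop entropy (via He) is what distinguishes the cloud model from a plain Gaussian membership function, and is what makes it suitable for signals that are simultaneously fuzzy and random.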
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn has been widely used in geometrical-optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and scalar ray theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random-surface microfacet theory is presented by combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized and unpolarized geometrical attenuation functions are given in separate expressions and are validated against experimental data from two samples. The modified polarized geometrical attenuation function achieves better physical rationality, improves the precision of the BRDF model, and widens its applicability to different polarizations. PMID:26368247
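Blinn's V-groove attenuation term, which the paper sets out to modify, is commonly written as the minimum of three ratios (a sketch; the argument names for the dot products of the normal, half-vector, view, and light directions are assumptions):

```python
def blinn_geometric_attenuation(n_dot_h, n_dot_v, n_dot_l, v_dot_h):
    # symmetrical V-groove masking/shadowing term used in Blinn-style BRDF models
    return min(1.0,
               2.0 * n_dot_h * n_dot_v / v_dot_h,   # masking of the viewer
               2.0 * n_dot_h * n_dot_l / v_dot_h)   # shadowing of the light
```

The term clamps at 1 for near-normal geometry and falls off at grazing view or light angles, which is exactly where the paper reports the unpolarized V-groove model becoming inaccurate.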
Microvibration Attenuation based on H∞/LPV Theory for High Stability Space Missions
NASA Astrophysics Data System (ADS)
Preda, Valentin; Cieslak, Jerome; Henry, David; Bennani, Samir; Falcoz, Alexandre
2015-11-01
This paper presents an LPV (Linear Parameter Varying) solution for a mixed passive-active architecture used to mitigate the microvibrations generated by reaction wheels in satellites. In particular, H∞/LPV theory is used to mitigate low-frequency disturbances, while the current baseline for high-frequency microvibration mitigation is based on elastomer materials. The issue of multiple harmonic microvibrations is also investigated. Simulation results from a test benchmark provided by Airbus Defence and Space demonstrate the potential of the proposed method.
Godin, Gaston; Bélanger-Gravel, Ariane; Eccles, Martin; Grimshaw, Jeremy
2008-01-01
Background There is an important gap between the implications of clinical research evidence and the routine clinical practice of healthcare professionals. Because individual decisions are often central to adoption of a clinical-related behaviour, more information about the cognitive mechanisms underlying behaviours is needed to improve behaviour change interventions targeting healthcare professionals. The aim of this study was to systematically review the published scientific literature about factors influencing health professionals' behaviours based on social cognitive theories. These theories refer to theories where individual cognitions/thoughts are viewed as processes intervening between observable stimuli and responses in real world situations. Methods We searched PsycINFO, MEDLINE, EMBASE, CINAHL, Index to Theses, ProQuest Dissertations and Theses, and Current Contents for articles published in English only. We included studies that aimed to predict healthcare professionals' intentions and behaviours with a clear specification of relying on a social cognitive theory. Information on percent of explained variance (R2) was used to compute the overall frequency-weighted mean R2 to evaluate the efficacy of prediction in several contexts and according to different methodological aspects. The cognitive factors most consistently associated with prediction of healthcare professionals' intention and behaviours were documented. Results Seventy-eight studies met the inclusion criteria. Among these studies, 72 provided information on the determinants of intention and 16 prospective studies provided information on the determinants of behaviour. The theory most often used as reference was the Theory of Reasoned Action (TRA) or its extension the Theory of Planned Behaviour (TPB). An overall frequency-weighted mean R2 of 0.31 was observed for the prediction of behaviour; 0.59 for the prediction of intention.
A number of moderators influenced the efficacy of prediction; frequency-weighted mean R2 varied from 0.001 to 0.58 for behaviour and 0.19 to 0.81 for intention. Conclusion Our results suggest that the TPB appears to be an appropriate theory to predict behaviour whereas other theories better capture the dynamic underlying intention. In addition, given the variations in efficacy of prediction, special care should be given to methodological issues, especially to better define the context of behaviour performance. PMID:18631386
Testing a Theory-Based Mobility Monitoring Protocol Using In-Home Sensors: A Feasibility Study
Reeder, Blaine; Chung, Jane; Lazar, Amanda; Joe, Jonathan; Demiris, George; Thompson, Hilaire J.
2014-01-01
Mobility is a key factor in the performance of many everyday tasks required for independent living as a person grows older. The purpose of this mixed methods study was to test a theory-based mobility monitoring protocol by comparing sensor-based measures to self-report measures of mobility and assessing the acceptability of in-home sensors with older adults. Standardized instruments to measure physical, psychosocial and cognitive parameters were administered to 8 community-dwelling older adults at baseline, 3 month and 6 month visits (examples: FES, GDS-SF, Mini-cog). Semi-structured interviews to characterize acceptability of the technology were conducted at 3 month and 6 month visits. Technical issues prevented comparison of sensor-based measures with self-report measures. In-home sensor technology for monitoring mobility is acceptable to older adults. Implementing our theory-based mobility monitoring protocol in a field study in the homes of older adults is a feasible undertaking but requires more robust technology for sensor-based measure validation. PMID:23938159
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases predictive performance of financial forecasting according to number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
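The peaks-over-threshold VaR that such EVT models ultimately compute is commonly written as below (a sketch assuming a generalized Pareto tail fitted above a threshold u with scale beta and shape xi; the parameter values in the usage note are illustrative, not from the paper):

```python
def gpd_var(u, beta, xi, n, n_u, q):
    # peaks-over-threshold VaR estimator: n observations, n_u exceedances of
    # threshold u, GPD scale beta and shape xi, confidence level q (xi != 0)
    return u + (beta / xi) * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)
```

For example, with u = 2, beta = 0.5, xi = 0.2, 100 exceedances out of 1000 observations, the 99% VaR comes out near 3.46, and raising q pushes the estimate further into the tail. In the paper's hybrid scheme, the wavelet decomposition supplies the threshold u before this formula is applied.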
Game Theory Based Dynamic Bit-Rate Adaptation for H.264 Scalable Video Transmission in 4G
Jagannatham, Aditya K.
Game Theory Based Dynamic Bit-Rate Adaptation for H.264 Scalable Video Transmission in 4G Wireless. Formulates a bit-rate adaptation game based on the quasi-concavity of the net video utility function and establishes the existence of a Nash equilibrium, for services such as video conferencing, interactive gaming and subscription-based broadcast.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1992-01-01
The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and sector-based approach are presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce the conservatism.
CDA6530: Performance Models of Computers and Networks
Zou, Cliff C.
Review of practical probability theory and several useful random processes; basic queuing theory; practical homework. The first half of the course is mainly devoted to background knowledge: probability, random processes, queuing theory. Programming projects: a simple project (individual work per student) and a complex project.
Størset, Elisabet; Holford, Nick; Hennig, Stefanie; Bergmann, Troels K; Bergan, Stein; Bremer, Sara; Åsberg, Anders; Midtvedt, Karsten; Staatz, Christine E
2014-01-01
Aims The aim was to develop a theory-based population pharmacokinetic model of tacrolimus in adult kidney transplant recipients and to externally evaluate this model and two previous empirical models. Methods Data were obtained from 242 patients with 3100 tacrolimus whole blood concentrations. External evaluation was performed by examining model predictive performance using Bayesian forecasting. Results Pharmacokinetic disposition parameters were estimated based on tacrolimus plasma concentrations, predicted from whole blood concentrations, haematocrit and literature values for tacrolimus binding to red blood cells. Disposition parameters were allometrically scaled to fat free mass. Tacrolimus whole blood clearance/bioavailability standardized to a haematocrit of 45% and a fat free mass of 60 kg was estimated to be 16.1 l h⁻¹ [95% CI 12.6, 18.0 l h⁻¹]. Tacrolimus clearance was 30% higher (95% CI 13, 46%) and bioavailability 18% lower (95% CI 2, 29%) in CYP3A5 expressers compared with non-expressers. An Emax model described decreasing tacrolimus bioavailability with increasing prednisolone dose. The theory-based model was superior to the empirical models during external evaluation, displaying a median prediction error of −1.2% (95% CI −3.0, 0.1%). Based on simulation, Bayesian forecasting led to 65% (95% CI 62, 68%) of patients achieving a tacrolimus average steady-state concentration within a suggested acceptable range. Conclusion A theory-based population pharmacokinetic model was superior to two empirical models for prediction of tacrolimus concentrations and seemed suitable for Bayesian prediction of tacrolimus doses early after kidney transplantation. PMID:25279405
Bao, Peng
2013-01-01
An interaction energy decomposition analysis method based on the block-localized wavefunction (BLW-ED) approach is described. The first main feature of the BLW-ED method is that it combines concepts of valence bond and molecular orbital theories such that the intermediate and physically intuitive electron-localized states are variationally optimized by self-consistent field calculations. Furthermore, the block-localization scheme can be used both in wave function theory and in density functional theory, providing a useful tool to gain insights on intermolecular interactions that would otherwise be difficult to obtain using the delocalized Kohn–Sham DFT. These features allow broad applications of the BLW method to energy decomposition (BLW-ED) analysis for intermolecular interactions. In this perspective, we outline theoretical aspects of the BLW-ED method, and illustrate its applications in hydrogen-bonding and π–cation intermolecular interactions as well as metal–carbonyl complexes. Future prospects on the development of a multistate density functional theory (MSDFT) are presented, making use of block-localized electronic states as the basis configurations. PMID:21369567
Ghobadi, Ahmadreza F.; Elliott, J. Richard
2013-12-21
In this work, we aim to develop a version of the Statistical Associating Fluid Theory (SAFT-γ) equation of state (EOS) that is compatible with united-atom force fields, rather than experimental data. We rely on the accuracy of the force fields to provide the relation to experimental data. Although our objective is a transferable theory of interfacial properties for soft and fused heteronuclear chains, we first clarify the details of the SAFT-γ approach in terms of site-based simulations for homogeneous fluids. We show that a direct comparison of the Helmholtz free energy to molecular simulation, in the framework of a third-order Weeks-Chandler-Andersen perturbation theory, leads to an EOS that takes force field parameters as input and reproduces simulation results for vapor-liquid equilibria (VLE) calculations. For example, the saturated liquid density and vapor pressure of n-alkanes ranging from methane to dodecane deviate from those of the Transferable Potential for Phase Equilibria (TraPPE) force field by about 0.8% and 4%, respectively. Similar agreement between simulation and theory is obtained for critical properties and the second virial coefficient. The EOS also reproduces simulation data of mixtures with about 5% deviation in bubble point pressure. Extension to inhomogeneous systems and united-atom site types beyond those used in the description of n-alkanes will be addressed in succeeding papers.
Theory of plasma contactors in ground-based experiments and low Earth orbit
NASA Technical Reports Server (NTRS)
Gerver, M. J.; Hastings, Daniel E.; Oberhardt, M. R.
1990-01-01
Previous theoretical work on plasma contactors as current collectors has fallen into two categories: collisionless double layer theory (describing space charge limited contactor clouds) and collisional quasineutral theory. Ground based experiments at low current are well explained by double layer theory, but this theory does not scale well to power generation by electrodynamic tethers in space, since very high anode potentials are needed to draw a substantial ambient electron current across the magnetic field in the absence of collisions (or effective collisions due to turbulence). Isotropic quasineutral models of contactor clouds, extending over a region where the effective collision frequency upsilon sub e exceeds the electron cyclotron frequency omega sub ce, have low anode potentials, but would collect very little ambient electron current, much less than the emitted ion current. A new model is presented, for an anisotropic contactor cloud oriented along the magnetic field, with upsilon sub e less than omega sub ce. The electron motion along the magnetic field is nearly collisionless, forming double layers in that direction, while across the magnetic field the electrons diffuse collisionally and the potential profile is determined by quasineutrality. Using a simplified expression for upsilon sub e due to ion acoustic turbulence, an analytic solution has been found for this model, which should be applicable to current collection in space. The anode potential is low and the collected ambient electron current can be several times the emitted ion current.
Nonlinear vibration of double layered viscoelastic nanoplates based on nonlocal theory
NASA Astrophysics Data System (ADS)
Wang, Yu; Li, Feng-Ming; Wang, Yi-Ze
2015-03-01
The nonlinear flexural vibration properties of double layered viscoelastic nanoplates are investigated based on nonlocal continuum theory. The von Kármán strain-displacement relation is employed to model the geometrical nonlinearity. Based on the classical plate theory, the formulations are derived by Hamilton's principle in conjunction with Eringen's nonlocal elasticity theory, and are further discretized by the Galerkin method. A coordinate transformation is adopted to obtain the nonlinear governing equations of motion in the modal coordinate system. On the basis of these equations, the frequency responses of double layered nanoplates with simply supported and clamped boundary conditions are derived by the method of multiple scales. The influences of small scale and other structural parameters (e.g. the aspect ratio of the plate, the van der Waals (vdW) interaction and the viscosity of the plate) on the nonlinear vibration characteristics are discussed. From the results, the vdW interaction has obvious effects on the nonlinear frequency corresponding to the second nonlinear normal mode (NNM), and the vdW forces between the plates also preclude internal resonance. The influence of the elastic matrix is also discussed. Hardening nonlinearity is observed for the primary resonance. Additionally, some interesting phenomena different from the linear vibration are observed.
An optimization program based on the method of feasible directions: Theory and users guide
NASA Technical Reports Server (NTRS)
Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.
1994-01-01
The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible direction was developed in the 1960's, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed focusing on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.
Paying for Express Checkout: Competition and Price Discrimination in Multi-Server Queuing Systems
Deck, Cary; Kimbrough, Erik O.; Mongrain, Steeve
2014-01-01
We model competition between two firms selling identical goods to customers who arrive in the market stochastically. Shoppers choose where to purchase based upon both price and the time cost associated with waiting for service. One seller provides two separate queues, each with its own server, while the other seller has a single queue and server. We explore the market impact of the multi-server seller engaging in waiting-cost-based price discrimination by charging a premium for express checkout. Specifically, we analyze this situation computationally and through the use of controlled laboratory experiments. We find that this form of price discrimination is harmful to sellers and beneficial to consumers. When the two-queue seller offers express checkout for impatient customers, the single queue seller focuses on the patient shoppers thereby driving down prices and profits while increasing consumer surplus. PMID:24667809
NASA Astrophysics Data System (ADS)
Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.
2012-01-01
Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information that leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns.
Moderate spatial scale effects on streamflow patterns are observed with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when the information theory-based methods are used for performance evaluation and comparison of hydrological models.
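The binary-encoding idea behind the information content measures can be illustrated with a small sketch. The median threshold, word length, and the synthetic "precipitation-like" and "streamflow-like" series below are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np
from collections import Counter

def metric_entropy(series, word_length=4):
    """Entropy per symbol of a median-thresholded binary encoding, in [0, 1]."""
    bits = (np.asarray(series) > np.median(series)).astype(int)
    words = ["".join(map(str, bits[i:i + word_length]))
             for i in range(len(bits) - word_length + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / word_length)

rng = np.random.default_rng(1)
noise = rng.random(2000)                                      # "precipitation-like"
smooth = np.convolve(noise, np.ones(20) / 20, mode="valid")   # filtered, "streamflow-like"
print(metric_entropy(noise), metric_entropy(smooth))
```

The filtered series yields the lower entropy, mirroring the paper's observation that watersheds act as filters of the precipitation signal.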
Formula for the rms blur circle radius of Wolter telescope based on aberration theory
NASA Technical Reports Server (NTRS)
Shealy, David L.; Saha, Timo T.
1990-01-01
A formula for the rms blur circle for Wolter telescopes has been derived using the transverse ray aberration expressions of Saha (1985), Saha (1984), and Saha (1986). The resulting formula for the rms blur circle radius over an image plane and a formula for the surface of best focus based on third-, fifth-, and seventh-order aberration theory predict results in good agreement with exact ray tracing. It has also been shown that one of the two terms in the empirical formula of VanSpeybroeck and Chase (1972) for the rms blur circle radius of a Wolter I telescope can be justified by the aberration theory results. Numerical results are given comparing the rms blur radius and the surface of best focus vs the half-field angle computed by skew ray tracing and from analytical formulas for grazing incidence Wolter I-II telescopes and a normal incidence Cassegrain telescope.
NASA Astrophysics Data System (ADS)
Bekhtereva, E. S.; Litvinovskaya, A. G.; Konov, I. A.; Gromova, O. V.; Chertavskikh, Yu. V.; Tse, Yang Fang; Ulenikov, O. N.
2015-08-01
Based on operator perturbation theory, the form of the effective Hamiltonian of a quantum system is retrieved, with allowance for corrections of arbitrary order, for quantum-mechanical problems whose perturbation operator depends not only on the same coordinates as the zero-order operator but also on an arbitrary set of other coordinates whose derivative operators need not commute with each other (recurrence formulas for corrections of any order of the operator perturbation theory are presented in their most general form). The general results obtained allow the special features of the effective operators of any polyatomic molecule to be investigated. As a first step, an arbitrary diatomic molecule is investigated, and isotopic relations among different spectroscopic parameters are derived for the parent molecule and its various isotopic modifications.
Characterization of degeneration process in combustion instability based on dynamical systems theory
NASA Astrophysics Data System (ADS)
Gotoda, Hiroshi; Okuno, Yuta; Hayashi, Kenta; Tachibana, Shigeru
2015-11-01
We present a detailed study on the characterization of the degeneration process in combustion instability based on dynamical systems theory. We deal with combustion instability in a lean premixed-type gas-turbine model combustor, one of the fundamentally and practically important combustion systems. The dynamic behavior of combustion instability in close proximity to lean blowout is dominated by a stochastic process and transits to periodic oscillations created by thermoacoustic combustion oscillations via chaos with increasing equivalence ratio [Chaos 21, 013124 (2011), 10.1063/1.3563577; Chaos 22, 043128 (2012), 10.1063/1.4766589]. Thermoacoustic combustion oscillations degenerate with a further increase in the equivalence ratio, and the dynamic behavior leads to chaotic fluctuations via quasiperiodic oscillations. The concept of dynamical systems theory presented here allows us to clarify the nonlinear characteristics hidden in complex combustion dynamics.
Control theory based airfoil design for potential flow and a finite volume discretization
NASA Technical Reports Server (NTRS)
Reuther, J.; Jameson, A.
1994-01-01
This paper describes the implementation of optimization techniques based on control theory for airfoil design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for two-dimensional profiles in which the shape is determined by a conformal transformation from a unit circle, and the control is the mapping function. The goal of our present work is to develop a method which does not depend on conformal mapping, so that it can be extended to treat three-dimensional problems. Therefore, we have developed a method which can address arbitrary geometric shapes through the use of a finite volume method to discretize the potential flow equation. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented, where both target speed distributions and minimum drag are used as objective functions.
A theory-based evaluation of a community-based funding scheme in a disadvantaged suburban city area.
Hickey, Gráinne; McGilloway, Sinead; O'Brien, Morgan; Leckey, Yvonne; Devlin, Maurice
2015-10-01
Community-driven development (CDD) initiatives frequently involve funding schemes which are aimed at channelling financial investment into local need and fostering community participation and engagement. This exploratory study examined, through a program theory approach, the design and implementation of a small-scale, community-based fund in Ireland. Observations, documentary analysis, interviews and group discussions with 19 participants were utilized to develop a detailed understanding of the program mechanisms, activities and processes, as well as the experiences of key stakeholders engaged with the funding scheme and its implementation. The findings showed that there were positive perceptions of the scheme and its function within the community. Overall, the availability of funding was perceived by key stakeholders as being beneficial. However, there were concerns over the accessibility of the scheme for more marginalized members of the community, as well as dissatisfaction with the openness and transparency surrounding funding eligibility. Lessons for the implementation of small-scale CDD funds are elaborated and the utility of program theory approaches for evaluators and planners working with programs that fund community-based initiatives is outlined. PMID:25933408
Philosophy of the Spike: Rate-Based vs. Spike-Based Theories of the Brain
Brette, Romain
2015-01-01
Does the brain use a firing rate code or a spike timing code? Considering this controversial question from an epistemological perspective, I argue that progress has been hampered by its problematic phrasing. It takes the perspective of an external observer looking at whether those two observables vary with stimuli, and thereby misses the relevant question: which one has a causal role in neural activity? When rephrased in a more meaningful way, the rate-based view appears as an ad hoc methodological postulate, one that is practical but with virtually no empirical or theoretical support. PMID:26617496
Volume Averaging Theory (VAT) based modeling and closure evaluation for fin-and-tube heat exchangers
NASA Astrophysics Data System (ADS)
Zhou, Feng; Catton, Ivan
2012-10-01
A fin-and-tube heat exchanger was modeled based on Volume Averaging Theory (VAT) in such a way that the details of the original structure were replaced by their averaged counterparts, so that the VAT based governing equations can be efficiently solved for a wide range of parameters. To complete the VAT based model, proper closure is needed, which is related to a local friction factor and a heat transfer coefficient of a Representative Elementary Volume (REV). The terms in the closure expressions are complex, and relating experimental data to the closure terms is sometimes difficult. In this work we use CFD to evaluate the rigorously derived closure terms over one of the selected REVs. The objective is to show how heat exchangers can be modeled as a porous media and how CFD can be used in place of a detailed, often formidable, experimental effort to obtain closure for the model.
Josephson current in Fe-based superconducting junctions: Theory and experiment
NASA Astrophysics Data System (ADS)
Burmistrova, A. V.; Devyatov, I. A.; Golubov, Alexander A.; Yada, Keiji; Tanaka, Yukio; Tortello, M.; Gonnelli, R. S.; Stepanov, V. A.; Ding, Xiaxin; Wen, Hai-Hu; Greene, L. H.
2015-06-01
We present a theory of the dc Josephson effect in contacts between Fe-based and spin-singlet s-wave superconductors. The method is based on the calculation of the temperature Green's function in the junction within the tight-binding model. We calculate the phase dependencies of the Josephson current for different orientations of the junction relative to the crystallographic axes of the Fe-based superconductor. Further, we consider the dependence of the Josephson current on the thickness of the insulating layer and on temperature. Experimental data for PbIn/Ba1−xKx(FeAs)2 point-contact Josephson junctions are consistent with theoretical predictions for s± symmetry of the order parameter in this material. The proposed method can be further applied to calculations of the dc Josephson current in contacts with other new unconventional multiorbital superconductors, such as Sr2RuO4 and the superconducting topological insulator CuxBi2Se3.
ERIC Educational Resources Information Center
Sung, Dia; You, Yeongmahn; Song, Ji Hoon
2008-01-01
The purpose of this research is to explore the possibility of viable learning organizations based on identifying viable organizational learning mechanisms. Two theoretical foundations, complex system theory and viable system theory, have been integrated to provide the rationale for building the sustainable organizational learning mechanism. The…
ERIC Educational Resources Information Center
Colakoglu, Ozgur M.; Akdemir, Omur
2012-01-01
The ARCS Motivation Theory was proposed to guide instructional designers and teachers who develop their own instruction to integrate motivational design strategies into the instruction. There is a lack of literature supporting the idea that instruction for blended courses if designed based on the ARCS Motivation Theory provides different…
ERIC Educational Resources Information Center
Barnhardt, Bradford; Ginns, Paul
2014-01-01
This article orients a recently proposed alienation-based framework for student learning theory (SLT) to the empirical basis of the approaches to learning perspective. The proposed framework makes new macro-level interpretations of an established micro-level theory, across three levels of interpretation: (1) a context-free psychological state…
How Does an Activity Theory Model Help to Know Better about Teaching with Electronic-Exercise-Bases?
ERIC Educational Resources Information Center
Abboud-Blanchard, Maha; Cazes, Claire
2012-01-01
The research presented in this paper relies on Activity Theory and particularly on Engestrom's model, to better understand the use of Electronic-Exercise-Bases (EEB) by mathematics teachers. This theory provides a holistic approach to illustrate the complexity of the EEB integration. The results highlight reasons and ways of using EEB and show…
Merhav, Neri
Data Processing Inequalities Based on a Certain Structured Class of Information Measures with Application to Estimation Theory. Neri Merhav, Department of Electrical Engineering, Technion - Israel Institute of Technology. Sets the stage for the separation theorem of Information Theory: when a source with rate-distortion function R...
Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
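As a rough illustration of sampling-based propagation of an evidence-theory (Dempster-Shafer) input structure, the sketch below estimates belief and plausibility of an event about a model output by naively sampling each focal interval. The model, intervals, and masses are hypothetical, not the strategy described in the presentation:

```python
import numpy as np

def evidence_propagation(model, focal_intervals, masses, y_threshold, n_samples=200):
    """Belief/plausibility that model(x) <= y_threshold, by naive sampling.

    Belief sums the masses of focal intervals whose sampled output range lies
    entirely below the threshold; plausibility sums those that can lie below it.
    """
    rng = np.random.default_rng(42)
    belief, plausibility = 0.0, 0.0
    for (lo, hi), m in zip(focal_intervals, masses):
        y = model(rng.uniform(lo, hi, n_samples))  # sample over the focal element
        if y.max() <= y_threshold:
            belief += m
        if y.min() <= y_threshold:
            plausibility += m
    return belief, plausibility

# Hypothetical Dempster-Shafer structure on an uncertain input, model y = x^2
intervals = [(0.0, 1.0), (0.5, 2.0), (1.5, 3.0)]
masses = [0.5, 0.3, 0.2]                           # basic probability assignment
bel, pl = evidence_propagation(lambda x: x ** 2, intervals, masses, y_threshold=4.0)
print(bel, pl)
```

The [belief, plausibility] interval is the less restrictive uncertainty statement the abstract refers to; a probabilistic analysis would collapse it to a single number.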
ERIC Educational Resources Information Center
Al-Amri, Mohammed
2010-01-01
Discipline-Based Art Education (DBAE), a theory developed in the USA, has been influential but also used in Art Education institutions world-wide. One of its stated goals was to develop the quality of teaching art education. Today, it is used as a theory for identifying and assessing good practices in the field of Art Education. The purpose of…
A Monte Carlo exploration of threefold base geometries for 4d F-theory vacua
Washington Taylor; Yi-Nan Wang
2015-11-09
We use Monte Carlo methods to explore the set of toric threefold bases that support elliptic Calabi-Yau fourfolds for F-theory compactifications to four dimensions, and study the distribution of geometrically non-Higgsable gauge groups, matter, and quiver structure. We estimate the number of distinct threefold bases in the connected set studied to be $\\sim { 10^{48}}$. The distribution of bases peaks around $h^{1, 1}\\sim 82$. All bases encountered after "thermalization" have some geometric non-Higgsable structure. We find that the number of non-Higgsable gauge group factors grows roughly linearly in $h^{1,1}$ of the threefold base. Typical bases have $\\sim 6$ isolated gauge factors as well as several larger connected clusters of gauge factors with jointly charged matter. Approximately 76% of the bases sampled contain connected two-factor gauge group products of the form SU(3)$\\times$SU(2), which may act as the non-Abelian part of the standard model gauge group. SU(3)$\\times$SU(2) is the third most common connected two-factor product group, following SU(2)$\\times$SU(2) and $G_2\\times$SU(2), which arise more frequently.
Wang, Huaqing; Chen, Peng
2009-01-01
This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021
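The paper's information-wave construction is not specified in the abstract, but the conventional Hilbert-transform envelope detection it is compared against can be sketched. The sampling rate, resonance frequency and defect frequency below are hypothetical values for a simulated outer-race fault:

```python
import numpy as np
from scipy.signal import hilbert

fs = 12000                              # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
fault_freq = 105.0                      # hypothetical outer-race defect frequency (Hz)

# Simulated fault signal: a 3 kHz resonance amplitude-modulated at fault_freq
carrier = np.sin(2 * np.pi * 3000 * t)
modulation = 1 + 0.8 * np.sign(np.sin(2 * np.pi * fault_freq * t))
signal = modulation * carrier + 0.1 * np.random.default_rng(2).standard_normal(t.size)

# Envelope via the analytic signal, then the envelope spectrum
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

mask = freqs < 500                      # defect frequencies lie well below the resonance
peak = freqs[mask][np.argmax(spectrum[mask])]
print(peak)                             # close to fault_freq
```

The dominant envelope-spectrum peak recovers the modulation (defect) frequency; the paper's claim is that its information-wave difference spectrum separates such peaks more clearly than this baseline.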
Constraints on Neutron Star Radii Based on Chiral Effective Field Theory Interactions
Hebeler, K.; Lattimer, J. M.; Pethick, C. J.; Schwenk, A.
2010-10-15
We show that microscopic calculations based on chiral effective field theory interactions constrain the properties of neutron-rich matter below nuclear densities to a much higher degree than is reflected in commonly used equations of state. Combined with observed neutron star masses, our results lead to a radius R = 9.7-13.9 km for a 1.4 M⊙ star, where the theoretical range is due, in about equal amounts, to uncertainties in many-body forces and to the extrapolation to high densities.
Reitberg, D P; Smith, I L; Love, S J; Lewin, H M; Schentag, J J
1985-02-01
A model-independent program for pharmacokinetic analyses based on statistical moment theory is presented and demonstrated. The program uses an inexpensive and portable TI-59; a PC-100A printer adds convenience but is optional. The program may be used in analysis of blood, serum, or plasma concentration vs. time curves originating from iv, im, po, sl, or sc administration. Drug input can be zero or first order; both single-dose and multiple-dose steady-state conditions can be evaluated. A comparison between results generated using moment analysis and traditional two-compartment nonlinear regression showed excellent agreement. PMID:3838276
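The statistical-moment core of such a program (AUC, AUMC, and mean residence time via the trapezoidal rule) can be sketched in a few lines. The one-compartment test curve below is synthetic, and tail extrapolation beyond the last sample is omitted:

```python
import numpy as np

def moment_analysis(times, conc):
    """AUC, AUMC and MRT from a concentration-time curve by the trapezoidal rule."""
    times = np.asarray(times, float)
    conc = np.asarray(conc, float)
    dt = np.diff(times)
    auc = float(np.sum((conc[1:] + conc[:-1]) / 2 * dt))    # area under C(t)
    tc = times * conc
    aumc = float(np.sum((tc[1:] + tc[:-1]) / 2 * dt))       # area under t*C(t)
    return auc, aumc, aumc / auc                            # MRT = AUMC / AUC

# Synthetic one-compartment IV bolus curve: C(t) = 10 * exp(-0.5 t)
t = np.linspace(0, 24, 241)
c = 10 * np.exp(-0.5 * t)
auc, aumc, mrt = moment_analysis(t, c)
print(round(auc, 2), round(mrt, 2))   # analytically, AUC -> 20 and MRT -> 1/k = 2
```

Being model-independent, the same computation applies to oral or intramuscular data; only the interpretation of MRT changes with the input process.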
Superior coexistence: systematically regulating land subsidence based on set pair theory
NASA Astrophysics Data System (ADS)
Chen, Y.; Gong, S.-L.
2015-11-01
Anthropogenic land subsidence is an environmental side effect of exploring and using natural resources in the process of economic development. The key to controlling land subsidence is the cooperation and superior coexistence of economic development, the exploration and use of natural resources, and geological environmental safety. Using the theory and method of set pair analysis (SPA), this article anatomises the factors, effects, and transformation of land subsidence. Based on the principle of superior coexistence, this paper proposes a technical approach to systematically regulating land subsidence, in order to improve the prevention and control of geological hazards.
Unique laminar-flow stability limit based on shallow-water theory
Chen, Cheng-lung
1993-01-01
Two approaches are generally taken in deriving the stability limit of the Froude number (Fs) for laminar sheet flow. The first approach uses the Orr-Sommerfeld equation, while the second uses the cross-section-averaged equations of continuity and motion. Because both approaches are based on shallow-water theory, the values of Fs obtained from them should be identical, yet in the literature they are not. This suggests that a defect exists in at least one of the two approaches. After examining the governing equations used in both approaches, one finds that the existing cross-section-averaged equation of motion is dependent on the frame of reference.
Risk Evaluation of Bogie System Based on Extension Theory and Entropy Weight Method
Du, Yanping; Zhang, Yuan; Zhao, Xiaogang; Wang, Xiaohui
2014-01-01
A bogie system is the key equipment of railway vehicles, yet rigorous practical evaluation of bogies is still a challenge, and in practice there is an overreliance on part-specific experiments. In the present work, a risk evaluation index system for a bogie system has been established based on inspection data and experts' evaluations. Then, considering both quantitative and qualitative aspects, the risk state of a bogie system has been evaluated using extension theory and the entropy weight method. Finally, the method has been applied to the bogie systems of four different samples. Results show that this method can accurately assess the risk state of a bogie system. PMID:25574159
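The entropy weight method referenced here is a standard construction: indicators whose values differ more across the evaluated samples carry more information and therefore receive larger weights. A minimal sketch (the data matrix is hypothetical, not the paper's inspection data):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: `matrix[i][j]` is the (non-negative, already
    normalized benefit-type) value of indicator j for sample i. Indicators
    with more variation across samples get larger weights."""
    n, m = len(matrix), len(matrix[0])
    entropies = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        s = sum(col)
        p = [v / s for v in col]
        # Shannon entropy of the column, normalized to [0, 1] by ln(n)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        entropies.append(e)
    total = sum(1 - e for e in entropies)
    return [(1 - e) / total for e in entropies]

data = [[0.8, 0.2, 30],
        [0.6, 0.2, 10],
        [0.9, 0.2, 50]]
print(entropy_weights(data))
```

The constant second indicator carries no discriminating information across samples, so its weight collapses to (essentially) zero.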
NASA Astrophysics Data System (ADS)
Yu, Xudong; Long, Xingwu; Wei, Guo; Li, Geng; Qu, Tianliang
2015-04-01
A finite element model of the hemispherical resonator gyro (HRG) is established, and its natural frequencies and vibration modes are investigated. The matrix perturbation technique of the random finite element method is first introduced to analyze the statistical characteristics of the natural frequencies of the HRG. The influences of random material and dimensional parameters on the natural frequencies are quantitatively described based on probability theory. Statistical expressions for the random parameters are given, and the influences of three key parameters on the natural frequency are pointed out. These results are important for the design and improvement of a high-accuracy HRG.
Jiang, Ping; Yang, Huajun; Mao, Shengqian
2015-10-01
A Cassegrain antenna system and an optical fiber coupling system consisting of a plano-concave lens and a plano-convex lens are designed based on the vector theory of reflection and refraction, so as to improve the transmission performance of the optical antenna and fiber coupling system. Three-dimensional ray-tracing simulations are performed, and the results of the aberration calculations and experimental tests show that the aberrations caused by on-axis defocusing, off-axis defocusing, and deflection of the receiving antenna can be well corrected by the optical fiber coupling system. PMID:26480125
Evaluation of a preschool nutrition education program based on the theory of multiple intelligences.
Cason, K L
2001-01-01
This report describes the evaluation of a preschool nutrition education program based on the theory of multiple intelligences. Forty-six nutrition educators provided a series of 12 lessons to 6102 preschool-age children. The program was evaluated using a pretest/post-test design to assess differences in fruit and vegetable identification, healthy snack choices, willingness to taste foods, and eating behaviors. Subjects showed significant improvement in food identification and recognition, healthy snack identification, willingness to taste foods, and frequency of fruit, vegetable, meat, and dairy consumption. The evaluation indicates that the program was an effective approach for educating preschool children about nutrition. PMID:11953232
Kinetic theory based wave-particle splitting scheme for Euler equations
NASA Astrophysics Data System (ADS)
Rao, S. V. R.; Deshpande, S. M.
1992-11-01
A new upwind wave-particle splitting scheme is developed, based on the connection between the kinetic theory of gases and the Euler equations and using the concept of thermal velocity. The new upwind method is applied to the standard one-dimensional shock tube problem and to the problem of two-dimensional shock reflection from a flat plate. Results for the two-dimensional problem show that the new scheme is much less dissipative than the kinetic flux vector splitting scheme of Deshpande (1986) and Mandal (1989).
Data Collection Method for Mobile Sensor Networks Based on the Theory of Thermal Fields
Macuha, Martin; Tariq, Muhammad; Sato, Takuro
2011-01-01
Many sensor applications are aimed at mobile objects, for which conventional routing approaches to data delivery might fail. Such applications include habitat monitoring, human probes, and vehicular sensing systems. This paper targets such applications and proposes a lightweight, proactive, distributed data collection scheme for Mobile Sensor Networks (MSN) based on the theory of thermal fields. By a proper mapping, we create a distribution function that takes the characteristics of a sensor node into account. We show the functionality of the proposed forwarding method when adapted to the energy of the sensor node. We also propose an enhancement to maximize the lifetime of the sensor nodes. We thoroughly evaluate the proposed solution and discuss the trade-offs. PMID:22164011
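The forwarding step in a thermal-field scheme of this kind can be sketched greedily: relay the packet toward the neighbor with the highest field value, optionally scaled by residual energy so depleted nodes are avoided. The field construction by heat-diffusion mapping is the paper's contribution and is not reproduced here; all names below are hypothetical:

```python
def next_hop(neighbors, field, energy):
    """Greedy forwarding on a 'thermal field': pick the neighbor with the
    highest field value (hotter = closer to the sink), weighted by residual
    energy so nearly depleted nodes are bypassed. Illustrative sketch only."""
    candidates = [n for n in neighbors if n in field]
    if not candidates:
        return None  # no neighbor with a known field value
    return max(candidates, key=lambda n: field[n] * energy.get(n, 1.0))

hop = next_hop(["a", "b", "c"], {"a": 0.9, "b": 0.8}, {"a": 0.1, "b": 1.0})
print(hop)  # "b": node "a" is hotter but nearly depleted
```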
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is treated as a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving the chemical equilibrium equations. A chemical equilibrium code is developed based on the theory proposed in this article and is then applied to the following typical calculations: (i) detonation parameters of explosives, for which the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with experimental values; and (ii) the isentropic unloading line of the RDX explosive, starting from the CJ point, for which comparison with the JWL EOS shows that the gamma calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team
2013-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is treated as a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of the detonation products by solving the chemical equilibrium equations. A chemical equilibrium code is developed based on the theory proposed in this article and is then applied to the following typical calculations: (i) detonation parameters of explosives, for which the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with experimental values; and (ii) the isentropic unloading line of the RDX explosive, starting from the CJ point, for which comparison with the JWL EOS shows that the gamma calculated with the present theory decreases monotonically, whereas a double-peak phenomenon appears with the JWL EOS.
A queuing model for designing multi-modality buried target detection systems: preliminary results
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Morton, Kenneth D.; Collins, Leslie M.; Torrione, Peter A.
2015-05-01
Many remote sensing modalities have been developed for buried target detection, each one offering its own relative advantages over the others. As a result there has been interest in combining several modalities into a single detection platform that benefits from the advantages of each constituent sensor, without suffering from their weaknesses. Traditionally this involves collecting data continuously on all sensors and then performing data, feature, or decision level fusion. While this is effective for lowering false alarm rates, this strategy neglects the potential benefits of a more general system-level fusion architecture. Such an architecture can involve dynamically changing which modalities are in operation. For example, a large standoff modality such as a forward-looking infrared (FLIR) camera can be employed until an alarm is encountered, at which point a high performance (but short standoff) sensor, such as ground penetrating radar (GPR), is employed. Because the system is dynamically changing its rate of advance and sensors, it becomes difficult to evaluate the expected false alarm rate and advance rate. In this work, a probabilistic model is proposed that can be used to estimate these quantities based on a provided operating policy. In this model the system consists of a set of states (e.g., sensors employed) and conditions encountered (e.g., alarm locations). The predictive accuracy of the model is evaluated using a collection of collocated FLIR and GPR data and the results indicate that the model is effective at predicting the desired system metrics.
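The core trade-off such a model quantifies can be sketched with a simple renewal-reward argument: each alarm encountered per meter forces a slow GPR interrogation, lowering the expected advance rate but also lowering the delivered false alarm rate. This is only a two-state caricature of the paper's state/condition model, with made-up parameter values:

```python
def system_metrics(v_flir, flir_alarms_per_m, p_gpr_false_alarm, t_gpr):
    """Renewal-reward estimate for a two-sensor system: the platform sweeps
    with the FLIR at v_flir (m/s); each FLIR alarm (flir_alarms_per_m per
    meter, true plus false) halts the platform for a GPR interrogation of
    t_gpr seconds; the GPR passes only a fraction p_gpr_false_alarm of the
    clutter alarms. Hypothetical parameters, not the paper's data."""
    time_per_meter = 1.0 / v_flir + flir_alarms_per_m * t_gpr
    advance_rate = 1.0 / time_per_meter              # expected overall m/s
    fa_per_meter = flir_alarms_per_m * p_gpr_false_alarm
    return advance_rate, fa_per_meter

rate, fa = system_metrics(v_flir=2.0, flir_alarms_per_m=0.05,
                          p_gpr_false_alarm=0.1, t_gpr=30.0)
print(rate, fa)  # slower than the 2 m/s sweep, fewer false alarms than FLIR alone
```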
Vervet monkeys solve a multiplayer "forbidden circle game" by queuing to learn restraint.
Fruteau, Cécile; van Damme, Eric; Noë, Ronald
2013-04-22
In social dilemmas, the ability of individuals to coordinate their actions is crucial to reach group optima. Unless exacted by power or force, coordination in humans relies on a common understanding of the problem, which is greatly facilitated by communication. The lack of means of consultation about the nature of the problem and how to solve it may explain why multiagent coordination in nonhuman vertebrates has commonly been observed only when multiple individuals react instantaneously to a single stimulus, either natural or experimentally simulated, for example a predator, a prey, or a neighboring group. Here we report how vervet monkeys solved an experimentally induced coordination problem. In each of three groups, we trained a low-ranking female, the "provider," to open a container holding a large amount of food, which the providers only opened when all individuals dominant to them ("dominants") stayed outside an imaginary "forbidden circle" around it. Without any human guidance, the dominants learned restraint one by one, in hierarchical order from high to low. Once all dominants showed restraint immediately at the start of the trial, the providers opened the container almost instantly, saving all individuals opportunity costs due to lost foraging time. Solving this game required trial-and-error learning based on individual feedback from the provider to each dominant, and all dominants being patient enough to wait outside the circle while others learned restraint. Communication, social learning, and policing by high-ranking animals played no perceptible role. PMID:23541727
Conductance of three-terminal molecular bridge based on tight-binding theory
NASA Astrophysics Data System (ADS)
Wang, Li-Guang; Li, Yong; Yu, Ding-Wen; Katsunori, Tagami; Masaru, Tsukada
2005-05-01
The quantum transmission characteristics of a three-benzene-ring nano-molecular bridge are investigated theoretically using a Green's function approach based on tight-binding theory with only a π orbital per carbon atom site. The transmission probabilities for electrons transported through the molecular bridge from one terminal to the other two terminals are obtained. The electronic current distributions inside the molecular bridge are calculated and shown graphically using the current density method based on the Fisher-Lee formula at the energy points E = ±0.42, ±1.06, and ±1.5, where the transmission spectra exhibit peaks. We find that the transmission spectra depend strongly on the incident electron energy and the molecular levels, and the current distributions agree well with a Kirchhoff-type quantum current conservation law.
Unit Template Synchronous Reference Frame Theory Based Control Algorithm for DSTATCOM
NASA Astrophysics Data System (ADS)
Bangarraju, J.; Rajagopal, V.; Jayalaxmi, A.
2014-04-01
This article proposes new, simplified unit templates in place of the standard phase-locked loop (PLL) for the synchronous reference frame theory (SRFT) control algorithm. The extraction of the synchronizing components (sinθ and cosθ) for the Park and inverse Park transformations using a standard PLL takes more execution time, which delays the generation of the reference source currents. The standard PLL not only takes more execution time but also increases the reactive power burden on the distribution static compensator (DSTATCOM). This work proposes a unit-template-based SRFT control algorithm for a four-leg insulated-gate-bipolar-transistor-based voltage source converter for DSTATCOM in distribution systems, which reduces the execution time and the reactive power burden on the DSTATCOM. The proposed DSTATCOM suppresses harmonics and regulates the terminal voltage along with neutral current compensation. The DSTATCOM in distribution systems with the proposed control algorithm is modeled and simulated in MATLAB using the Simulink and SimPowerSystems toolboxes.
GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.
Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain
2015-01-01
Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/. PMID:26313379
NASA Astrophysics Data System (ADS)
Miyazaki, T.; Bowler, D. R.; Choudhury, R.; Gillan, M. J.
2004-10-01
Electronic structure methods based on density-functional theory, pseudopotentials, and local-orbital basis sets offer a hierarchy of techniques for modeling complex condensed-matter systems with a wide range of precisions and computational speeds. We analyze the relationships between the algorithms for atomic forces in this hierarchy of techniques, going from empirical tight-binding through ab initio tight-binding to full ab initio. The analysis gives a unified overview of the force algorithms as applied within techniques based either on diagonalization or on linear-scaling approaches. The use of these force algorithms is illustrated by practical calculations with the CONQUEST code, in which different techniques in the hierarchy are applied in a concerted manner.
Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Xu, Kun
1998-01-01
A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD flux vector splitting scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and its accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on possible modifications of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for non-strictly hyperbolic systems.
Re-Examining of Moffitt’s Theory of Delinquency through Agent Based Modeling
Leaw, Jia Ning; Ang, Rebecca P.; Huan, Vivien S.; Chan, Wei Teng; Cheong, Siew Ann
2015-01-01
Moffitt’s theory of delinquency suggests that at-risk youths can be divided into two groups, the adolescence-limited group and the life-course-persistent group, predetermined at a young age, and social interactions between these two groups become important during the adolescent years. We built an agent-based model based on the microscopic interactions Moffitt described: (i) a maturity gap that dictates (ii) the cost and reward of antisocial behavior, and (iii) agents imitating the antisocial behaviors of others more successful than themselves, to find indeed the two groups emerging in our simulations. Moreover, through an intervention simulation where we moved selected agents from one social network to another, we also found that the social network plays an important role in shaping the life course outcome. PMID:26062022
Sundararajan, Mahesh; Sinha, Vivek; Bandyopadhyay, Tusar; Ghosh, Swapan K
2012-05-01
The feasibility of using the cucurbituril host molecule as a candidate actinyl cation binder is investigated through density functional theory based calculations. Various possible binding sites of the cucurbit[5]uril host molecule for uranyl are analyzed, and based on binding energy evaluations, η5-binding is predicted to be favored. For this coordination, the structures, vibrational spectra, and binding energies are evaluated for the binding of three actinyls in the hexavalent and pentavalent oxidation states with functionalized cucurbiturils. Functionalizing cucurbituril with methyl and cyclohexyl groups increases the binding affinities of the actinyls, whereas fluorination decreases the binding affinities compared with the native host molecule. Surprisingly, hydroxylation of the host molecule does not distinguish the oxidation states of the three actinyls. PMID:22471316
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2014-01-01
Nonlinear and bifurcation buckling equations for elastic, stiffened, geometrically perfect, right-circular cylindrical, anisotropic shells subjected to combined loads are presented that are based on Sanders' shell theory. Based on these equations, a three-parameter approximate Rayleigh-Ritz solution and a classical solution to the buckling problem are presented for cylinders with simply supported edges. Extensive comparisons of results obtained from these solutions with published results are also presented for a wide range of cylinder constructions. These comparisons include laminated-composite cylinders with a wide variety of shell-wall orthotropies and anisotropies. Numerous results are also given that show the discrepancies between the results obtained by using Donnell's equations and variants of Sanders' equations. For some cases, nondimensional parameters are identified and "master" curves are presented that facilitate the concise representation of results.
Predictive models based on sensitivity theory and their application to practical shielding problems
Bhuiyan, S.I.; Roussin, R.W.; Lucius, J.L.; Bartine, D.E.
1983-01-01
Two new calculational models based on the use of cross-section sensitivity coefficients have been devised for calculating radiation transport in relatively simple shields. The two models, one an exponential model and the other a power model, have been applied, together with the traditional linear model, to 1- and 2-m-thick concrete-slab problems in which the water content, reinforcing-steel content, or composition of the concrete was varied. Comparing the results obtained with the three models with those obtained from exact one-dimensional discrete-ordinates transport calculations indicates that the exponential model, named the BEST model (for basic exponential shielding trend), is a particularly promising predictive tool for shielding problems dominated by exponential attenuation. When applied to a deep-penetration sodium problem, the BEST model also yields better results than do calculations based on second-order sensitivity theory.
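The three functional forms can be sketched as follows; these are plausible first-order forms built from sensitivity coefficients, consistent with the abstract's description but not necessarily the paper's exact definitions:

```python
import math

def linear_model(r0, sens, deltas):
    """First-order (linear) prediction: dR/R = sum_i S_i * (dp_i / p_i)."""
    return r0 * (1 + sum(s * d for s, d in zip(sens, deltas)))

def exponential_model(r0, sens, deltas):
    """BEST-style exponential trend: R = R0 * exp(sum_i S_i * dp_i / p_i)."""
    return r0 * math.exp(sum(s * d for s, d in zip(sens, deltas)))

def power_model(r0, sens, deltas):
    """Power-law model: R = R0 * prod_i (1 + dp_i / p_i) ** S_i."""
    out = r0
    for s, d in zip(sens, deltas):
        out *= (1 + d) ** s
    return out

# For small parameter perturbations all three agree to first order:
r0, sens, deltas = 2.0, [-1.5], [0.01]
print(linear_model(r0, sens, deltas),
      exponential_model(r0, sens, deltas),
      power_model(r0, sens, deltas))
```

The models diverge for large perturbations, which is where the abstract reports the exponential (BEST) form winning for attenuation-dominated shields.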
Optimization of a photovoltaic pumping system based on the optimal control theory
Betka, A.; Attali, A.
2010-07-15
This paper shows how optimal operation of a photovoltaic pumping system based on an induction motor driving a centrifugal pump can be realized. The optimization problem consists in maximizing the daily pumped water quantity by optimizing the motor efficiency at every operating point. The proposed structure simultaneously allows minimization of the machine losses, field-oriented control, and maximum power tracking of the photovoltaic array. This is achieved based on multi-input multi-output optimal regulator theory. The effectiveness of the proposed algorithm is demonstrated by simulation, and the results are compared with those of a system operating with a constant air-gap flux.
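Optimal regulator (LQR) theory of the kind invoked here reduces, in the scalar discrete-time case, to iterating the Riccati equation to a fixed point; the multi-input multi-output design in the paper generalizes this with matrices. A sketch with made-up plant numbers:

```python
def scalar_lqr_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR: for x[k+1] = a*x[k] + b*u[k] and cost
    sum(q*x^2 + r*u^2), iterate the Riccati recursion
    P <- q + a^2*P - (a*P*b)^2 / (r + b^2*P) to its fixed point, then
    return the optimal feedback gain K for u = -K*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

K = scalar_lqr_gain(a=0.95, b=0.1, q=1.0, r=0.01)
print(K, 0.95 - 0.1 * K)  # closed-loop pole a - b*K lies inside the unit circle
```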
Theory and practical application of blood-based renal replacement therapy.
Murray, J S; Hinchliffe, W T; Kanagasundaram, N S
2009-12-01
The term renal replacement therapy incorporates three modalities that control or correct biochemical and fluid disturbances of renal failure. Peritoneal dialysis and renal transplantation are two forms of renal replacement therapy that are outside the remit of this article. This review focuses upon the third group which are blood-based and involve direct treatment of a patient's blood in a closed, extracorporeal circuit. They provide renal replacement for end-stage renal failure and during periods of severe acute kidney injury, and also for non-renal indications such as the management of drug overdoses. Blood-based renal replacement therapies are often loosely referred to as 'haemodialysis', although this is only one of a range of treatments. This article outlines the theory and practical applications of these treatments. PMID:20081630
Adapting evidence-based interventions using a common theory, practices, and principles.
Rotheram-Borus, Mary Jane; Swendeman, Dallas; Becker, Kimberly D
2014-01-01
Hundreds of validated evidence-based intervention programs (EBIP) aim to improve families' well-being; however, most are not broadly adopted. As an alternative diffusion strategy, we created wellness centers to reach families' everyday lives with a prevention framework. At two wellness centers, one in a middle-class neighborhood and one in a low-income neighborhood, popular local activity leaders (instructors of martial arts, yoga, sports, music, dancing, Zumba), and motivated parents were trained to be Family Mentors. Trainings focused on a framework that taught synthesized, foundational prevention science theory, practice elements, and principles, applied to specific content areas (parenting, social skills, and obesity). Family Mentors were then allowed to adapt scripts and activities based on their cultural experiences but were closely monitored and supervised over time. The framework was implemented in a range of activities (summer camps, coaching) aimed at improving social, emotional, and behavioral outcomes. Successes and challenges are discussed for (a) engaging parents and communities; (b) identifying and training Family Mentors to promote children and families' well-being; and (c) gathering data for supervision, outcome evaluation, and continuous quality improvement. To broadly diffuse prevention to families, far more experimentation is needed with alternative and engaging implementation strategies that are enhanced with knowledge harvested from researchers' past 30 years of experience creating EBIP. One strategy is to train local parents and popular activity leaders in applying robust prevention science theory, common practice elements, and principles of EBIP. More systematic evaluation of such innovations is needed. PMID:24079747
Massive Yang-Mills theory based on the nonlinearly realized gauge group
Bettinelli, D.; Ferrari, R.; Quadri, A.
2008-02-15
We propose a subtraction scheme for a massive Yang-Mills theory realized via a nonlinear representation of the gauge group [here SU(2)]. It is based on the subtraction of the poles in D − 4 of the amplitudes, in dimensional regularization, after a suitable normalization has been performed. Perturbation theory is in the number of loops, and the procedure is stable under iterative subtraction of the poles. The unphysical Goldstone bosons, the Faddeev-Popov ghosts, and the unphysical mode of the gauge field are expected to cancel out in the unitarity equation. The spontaneous symmetry breaking parameter is not a physical variable. We use the tools already tested in the nonlinear sigma model: hierarchy in the number of Goldstone boson legs and the weak-power-counting property (a finite number of independent divergent amplitudes at each order). It is intriguing that the model is naturally based on the symmetry SU(2)_L local × SU(2)_R global. By construction the physical amplitudes depend on the mass and self-coupling constant of the gauge particle and, moreover, on the scale parameter of the radiative corrections. The Feynman rules are in the Landau gauge.
Wilbraham, Liam; Savarese, Marika; Rega, Nadia; Adamo, Carlo; Ciofini, Ilaria
2015-02-12
The excited state intramolecular proton transfer (ESIPT) reaction taking place within 2-(2-hydroxyphenyl)benzoxazole (HBT) and two recently experimentally characterized naphthalimide derivatives, known as N-1 and N-4, has been investigated in order to identify and test a possible protocol for the description and complete mechanistic and electronic characterization of the reaction at the excited state. This protocol is based on density functional theory, time-dependent density functional theory, and a recently proposed electron-density-based index (DCT). The method is able to identify all stable species involved in the reaction, discriminate between possible reaction pathways over potential energy surfaces (PES) that are intrinsically very flat and difficult to characterize, and quantitatively measure the excited state charge transfer character throughout the reaction. The photophysical properties of the molecules (i.e., absorption and emission wavelengths) are also quantitatively determined via the implicit inclusion of solvent effects for toluene and the more polar tetrahydrofuran. The accuracy obtained with this protocol opens up the possibility of the ab initio design of molecules exhibiting ESIPT for tailored applications such as highly selective molecular sensors. PMID:25208048
Non-Markovian theories based on a decomposition of the spectral density.
Kleinekathöfer, Ulrich
2004-08-01
For the description of dynamical effects in quantum mechanical systems on ultrashort time scales, memory effects play an important role. Meier and Tannor [J. Chem. Phys. 111, 3365 (1999)] developed an approach which is based on a time-nonlocal scheme employing a numerical decomposition of the spectral density. Here we propose two different approaches which are based on a partial time-ordering prescription, i.e., a time-local formalism and also on a numerical decomposition of the spectral density. In special cases such as the Debye spectral density the present scheme can be employed even without the numerical decomposition of the spectral density. One of the proposed schemes is valid for time-independent Hamiltonians and can be given in a compact quantum master equation. In the case of time-dependent Hamiltonians one has to introduce auxiliary operators which have to be propagated in time along with the density matrix. For the example of a damped harmonic oscillator these non-Markovian theories are compared among each other, to the Markovian limit neglecting memory effects and time dependencies, and to exact path integral calculations. Good agreement between the exact calculations and the non-Markovian results is obtained. Some of the non-Markovian theories mentioned above treat the time dependence in the system Hamiltonians nonperturbatively. Therefore these methods can be used for the simulation of experiments with arbitrary large laser fields. PMID:15281847
NASA Astrophysics Data System (ADS)
Khvorostyanov, V. I.; Curry, J. A.
2012-10-01
A new analytical parameterization of homogeneous ice nucleation is developed based on extended classical nucleation theory including new equations for the critical radii of the ice germs, free energies and nucleation rates as simultaneous functions of temperature and water saturation ratio. By representing these quantities as separable products of the analytical functions of temperature and supersaturation, analytical solutions are found for the integral-differential supersaturation equation and concentration of nucleated crystals. Parcel model simulations are used to illustrate the general behavior of various nucleation properties under various conditions, for justifications of the further key analytical simplifications, and for verification of the resulting parameterization. The final parameterization is based upon the values of the supersaturation that determines the current or maximum concentrations of the nucleated ice crystals. The crystal concentration is analytically expressed as a function of time and can be used for parameterization of homogeneous ice nucleation both in the models with small time steps and for substep parameterization in the models with large time steps. The crystal concentration is expressed analytically via the error functions or elementary functions and depends only on the fundamental atmospheric parameters and parameters of classical nucleation theory. The diffusion and kinetic limits of the new parameterization agree with previous semi-empirical parameterizations.
NASA Astrophysics Data System (ADS)
Khvorostyanov, V. I.; Curry, J. A.
2012-03-01
A new analytical parameterization of homogeneous ice nucleation is developed based on extended classical nucleation theory, including new equations for the critical radii of the ice germs, free energies, and nucleation rates as simultaneous functions of temperature and water saturation ratio. By representing these quantities as separable products of analytical functions of temperature and supersaturation, analytical solutions are found for the integral-differential supersaturation equation and the concentration of nucleated crystals. Parcel model simulations are used to illustrate the general behavior of various nucleation properties under various conditions, to justify the subsequent key analytical simplifications, and to verify the resulting parameterization. The final parameterization is based upon the values of the supersaturation that determine the current or maximum concentrations of the nucleated ice crystals. The crystal concentration is analytically expressed as a function of time and can be used to parameterize homogeneous ice nucleation both in models with small time steps and for substep parameterization in models with large time steps. The crystal concentration is expressed analytically via error functions or elementary functions and depends only on the fundamental atmospheric parameters and the parameters of classical nucleation theory. The diffusion and kinetic limits of the new parameterization agree with previous semi-empirical parameterizations.
Chen, Yu; Song, Guobao; Yang, Fenglin; Zhang, Shushen; Zhang, Yun; Liu, Zhenyu
2012-01-01
According to risk systems theory and the characteristics of the chemical industry, an index system was established for risk assessment of enterprises in chemical industrial parks (CIPs) based on the inherent risk of the source, effectiveness of the prevention and control mechanism, and vulnerability of the receptor. A comprehensive risk assessment method based on catastrophe theory was then proposed and used to analyze the risk levels of ten major chemical enterprises in the Songmu Island CIP, China. According to the principle of equal distribution function, the chemical enterprise risk level was divided into the following five levels: 1.0 (very safe), 0.8 (safe), 0.6 (generally recognized as safe, GRAS), 0.4 (unsafe), 0.2 (very unsafe). The results revealed five enterprises (50%) with an unsafe risk level, and another five enterprises (50%) at the generally recognized as safe risk level. This method solves the multi-objective evaluation and decision-making problem. Additionally, this method involves simple calculations and provides an effective technique for risk assessment and hierarchical risk management of enterprises in CIPs. PMID:23208298
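The five-level scale above lends itself to a simple mapping from a continuous catastrophe-theory score onto its nearest level. The nearest-level rounding rule below is an illustrative assumption, not the authors' exact procedure:

```python
def risk_level(score):
    """Map a continuous catastrophe-theory score in (0, 1] to the nearest of
    the five equally spaced levels used in the assessment.
    Nearest-level rounding is an illustrative assumption, not the authors' rule."""
    levels = [0.2, 0.4, 0.6, 0.8, 1.0]
    return min(levels, key=lambda lv: abs(lv - score))

# Level labels as given in the abstract
labels = {1.0: "very safe", 0.8: "safe", 0.6: "GRAS", 0.4: "unsafe", 0.2: "very unsafe"}
```

With this rule, an enterprise scoring 0.55 would fall in the GRAS band and one scoring 0.35 in the unsafe band.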
Dissemination of a theory-based online bone health program: Two intervention approaches.
Nahm, Eun-Shim; Resnick, Barbara; Bellantoni, Michele; Zhu, Shijun; Brown, Clayton; Brennan, Patricia F; Charters, Kathleen; Brown, Jeanine; Rietschel, Matthew; Pinna, Joanne; An, Minjeong; Park, Bu Kyung; Plummer, Lisa
2015-06-01
With the increasing nationwide emphasis on eHealth, there has been a rapid growth in the use of the Internet to deliver health promotion interventions. Although there has been a great deal of research in this field, little information is available regarding the methodologies to develop and implement effective online interventions. This article describes two social cognitive theory-based online health behavior interventions used in a large-scale dissemination study (N = 866), their implementation processes, and the lessons learned during implementation. The two interventions were a short-term (8-week) intensive online Bone Power program and a longer-term (12-month) Bone Power Plus program, comprising the Bone Power program followed by a 10-month online booster intervention (biweekly eHealth newsletters). The study used a small-group approach (32 intervention groups), and to manage those groups effectively, an eLearning management program was used as an upper layer of the Web intervention. Both interventions were implemented successfully, with high retention rates (80.7% at 18 months). The theory-based approaches and the online infrastructure used in this study showed promising potential as an effective platform for online behavior studies. Further replication studies with different samples and settings are needed to validate the utility of this intervention structure. PMID:26021668
Coding theory based models for protein translation initiation in prokaryotic organisms.
May, Elebeoba Eni; Bitzer, Donald L. (North Carolina State University, Raleigh, NC); Rosnick, David I. (North Carolina State University, Raleigh, NC); Vouk, Mladen A.
2003-03-01
Our research explores the feasibility of using communication theory, specifically error control (EC) coding theory, for quantitatively modeling the protein translation initiation mechanism. The messenger RNA (mRNA) of Escherichia coli K-12 is modeled as a noisy (errored), encoded signal and the ribosome as a minimum Hamming distance decoder, where the 16S ribosomal RNA (rRNA) serves as a template for generating a set of valid codewords (the codebook). We tested the E. coli-based coding models on 5' untranslated leader sequences of prokaryotic organisms of varying taxonomic relation to E. coli, including Salmonella typhimurium LT2, Bacillus subtilis, and Staphylococcus aureus Mu50. The model identified regions on the 5' untranslated leader where the minimum Hamming distance values of translated mRNA sub-sequences and non-translated genomic sequences differ the most. These regions correspond to the Shine-Dalgarno domain and the non-random domain. Applying the EC coding-based models to B. subtilis and S. aureus Mu50 yielded results similar to those for E. coli K-12. Contrary to our expectations, the behavior of S. typhimurium LT2, the organism most taxonomically related to E. coli, resembled that of the non-translated sequence group.
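The decoding idea described above, scoring mRNA windows by their minimum Hamming distance to a codebook of valid codewords, can be sketched in a few lines. The toy codebook and sequence here are hypothetical stand-ins, not the actual 16S rRNA-derived codebook used in the paper:

```python
def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance_to_codebook(window, codebook):
    """Minimum Hamming distance from a window to any valid codeword."""
    return min(hamming(window, cw) for cw in codebook)

def scan(mrna, codebook, k):
    """Slide a length-k window along an mRNA-like string, scoring each
    position by its minimum Hamming distance to the codebook; low scores
    flag codeword-like regions (e.g., a Shine-Dalgarno-like site)."""
    return [min_distance_to_codebook(mrna[i:i + k], codebook)
            for i in range(len(mrna) - k + 1)]

# Hypothetical toy codebook and sequence, not the real 16S rRNA-derived codebook
codebook = ["AGGAGG", "GGAGGA"]
profile = scan("UUAGGAGGUU", codebook, 6)  # dips to 0 at the embedded codeword
```

In this toy run the distance profile dips to zero exactly where a codeword occurs, which mirrors how the paper distinguishes translated from non-translated regions.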
Credibility theory based dynamic control bound optimization for reservoir flood limited water level
NASA Astrophysics Data System (ADS)
Jiang, Zhiqiang; Sun, Ping; Ji, Changming; Zhou, Jianzhong
2015-10-01
The dynamic control operation of reservoir flood limited water level (FLWL) can resolve the contradiction between reservoir flood control and beneficial operation, and it is an important measure for ensuring flood-control security while utilizing flood water. The dynamic control bound of FLWL is a fundamental element in implementing reservoir dynamic control operation. In order to optimize the dynamic control bound of FLWL while considering flood forecasting error, this paper took the forecasting error as a fuzzy variable and described it with credibility theory, which has emerged in recent years. By combining this with a quantitative model of flood forecasting error, a credibility-based fuzzy chance-constrained model for optimizing the dynamic control bound was proposed, and fuzzy simulation technology was used to solve the model. The FENGTAN reservoir in China was selected as a case study, and the results show that, compared with the original operation water level, the initial operation water level (IOWL) of FENGTAN reservoir can be raised by 4 m, 2 m, and 5.5 m, respectively, in the three division stages of the flood season without increasing flood control risk. In addition, the rationality and feasibility of the proposed forecasting error quantitative model and the credibility-based dynamic control bound optimization model are verified by calculations based on extreme risk theory.
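For readers unfamiliar with credibility theory, the credibility measure underlying fuzzy chance constraints can be illustrated for a triangular fuzzy variable. This sketch shows only the standard definition Cr = (Pos + Nec)/2; the paper's forecasting-error model and chance-constrained optimization are not reproduced here:

```python
def tri_membership(x, a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c), a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def credibility_le(x, a, b, c):
    """Credibility Cr{xi <= x} of a triangular fuzzy variable (a, b, c),
    via the standard average of possibility and necessity:
    Cr = (sup_{y<=x} mu(y) + 1 - sup_{y>x} mu(y)) / 2."""
    pos = 1.0 if x >= b else tri_membership(x, a, b, c)       # sup over y <= x
    sup_right = 1.0 if x < b else tri_membership(x, a, b, c)  # sup over y > x
    return 0.5 * (pos + 1.0 - sup_right)
```

At the modal value b the credibility is exactly 0.5, which is the self-dual property that distinguishes credibility from possibility in chance-constrained models.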
Theory of chemical kinetics and charge transfer based on nonequilibrium thermodynamics.
Bazant, Martin Z
2013-05-21
Advances in the fields of catalysis and electrochemical energy conversion often involve nanoparticles, which can have kinetics surprisingly different from the bulk material. Classical theories of chemical kinetics assume independent reactions in dilute solutions, whose rates are determined by mean concentrations. In condensed matter, strong interactions alter chemical activities and create variations that can dramatically affect the reaction rate. The extreme case is that of a reaction coupled to a phase transformation, whose kinetics must depend not only on the order parameter but also on its gradients at phase boundaries. Reaction-driven phase transformations are common in electrochemistry, when charge transfer is accompanied by ion intercalation or deposition in a solid phase. Examples abound in Li-ion, metal-air, and lead-acid batteries, as well as metal electrodeposition-dissolution. Despite complex thermodynamics, however, the standard kinetic model is the Butler-Volmer equation, based on a dilute solution approximation. The Marcus theory of charge transfer likewise considers isolated reactants and neglects elastic stress, configurational entropy, and other nonidealities in condensed phases. The limitations of existing theories recently became apparent for the Li-ion battery material LixFePO4 (LFP). It has a strong tendency to separate into Li-rich and Li-poor solid phases, which scientists believe limits its performance. Chemists first modeled phase separation in LFP as an isotropic "shrinking core" within each particle, but experiments later revealed striped phase boundaries on the active crystal facet. This raised the question: What is the reaction rate at a surface undergoing a phase transformation? Meanwhile, dramatic rate enhancement was attained with LFP nanoparticles, and classical battery models could not predict the roles of phase separation and surface modification. 
In this Account, I present a general theory of chemical kinetics, developed over the past 7 years, which is capable of answering these questions. The reaction rate is a nonlinear function of the thermodynamic driving force, the free energy of reaction, expressed in terms of variational chemical potentials. The theory unifies and extends the Cahn-Hilliard and Allen-Cahn equations through a master equation for nonequilibrium chemical thermodynamics. For electrochemistry, I have also generalized both Marcus and Butler-Volmer kinetics for concentrated solutions and ionic solids. This new theory provides a quantitative description of LFP phase behavior. Concentration gradients and elastic coherency strain enhance the intercalation rate. At low currents, the charge-transfer rate is focused on exposed phase boundaries, which propagate as "intercalation waves", nucleated by surface wetting. Unexpectedly, homogeneous reactions are favored above a critical current and below a critical size, which helps to explain the rate capability of LFP nanoparticles. Contrary to other mechanisms, elevated temperatures and currents may enhance battery performance and lifetime by suppressing phase separation. The theory has also been extended to porous electrodes and could be used for battery engineering with multiphase active materials. More broadly, the theory describes nonequilibrium chemical systems at mesoscopic length and time scales, beyond the reach of molecular simulations and bulk continuum models. The reaction rate is consistently defined for inhomogeneous, nonequilibrium states, for example, with phase separation, large electric fields, or mechanical stresses. This research is also potentially applicable to fluid extraction from nanoporous solids, pattern formation in electrophoretic deposition, and electrochemical dynamics in biological cells. PMID:23520980
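As context for the generalization discussed in this Account, the classical dilute-solution Butler-Volmer equation that it extends can be written down directly. This is the textbook baseline model, not Bazant's concentrated-solution rate:

```python
import math

F = 96485.33212   # Faraday constant, C/mol
R = 8.314462618   # molar gas constant, J/(mol K)

def butler_volmer(eta, i0, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Classical dilute-solution Butler-Volmer current density at
    overpotential eta (V), with exchange current density i0 and
    anodic/cathodic transfer coefficients alpha_a, alpha_c:
    i = i0 * (exp(alpha_a * F * eta / RT) - exp(-alpha_c * F * eta / RT))."""
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
```

The generalized theory replaces the concentrations implicit in i0 with variational chemical potentials, so that gradients, coherency strain, and phase boundaries enter the driving force.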
Admission Control in Stochastic Event Graphs
Altman, Eitan; Gaujal, Bruno; Hordijk, Arie
In the second part of the paper, we use this result and the optimization theory based on multimodular costs for optimal control. … The work on admission control in queuing systems can be split into two… a queuing network with one input node. This network…
Debanne, T; Laffaye, G
2015-08-01
This study was based on the naturalistic decision-making paradigm and regulatory focus theory. Its aim was to model coaches' decision-making processes for handball teams' defensive systems based on relevant cues of the reward structure, and to determine the weight of each of these cues. We collected raw data by video-recording 41 games that were selected using a simple random method. We considered the defensive strategy (DEF: aligned or staged) to be the dependent variable, and the three independent variables were (a) numerical difference between the teams; (b) score difference between the teams; and (c) game periods. We used a logistic regression design (logit model) and a multivariate logistic model to explain the link between DEF and the three category independent variables. Each factor was weighted differently during the decision-making process to select the defensive system, and combining these variables increased the impact on this process; for instance, a staged defense is 43 times more likely to be chosen during the final period in an unfavorable situation and in a man advantage. Finally, this shows that the coach's decision-making process could be based on a simple match or could require a diagnosis of the situation based on the relevant cues. PMID:25262855
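The logit model used in the study links the weighted cues to the probability of choosing a staged defense; a reported odds ratio such as 43 corresponds to exp(b) for the relevant coefficient combination. A minimal sketch with placeholder coefficients (not the paper's estimates):

```python
import math

def staged_defense_prob(intercept, coefs, cues):
    """Probability of a staged defense under a fitted logit model:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Intercept and coefficients here are placeholders, not the paper's estimates."""
    z = intercept + sum(b * x for b, x in zip(coefs, cues))
    return 1.0 / (1.0 + math.exp(-z))

# On the logit scale, an odds ratio of 43 for some cue combination
# corresponds to a linear predictor shift of log(43).
```

For example, shifting the linear predictor by log(43) moves the probability from 0.5 to 43/44, i.e., roughly 0.977.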
Securing mobile ad hoc networks using danger theory-based artificial immune algorithm.
Abdelhaq, Maha; Alsaqour, Raed; Abdelhaq, Shawkat
2015-01-01
A mobile ad hoc network (MANET) is a set of mobile, decentralized, and self-organizing nodes that are used in special cases, such as in the military. MANET properties render the environment of this network vulnerable to different types of attacks, including black hole, wormhole and flooding-based attacks. Flooding-based attacks are one of the most dangerous attacks that aim to consume all network resources and thus paralyze the functionality of the whole network. Therefore, the objective of this paper is to investigate the capability of a danger theory-based artificial immune algorithm called the mobile dendritic cell algorithm (MDCA) to detect flooding-based attacks in MANETs. The MDCA applies the dendritic cell algorithm (DCA) to secure the MANET with additional improvements. The MDCA is tested and validated using Qualnet v7.1 simulation tool. This work also introduces a new simulation module for a flooding attack called the resource consumption attack (RCA) using Qualnet v7.1. The results highlight the high efficiency of the MDCA in detecting RCAs in MANETs. PMID:25946001
Shirazi, Mandana; Emami, Amir Hosein; Mirmoosavi, Seyed Jamal; Alavinia, Seyed Mohammad; Zamanian, Hadi; Fathollahbeigi, Faezeh; Masiello, Italo
2014-01-01
Background: Effective leadership is of prime importance in any organization, and it undergoes changes based on accepted health promotion and behavior change theory. Although there are many leadership styles, transformational leadership, which emphasizes supportive leadership behaviors, seems to be an appropriate style in many settings, particularly in the health care and educational sectors, which are pressured by high turnover and safety demands. Iran has been moving forward rapidly, and its authorities have recognized the importance of matching leadership styles with effective and competent care for success in health care organizations. This study aimed to develop the Supportive Leadership Behaviors Scale based on accepted health and educational theories and to test it psychometrically in the Iranian context. Methods: The instrument was based on items from established questionnaires. A pilot study validated the instrument, which was also cross-validated via re-translation. After validation, 731 participants answered the questionnaire. Results: The instrument was finalized as a 20-item questionnaire. Exploratory factor analysis yielded four factors, support for development, integrity, sincerity, and recognition, that explain supportive leadership behaviors (all loadings above 0.6). Mapping these four measures of leadership behaviors can help determine whether effective leadership could support innovation and improvements in medical education and health care organizations at the national level. The reliability, measured as Cronbach’s alpha, was 0.84. Conclusion: The new instrument yielded four factors, support for development, integrity, sincerity, and recognition, which are applicable in health and educational settings and are helpful in improving self-efficacy among health and academic staff. PMID:25679004
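The reported reliability of 0.84 is Cronbach's alpha, which can be computed directly from item-level scores. A minimal dependency-free sketch (the data passed in would be the questionnaire's item columns; nothing here reproduces the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-level scores: `items` is a list of columns,
    one list of respondent scores per item (respondents aligned by index).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def var(xs):  # sample variance, ddof = 1
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly correlated items give alpha = 1; weakly correlated items pull it toward zero, which is why 0.84 is read as good internal consistency.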
A Theory-Based Approach to Teaching Young Children about Health: A Recipe for Understanding
ERIC Educational Resources Information Center
Nguyen, Simone P.; McCullough, Mary Beth; Noble, Ashley
2011-01-01
The theory-theory account of conceptual development posits that children's concepts are integrated into theories. Concept-learning studies have documented the central role that theories play in children's learning of experimenter-defined categories but have yet to extensively examine complex, real-world concepts, such as health. The present study…
NASA Astrophysics Data System (ADS)
Smith, L. A.
2012-12-01
Just as there are many different types of uncertainty, there are many different types of models. The best technique for quantifying and communicating uncertainty will depend on the nature of that uncertainty: is it mere imprecision in a well-defined number (as with the square root of two), intractability (as when we know how to compute the answer but have not yet been able to carry out the calculation), indeterminacy (as when there is no well-defined target about which to be imprecise), or something else? The relevance of UQ to a decision maker or scientist will also depend on the type of quantitative model that is considered: is the model intended to explain, to forecast, or to provide a quantitative analysis of the past? When a perfect model is available, many of these distinctions collapse. In practice, attempting to quantify one type of uncertainty via a model which may not even display that kind of uncertainty is a nonsense. One must be careful not to confuse the diversity of our models for the uncertainty in our future, or a well-defined probability forecast for what the next model simulation will report with a probability forecast for the world. How is UQ to recognize the line between sensitivity analysis and probability forecasting? These questions will be addressed in the context of climate science, and more broadly that of science in support of decision making. The ways and means of UQ are shown to vary with the type of model considered, the extent to which that model class is deemed adequate for purpose in a specific application, and whether or not the relevant dominant uncertainty (known from the science, but perhaps absent from the models) has been considered. Uncertainty Quantification may prove to be a very wide field, extending well beyond the bounds of the probability calculus.
Skelton, JA; Buehler, C; Irby, MB; Grzywacz, JG
2014-01-01
Family-based approaches to pediatric obesity treatment are considered the ‘gold-standard,’ and are recommended for facilitating behavior change to improve child weight status and health. If family-based approaches are to be truly rooted in the family, clinicians and researchers must consider family process and function in designing effective interventions. To bring a better understanding of family complexities to family-based treatment, two relevant reviews were conducted and are presented: (1) a review of prominent and established theories of the family that may provide a more comprehensive and in-depth approach for addressing pediatric obesity; and (2) a systematic review of the literature to identify the use of prominent family theories in pediatric obesity research, which found little use of theories in intervention studies. Overlapping concepts across theories include: families are a system, with interdependence of units; the idea that families are goal-directed and seek balance; and the physical and social environment imposes demands on families. Family-focused theories provide valuable insight into the complexities of families. Increased use of these theories in both research and practice may identify key leverage points in family process and function to prevent the development of or more effectively treat obesity. The field of family studies provides an innovative approach to the difficult problem of pediatric obesity, building on the long-established approach of family-based treatment. PMID:22531090
ERIC Educational Resources Information Center
Nissen, Poul
2011-01-01
In this paper, a model for assessment and intervention is presented. This model explains how to perform theory- and evidence- based as well as practice-based assessment and intervention. The assessment model applies a holistic approach to treatment planning, which includes recognition of the influence of community, school, peers, family and the…
A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes
Just, Marcel Adam; Cherkassky, Vladimir L.; Aryal, Sandesh; Mitchell, Tom M.
2010-01-01
This article describes the discovery of a set of biologically-driven semantic dimensions underlying the neural representation of concrete nouns, and then demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns. We use factor analysis of fMRI brain imaging data to reveal the biological representation of individual concrete nouns like apple, in the absence of any pictorial stimuli. From this analysis emerge three main semantic factors underpinning the neural representation of nouns naming physical objects, which we label manipulation, shelter, and eating. Each factor is neurally represented in 3–4 different brain locations that correspond to a cortical network that co-activates in non-linguistic tasks, such as tool use pantomime for the manipulation factor. Several converging methods, such as the use of behavioral ratings of word meaning and text corpus characteristics, provide independent evidence of the centrality of these factors to the representations. The factors are then used with machine learning classifier techniques to show that the fMRI-measured brain representation of an individual concrete noun like apple can be identified with good accuracy from among 60 candidate words, using only the fMRI activity in the 16 locations associated with these factors. To further demonstrate the generativity of the proposed account, a theory-based model is developed to predict the brain activation patterns for words to which the algorithm has not been previously exposed. The methods, findings, and theory constitute a new approach of using brain activity for understanding how object concepts are represented in the mind. PMID:20084104
Surface collision theory for suspension-based cleaning of particle-contaminated solid substrates
NASA Astrophysics Data System (ADS)
Andreev, V. A.; Prausnitz, J. M.; Radke, C. J.
2011-03-01
To quantify removal kinetics of contaminant particles on solid surfaces, we study collisions between nonspherical particles when one particle is suspended in laminar shear flow while the second is adhered to a solid surface. Based on kinetic theory of rigid nonspherical particles, we outline a theoretical framework for our previously developed binary-collision contaminant-removal model. We show that a distribution of adhered contaminant particles over orientation, size, and shape results in multiexponential decay of surface concentration of particles with time, in agreement with experimental findings [Andreev et al., J. Electrochem. Soc. 158, H55 (2011)]. Theory predicts a linear increase of removal rate constant with shear rate and with suspended solids concentration near the substrate surface, also in agreement with experiment [Andreev et al., J. Electrochem. Soc. 158, H55 (2011); Ind. Eng. Chem. Res. 49, 12461 (2010)]. To reveal the effect of geometry and size of colliding entrained particles on removal rates, an approximate singlet distribution function is derived for particles in flow at the level of the Smoluchowski theory for orthocoagulation. Two shapes of flow-suspended particles are considered: spheres and cuboids with high aspect ratio, while contaminant particles on the surface are small and spherical. Removal kinetic rate constants scale with contaminant particle size, a_A, as a_A^(3/2) for spheres and as a_A for cuboids. Thus, rectangular platelet particles are effective for removal of small contaminant particles, confirming experimental observation [Andreev et al., J. Electrochem. Soc. 158, H55 (2011)]. The influence of platelet aspect ratio on removal rates is analyzed. Due to interplay between solids velocity and collision cross section, small aspect ratios improve cleaning efficiency when the size ratio of the entrained to contaminant particles is large.
Krasheninnikov, Arkady V.
Kinetic theory of semiconductor cascade laser based on quantum wells and wires
V. F. Elesin and A… We consider cascade lasers based on quantum wells and wires. For the case of quantum wells, we propose an analytical… The quantum cascade laser proposed in the original publications by Kazarinov and Suris [1] has been implemented…
Schmeling, Harro
Effective shear and bulk viscosity of partially molten rock based on elastic moduli theory
… The bulk and shear viscosity of the matrix of a partially molten rock are important properties… A viscosity model is presented based on a self-consistent poroelastic formulation for partially molten rock.
ERIC Educational Resources Information Center
Creemers, B. P. M.; Kyriakides, Leonidas
2010-01-01
This paper refers to a dynamic perspective of educational effectiveness and improvement stressing the importance of using an evidence-based and theory-driven approach. Specifically, an approach to school improvement based on the dynamic model of educational effectiveness is offered. The recommended approach to school improvement gives emphasis to…
ERIC Educational Resources Information Center
Vanfretti, Luigi; Farrokhabadi, Mostafa
2015-01-01
This article presents the implementation of the constructive alignment theory (CAT) in a power system analysis course through a consensus-based course design process. The consensus-based design process involves both the instructor and graduate-level students and it aims to develop the CAT framework in a holistic manner with the goal of including…
Alavi, Ali
Catalytic Role of Gold in Gold-Based Catalysts: A Density Functional Theory Study on the CO Oxidation on Gold
Zhi-Pan Liu and P. Hu. Contribution from the School of Chemistry, The Queen's University, Belfast, United Kingdom. Received April 24, 2002. Abstract: Gold-based catalysts have been of intense interest in recent…
Evaluation of a social cognitive theory-based yoga intervention to reduce anxiety.
Mehta, Purvi; Sharma, Manoj
Yoga is often viewed as a form of alternative and complementary medicine, as it strives to achieve an equilibrium between body and mind that aids healing. Studies have shown the beneficial role of yoga in anxiety reduction. The purpose of this study was to design and evaluate a 10-week social cognitive theory-based yoga intervention to reduce anxiety. The yoga intervention utilized the constructs of behavioral capability, expectations, and self-efficacy for yoga from social cognitive theory, and included asanas (postures), pranayama (breathing techniques), shava asana (relaxation), and dhyana (meditation). A one-between and one-within group quasi-experimental design was utilized for evaluation. Scales measuring expectations from yoga and self-efficacy for yoga, together with Spielberger's State-Trait Anxiety Inventory, were administered before and after the intervention. Repeated measures analyses of variance (ANOVA) were performed to compare pre-test and post-test scores in the two groups. Yoga as an approach shows promising results for anxiety reduction. PMID:23353562
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven S.
1996-01-01
This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.
Cartographic generalization of urban street networks based on gravitational field theory
NASA Astrophysics Data System (ADS)
Liu, Gang; Li, Yongshu; Li, Zheng; Guo, Jiawei
2014-05-01
The automatic generalization of urban street networks is a constant and important aspect of geographical information science. Previous studies show that the dual graph for street-street relationships more accurately reflects the overall morphological properties and importance of streets than do other methods. In this study, we construct a dual graph to represent street-street relationships and propose an approach to generalize street networks based on gravitational field theory. We retain the global structural properties and topological connectivity of an original street network and borrow from gravitational field theory to define the gravitational force between nodes. The concept of multi-order neighbors is introduced and the gravitational force is taken as the measure of the importance contribution between nodes. The importance of a node is defined as the result of the interaction between a given node and its multi-order neighbors. Degree distribution is used to evaluate the level of maintaining the global structure and topological characteristics of a street network and to illustrate the efficiency of the suggested method. Experimental results indicate that the proposed approach can be used in generalizing street networks and retaining their density characteristics, connectivity and global structure.
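The node-importance computation described above can be sketched as follows. The paper's exact definitions of node mass and distance are not given here, so this sketch assumes mass = dual-graph degree and distance = hop count, purely for illustration:

```python
from collections import deque

def node_importance(adj, node, max_order=2, G=1.0):
    """Importance of `node` in the dual graph: summed gravitational force
    between the node and each of its multi-order neighbours.
    Assumed (not from the paper): mass = degree, distance = hop count."""
    mass = {n: len(adj[n]) for n in adj}
    dist = {node: 0}
    queue = deque([node])
    while queue:                              # BFS out to `max_order` hops
        u = queue.popleft()
        if dist[u] == max_order:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # Newton-style force G * m1 * m2 / d^2, summed over multi-order neighbours
    return sum(G * mass[node] * mass[v] / d ** 2
               for v, d in dist.items() if v != node)

# Toy dual graph: four streets, an edge wherever two streets intersect
adj = {'A': ['B', 'C'], 'B': ['A', 'C', 'D'], 'C': ['A', 'B'], 'D': ['B']}
ranked = sorted(adj, key=lambda n: node_importance(adj, n), reverse=True)
```

A generalization step would then drop streets in increasing order of importance until the target density is reached, which tends to preserve the highly connected backbone of the network.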
Middlestadt, S E; Bhattacharyya, K; Rosenbaum, J; Fishbein, M; Shepherd, M
1996-01-01
Through one of its many HIV prevention programs, the Prevention Marketing Initiative, the Centers for Disease Control and Prevention promotes a multifaceted strategy for preventing the sexual transmission of HIV/AIDS among people less than 25 years of age. The Prevention Marketing Initiative is an application of marketing and consumer-oriented technologies that rely heavily on behavioral research and behavior change theories to bring the behavioral and social sciences to bear on practical program planning decisions. One objective of the Prevention Marketing Initiative is to encourage consistent and correct condom use among sexually active young adults. Qualitative formative research is being conducted in several segments of the population of heterosexually active, unmarried young adults between 18 and 25 using a semistructured elicitation procedure to identify and understand underlying behavioral determinants of consistent condom use. The purpose of this paper is to illustrate the use of this type of qualitative research methodology in designing effective theory-based behavior change interventions. Issues of research design and data collection and analysis are discussed. To illustrate the methodology, results of content analyses of selected responses to open-ended questions on consistent condom use are presented by gender (male, female), ethnic group (white, African American), and consistency of condom use (always, sometimes). This type of formative research can be applied immediately to designing programs and is invaluable for valid and relevant larger-scale quantitative research. PMID:8862153
Informational Theory of Aging: The Life Extension Method Based on the Bone Marrow Transplantation
Karnaukhov, Alexey V.; Karnaukhova, Elena V.; Sergievich, Larisa A.; Karnaukhova, Natalia A.; Bogdanenko, Elena V.; Manokhina, Irina A.; Karnaukhov, Valery N.
2015-01-01
A method of lifespan extension that is a practical application of the informational theory of aging is proposed. In this theory, the degradation (error accumulation) of the genetic information in cells is considered the main cause of aging. Accordingly, our method is based on the transplantation of genetically identical (or similar) stem cells with a lower number of genomic errors into old recipients. For humans and large mammals, this method can be realized by cryopreservation of their own stem cells, taken at a young age, for later autologous transplantation in old age. To test this method experimentally, we chose a laboratory animal of relatively short lifespan (the mouse). Because it is difficult to isolate the required amount of stem cells (e.g., bone marrow) without significant damage to the animals, we used bone marrow transplantation from sacrificed inbred young donors. It is shown that the lifespan extension of recipients depends on the level of their genetic similarity (syngeneity) with donors. We achieved a lifespan increase of the experimental mice by 34% when transplantation of bone marrow with a high level of genetic similarity was used. PMID:26491435
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin
2015-06-05
The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
Grand unification and proton stability based on a chiral SU(8) theory
Deshpande, N.G.; Mannheim, P.D.
1980-06-01
A grand-unified model of the strong, electromagnetic, and weak interactions is presented based on a local SU(8)_L x SU(8)_R gauge theory that possesses a global U(8)_L x U(8)_R invariance. The model is spontaneously broken by the recently introduced neutrino pairing mechanism, in which a Higgs field that transforms like a pair of right-handed neutrinos acquires a vacuum expectation value. This neutrino pairing breaks the model down to the standard Weinberg-Salam phenomenology. Further, the neutrino pairing causes the two initial global currents of the model, fermion number and axial fermion number, to mix with the non-Abelian local currents to leave unbroken two new global currents, namely, baryon number and a particular lepton number that counts charged leptons and left-handed neutrinos only. The exact conservation of these two resulting currents ensures the absolute stability of the proton, the masslessness of the observed left-handed neutrinos, and the standard lepton number conservation of the usual weak interactions. A further feature of the model is the simultaneous absence of both strong CP violations and observable axions. The model has a testable prediction, namely, the existence of an absolutely stable, relatively light, massive neutral lepton generated entirely from the right-handed neutrino sector of the theory.
NASA Astrophysics Data System (ADS)
Amadei, A.; Apol, M. E. F.; Di Nola, A.; Berendsen, H. J. C.
1996-01-01
A new theory is presented for calculating the Helmholtz free energy based on the potential energy distribution function. The usual expressions of free energy, internal energy and entropy involving the partition function are rephrased in terms of the potential energy distribution function, which must be nearly Gaussian according to the central limit theorem. We obtained expressions for the free energy and entropy with respect to the ideal gas in terms of the potential energy moments, which can be linked to the average potential energy and its temperature derivatives. Using thermodynamic relationships we also derive a general differential equation for the free energy as a function of temperature at fixed volume. In this paper we investigate possible exact and approximate solutions. The method was tested on a theoretical model for a solid (a classical harmonic solid) and on some experimental liquids. The harmonic solid has an energy distribution that can be derived exactly from the theory. Experimental free energies of water and methanol could be reproduced very well over a temperature range of more than 300 K. For water, where the appropriate experimental data were available, the energy and heat capacity could also be reproduced very well.
An integrated finite element simulation of cardiomyocyte function based on triphasic theory
Hatano, Asuka; Okada, Jun-Ichi; Washio, Takumi; Hisada, Toshiaki; Sugiura, Seiryo
2015-01-01
In numerical simulations of cardiac excitation-contraction coupling, the intracellular potential distribution and mobility of cytosol and ions have been mostly ignored. Although the intracellular potential gradient is small, during depolarization it can be a significant driving force for ion movement, and is comparable to diffusion in terms of net flux. Furthermore, fluid in the t-tubules is thought to advect ions to facilitate their exchange with the extracellular space. We extend our previous finite element model that was based on triphasic theory to examine the significance of these factors in cardiac physiology. Triphasic theory allows us to study the behavior of solids (proteins), fluids (cytosol) and ions governed by mechanics and electrochemistry in detailed subcellular structures, including myofibrils, mitochondria, the sarcoplasmic reticulum, membranes, and t-tubules. Our simulation results predicted an electrical potential gradient inside the t-tubules at the onset of depolarization, which corresponded to the Na+ channel distribution therein. Ejection and suction of fluid between the t-tubules and the extracellular compartment during isometric contraction were observed. We also examined the influence of t-tubule morphology and mitochondrial location on the electrophysiology and mechanics of the cardiomyocyte. Our results confirm that the t-tubule structure is important for synchrony of Ca2+ release, and suggest that mitochondria in the sub-sarcolemmal region might serve to cancel Ca2+ inflow through surface sarcolemma, thereby maintaining the intracellular Ca2+ environment in equilibrium. PMID:26539124
General Formalism of Decision Making Based on Theory of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Asano, M.; Ohya, M.; Basieva, I.; Khrennikov, A.
2013-01-01
We present a general formalism of decision making based on the theory of open quantum systems. A person (decision maker), say Alice, is considered as a quantum-like system, i.e., a system whose information processing follows the laws of quantum information theory. To make a decision, Alice interacts with a huge mental bath. Depending on the context of decision making, this bath can include her social environment, mass media (TV, newspapers, the Internet), and memory. The dynamics of an ensemble of such Alices is described by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation. We speculate that in the course of evolution, biosystems (especially human beings) designed "mental Hamiltonians" and GKSL operators such that any solution of the corresponding GKSL equation stabilizes to a diagonal density operator (in the basis of decision making). This limiting density operator describes a population in which all superpositions of possible decisions have already been resolved. In principle, this approach can be used for predicting the distribution of possible decisions in human populations.
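For reference, the GKSL (Lindblad) equation mentioned above has the standard form below, written in the usual notation (Hamiltonian H, jump operators L_k, rates gamma_k) rather than any paper-specific "mental" operators:

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \Big( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\big\{ L_k^{\dagger} L_k,\ \rho \big\} \Big)
```

In the decision-making interpretation sketched in the abstract, the predicted distribution over decisions is read off the diagonal of the stationary density operator in the decision basis.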
A system model for ultrasonic NDT based on the Physical Theory of Diffraction (PTD).
Darmon, M; Dorval, V; Kamta Djakou, A; Fradkin, L; Chatillon, S
2016-01-01
Simulation of ultrasonic Non Destructive Testing (NDT) is helpful for evaluating the performance of inspection techniques and requires the modelling of waves scattered by defects. Two classical flaw-scattering models have traditionally been employed and evaluated for the inspection of planar defects: the Kirchhoff approximation (KA) for simulating reflection and the Geometrical Theory of Diffraction (GTD) for simulating diffraction. Combining them so as to retain the advantages of both, the Physical Theory of Diffraction (PTD), initially developed in electromagnetism, has recently been extended to elastodynamics. In this paper a PTD-based system model is proposed for simulating the ultrasonic response of crack-like defects. It is also extended to provide a good description of the regions surrounding critical rays where the shear diffracted waves and head waves interfere. Both numerical and experimental validation of the PTD model is carried out in various practical NDT configurations, such as pulse-echo and Time of Flight Diffraction (TOFD), involving both crack-tip and corner echoes. Numerical validation involves comparison of this model with KA and GTD as well as with the Finite-Element Method (FEM). PMID:26323548
A variable-order laminated plate theory based on the variational-asymptotical method
NASA Technical Reports Server (NTRS)
Lee, Bok W.; Sutyrin, Vladislav G.; Hodges, Dewey H.
1993-01-01
The variational-asymptotical method is a mathematical technique by which the three-dimensional analysis of laminated plate deformation can be split into a linear, one-dimensional, through-the-thickness analysis and a nonlinear, two-dimensional, plate analysis. The elastic constants used in the plate analysis are obtained from the through-the-thickness analysis, along with approximate, closed-form three-dimensional distributions of displacement, strain, and stress. In this paper, a theory based on this technique is developed which is capable of approximating three-dimensional elasticity to any accuracy desired. The asymptotical method allows for the approximation of the through-the-thickness behavior in terms of the eigenfunctions of a certain Sturm-Liouville problem associated with the thickness coordinate. These eigenfunctions contain all the necessary information about the nonhomogeneities along the thickness coordinate of the plate and thus possess the appropriate discontinuities in the derivatives of displacement. The theory is presented in this paper along with numerical results for the eigenfunctions of various laminated plates.
Effective meson masses in nuclear matter based on a cutoff field theory
Nakano, M.; Noda, N.; Mitsumori, T.; Koide, K.; Kouno, H.; Hasegawa, A.
1997-02-01
Effective masses of the σ, ω, π, and ρ mesons in nuclear matter are calculated based on a cutoff field theory. Instead of the traditional density-Feynman representation, we adopt the particle-hole-antiparticle representation for nuclear propagators so that unphysical components are not included in the meson self-energies. For an estimation of the contribution from the divergent particle-antiparticle excitations, i.e., vacuum polarization in nuclear matter, the idea of the renormalization group method is adopted. In this cutoff field theory, all the counterterms are finite and calculated numerically. It is shown that the predicted meson masses converge even if the cutoff Λ is changed, as long as Λ is sufficiently large, and that the prescription works well also for so-called nonrenormalized mesons such as π and ρ. According to this method, it is concluded that meson masses in nuclear matter have a weak dependence on the baryon density. © 1997 The American Physical Society
Biello, Joseph A; Samson, René
2015-03-01
The subject of this paper is competitive effects between multiple reaction sinks. A theory based on off-center monopoles is developed for the steady-state diffusion equation and for the convection-diffusion equation with a constant flow field. The dipolar approximation for the diffusion equation with two equal reaction centres is compared with the exact solution. The former turns out to be remarkably accurate, even for two touching spheres. Numerical evidence is presented to show that the same holds for larger clusters (with more than two spheres). The theory is extended to the convection-diffusion equation with a constant flow field. As one increases the convective velocity, the competitive effects between the reactive centres gradually become less significant. This is demonstrated for a number of cluster configurations. At high flow velocities, the current methodology breaks down. Fixing this problem will be the subject of future research. The current method is useful as an easy-to-use tool for the calibration of other more complicated models in mass and/or heat transfer. PMID:25747063
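The flavour of the monopole (first-order) competition correction can be illustrated for the simplest case of two equal absorbing spheres in a quiescent medium. This classical result is a sketch of the competitive-screening idea only, not the paper's full off-center multipole machinery or its convection extension:

```python
import math

def smoluchowski_rate(D, a):
    """Diffusion-limited rate constant of one isolated absorbing sphere
    of radius a in a medium with diffusivity D: k = 4*pi*D*a."""
    return 4.0 * math.pi * D * a

def two_sink_rate_per_sphere(D, a, d):
    """Monopole approximation for each of two equal spheres whose centres are
    a distance d apart: competition screens each sink by 1/(1 + a/d)."""
    return smoluchowski_rate(D, a) / (1.0 + a / d)

k_iso = smoluchowski_rate(D=1.0, a=1.0)
k_touching = two_sink_rate_per_sphere(D=1.0, a=1.0, d=2.0)  # touching spheres
k_far = two_sink_rate_per_sphere(D=1.0, a=1.0, d=1e6)       # well separated
```

Even for touching spheres (d = 2a) the monopole screening factor is only 2/3, which is consistent with the abstract's observation that low-order approximations remain remarkably accurate for close clusters.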
A resource management architecture based on complex network theory in cloud computing federation
NASA Astrophysics Data System (ADS)
Zhang, Zehua; Zhang, Xuejie
2011-10-01
Cloud Computing Federation is a main trend of Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation, which has the typical characteristics of a complex system. We therefore propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantages of RMABC in resource discovery performance.
A High Precision Feature Based on LBP and Gabor Theory for Face Recognition
Xia, Wei; Yin, Shouyi; Ouyang, Peng
2013-01-01
How to describe an image accurately with the most useful information and, at the same time, the least useless information is a basic problem in the recognition field. In this paper, a novel high-precision feature called BG2D2LRP is proposed, together with a corresponding face recognition system. The feature contains both static texture differences and dynamic contour trends. It is based on Gabor and LBP theory, and is built through various transformations such as blocking, second derivatives, direct orientation, layering, and finally fusion in a particular way. Seven well-known face databases, such as FRGC, AR, and FERET, are used to evaluate the accuracy and robustness of the proposed feature. A maximum improvement of 29.41% is achieved compared with other methods. In addition, the ROC curve provides a satisfactory figure. These experimental results strongly demonstrate the feasibility and superiority of the new feature and method. PMID:23552103
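As an illustration of the LBP half of such a feature, a plain 8-neighbour local binary pattern can be computed as below. The paper's BG2D2LRP adds Gabor filtering, blocking, second derivatives, and fusion on top of this; none of that is shown here:

```python
import numpy as np

def lbp_3x3(img):
    """Plain 8-neighbour local binary pattern: each interior pixel gets the
    byte formed by thresholding its 8 neighbours against the centre value."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    c = img[1:-1, 1:-1]                       # centre pixels
    # neighbour offsets, clockwise from top-left; each contributes one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (nb >= c).astype(int) << bit
    return code.astype(np.uint8)

flat = lbp_3x3(np.ones((4, 4)))               # uniform patch: all bits set
peak = lbp_3x3(np.array([[0, 0, 0],
                         [0, 5, 0],
                         [0, 0, 0]]))         # local maximum: no bits set
```

A histogram of these codes over image blocks is what typically serves as the texture descriptor that later stages (e.g., Gabor responses and fusion) would build on.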
Digital focusing of OCT images based on scalar diffraction theory and information entropy.
Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K
2012-11-01
This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method. PMID:23162717
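The procedure can be sketched as scalar-diffraction (angular spectrum) propagation plus an entropy search over candidate distances. The function names and parameters below are illustrative assumptions; the paper's exact diffraction kernel and entropy criterion may differ:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex 2-D field over distance dz with the angular
    spectrum method (evanescent components are discarded)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2.0 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def image_entropy(img, bins=64):
    """Shannon entropy of the intensity histogram; a sharply focused image
    concentrates intensity in few bins and so has lower entropy."""
    p, _ = np.histogram(np.abs(img) ** 2, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log(p)))

def autofocus(field, wavelength, dx, dz_candidates):
    """Return the candidate propagation distance whose refocused image
    minimises the entropy criterion."""
    scores = [image_entropy(angular_spectrum_propagate(field, dz, wavelength, dx))
              for dz in dz_candidates]
    return dz_candidates[int(np.argmin(scores))]

# Demo: defocus a point scatterer, then recover the refocusing distance
n = 64
point = np.zeros((n, n), dtype=complex)
point[n // 2, n // 2] = 1.0
defocused = angular_spectrum_propagate(point, 20.0, wavelength=0.5, dx=1.0)
best_dz = autofocus(defocused, wavelength=0.5, dx=1.0,
                    dz_candidates=[-20.0, 0.0, 20.0])
```

In practice the candidate list would be a fine grid of distances spanning the expected defocus range, and the selected dz applied to each en face plane of the OCT volume.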
[A new method for small displacement test and measurement based on the light reflection theory].
Chen, Ren-wen; Sun, Ya-fei; Chen, Yong
2004-01-01
A new idea for a small-displacement test and measurement system based on light reflection is presented in this paper. Theoretical research on the method and practical experiments were carried out, and the results proved that the theory is feasible and efficient. Compared with traditional small-displacement test and measurement systems, such as mechanical displacement magnifiers, resistance-strain methods, and piezoelectric-material strain systems, this method has the following advantages: it introduces little disturbance into the system under test; the displacement magnification coefficient is high and convenient for the user to adjust; the measurement precision is high and easy to realize; and the cost is low. It fits many test and measurement situations. PMID:15768967
A review of return-stroke models based on transmission line theory
NASA Astrophysics Data System (ADS)
De Conti, Alberto; Silveira, Fernando H.; Visacro, Silvério; Cardoso, Thiago C. M.
2015-12-01
This paper presents a review of lightning return-stroke models based on transmission line theory. The reviewed models are classified in three different categories, namely discharge-type, lumped-excitation, and parameter-estimation models. An attempt is made to address the difficulties that some models experience in reproducing directly or indirectly observable features of lightning, such as current characteristics and remote electromagnetic fields. It is argued that most of these difficulties are related to a poor discretization of the lightning channel, to inconsistencies in the calculation of per-unit-length channel parameters, to uncertainties in the representation of the upper end of the channel, and to assuming an ideal switch to connect the channel to ground in the transition from leader to return stroke. Applications of transmission line return-stroke models are also outlined.
NASA Astrophysics Data System (ADS)
You, W. J.; Zhang, Y. L.
2015-08-01
The Huaihe River is one of the seven largest rivers in China, and floods occur frequently in its basin. These disasters cause heavy casualties and property losses, and the basin is known for its high social vulnerability to floods. Based on the latest socio-economic data, an index system of social vulnerability to floods was constructed, and the catastrophe theory method was used in the assessment. The results show that social vulnerability, as a basic attribute of the urban environment, changes significantly from city to city across the Huaihe River basin, with different distribution characteristics for population, economy, and flood-prevention vulnerability. Further study of social vulnerability is therefore important and will play a positive role in disaster prevention and in improving the comprehensive ability to respond to disasters.
The Model of Lake Operation in Water Transfer Projects Based on the Theory of Water-right
NASA Astrophysics Data System (ADS)
Bi-peng, Yan; Chao, Liu; Fang-ping, Tang
Lake operation is a very important component of water transfer projects, yet previous studies have not considered water rights or water pricing. In this paper, water rights are divided into three parts: initial water rights, water rights acquired through investment, and water rights redistributed by the government. A water-right distribution model is built accordingly. After analyzing the costs of a water transfer project, a model and computation method for the capacity price as well as the quantity price is proposed, and a model of lake operation in water transfer projects based on the theory of water rights is built. Simulated regulation of the lake was carried out using historical data and genetic algorithms, and water supply and impoundment control lines for the lake were proposed. The results can be used by the South-to-North Water Transfer Projects.
Realization of low-scattering metamaterial shell based on cylindrical wave expanding theory.
Wu, Xiaoyu; Hu, Chenggang; Wang, Min; Pu, Mingbo; Luo, Xiangang
2015-04-20
In this paper, we demonstrate the design of a low-scattering metamaterial shell with strong backward scattering reduction and a wide bandwidth at microwave frequencies. Low echo is achieved through cylindrical wave expanding theory, and the shell contains only one metamaterial layer with simultaneously low permittivity and permeability. A cut-wire structure is selected to realize the low electromagnetic (EM) parameters and low loss near the resonance brim region. Full-model simulations show good agreement with theoretical calculations, and illustrate that nearly -20 dB reduction is achieved and the -10 dB bandwidth can reach up to 0.6 GHz. Compared with cloaks based on transformation electromagnetics, the design has the advantage of simpler requirements on the EM parameters and is much easier to implement when only the backward scattering field is of concern. PMID:25969080
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as linear reaction rate ratios and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for the multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function by the CP method is mathematically equivalent to the adjoint function by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented.
A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that the result can be used for the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we selected this parameter for optimization studies. We consider optimization and adjoint sensitivity techniques for the adjustment of the CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) dysprosium density in the central pin with uranium enrichment in the outer fuel rings, (2) dysprosium density and uranium enrichment both in the central pin, and (3) the same parameters as in the first case, but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficients at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC.
Sensitivity analyses of the CVR and the eigenvalue are included in the study. In addition, the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of ACR lattices with Gadolinium in the central pin. Finally, we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice whose burnup period is extended f
Detection and control of combustion instability based on the concept of dynamical system theory.
Gotoda, Hiroshi; Shinoda, Yuta; Kobayashi, Masaki; Okuno, Yuta; Tachibana, Shigeru
2014-02-01
We propose an online method of detecting combustion instability based on the concept of dynamical system theory, including the characterization of the dynamic behavior of combustion instability. As an important case study relevant to combustion instability encountered in fundamental and practical combustion systems, we deal with the combustion dynamics close to lean blowout (LBO) in a premixed gas-turbine model combustor. The relatively regular pressure fluctuations generated by thermoacoustic oscillations transition to low-dimensional intermittent chaos, owing to the intermittent appearance of bursts, as the equivalence ratio decreases. The translation error, which quantifies the degree of parallelism of trajectories in the phase space, can be used as a control variable to prevent LBO. PMID:25353548
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
Semianalytical estimation of the four-wave mixing noise based on extreme value theory.
Neokosmidis, Ioannis; Marakis, Stylianos; Varoutas, Dimitris
2013-10-01
Four-wave mixing (FWM) is one of the limiting factors for existing and future wavelength-division multiplexed (WDM) optical networks. A semianalytical method based on Monte Carlo simulation and extreme value theory is proposed and applied to study the influence of FWM noise on the performance of WDM systems. The statistical behavior of the FWM noise is investigated, and the bit-error rate (BER) is calculated for various combinations of the design parameters and for both single- and multiple-span WDM systems. The semianalytical method is also compared to the multicanonical Monte Carlo (MCMC) method, showing the same efficiency and accuracy while additionally providing closed-form approximations for the cumulative distribution functions of the photocurrents in the mark and space states and for the BER. PMID:24104223
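The extreme-value step behind such a semianalytical approach can be illustrated with a minimal block-maxima sketch. This is a generic extreme-value-theory preprocessing step, not the paper's method: the block maxima extracted here are the quantities whose distribution a GEV law would then approximate, and the sample list stands in for Monte Carlo photocurrent draws.

```python
def block_maxima(samples, block_size):
    """Split Monte Carlo samples into consecutive blocks and keep each
    block's maximum; these maxima are the inputs to an extreme-value fit."""
    return [max(samples[i:i + block_size])
            for i in range(0, len(samples) - block_size + 1, block_size)]

# Deterministic toy data standing in for simulated FWM noise samples.
maxima = block_maxima([1, 5, 2, 9, 3, 4], block_size=3)
print(maxima)  # [5, 9]
```

In a real analysis the samples would come from the Monte Carlo FWM simulation and the resulting maxima would be fitted to a generalized extreme value distribution.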
Study on corporate social responsibility evaluation system based on stakeholder theory
NASA Astrophysics Data System (ADS)
Ma, J.; Deng, Liming
2011-10-01
The issue of Corporate Social Responsibility (CSR) has been attracting attention from many disciplines, such as economics, management, law, sociology, and philosophy, since the last century. The purpose of this study is to explore the impact of CSR on performance and to develop a CSR evaluation system. Building on the definition of CSR and stakeholder theory, this article builds a path-relationship model of CSR and business operation performance. The paper also constructs a CSR evaluation system based on the KLD index, the GRJ report, CSR accounting accounts, SA8000, ISO14000, etc. The research provides a basis for future studies of the relationship between CSR and business performance and sheds some light on the evaluation of CSR practices.
Dissolved oxygen prediction using a possibility-theory based fuzzy neural network
NASA Astrophysics Data System (ADS)
Khan, U. T.; Valeo, C.
2015-11-01
A new fuzzy neural network method to predict minimum dissolved oxygen (DO) concentration in a highly urbanised riverine environment (in Calgary, Canada) is proposed. The method uses abiotic factors (non-living physical and chemical attributes) as model inputs, since the physical mechanisms governing DO in the river are largely unknown. A new two-step method to construct fuzzy numbers from observations is proposed. An existing fuzzy neural network is then modified to accept fuzzy-number inputs and to use possibility-theory based intervals to train the network. Results demonstrate that the method is particularly well suited to predicting low DO events in the Bow River. Model output and a defuzzification technique are used to estimate the risk of low DO so that water resource managers can implement strategies to prevent its occurrence.
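The paper's two-step fuzzy-number construction is not reproduced here, but a common simplified way to build a triangular fuzzy number from observations can be sketched as follows. The support/peak choices (observed min/max and median) are illustrative assumptions, not the authors' procedure:

```python
def triangular_fuzzy(observations):
    """Build a triangular fuzzy number (a, b, c) from data:
    support from the observed min/max, peak at the (odd-length) median."""
    xs = sorted(observations)
    return xs[0], xs[len(xs) // 2], xs[-1]

def membership(x, a, b, c):
    """Membership grade of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # rising edge
    return (c - x) / (c - b)      # falling edge

a, b, c = triangular_fuzzy([6.1, 7.3, 8.0, 8.4, 9.2])  # e.g. DO readings, mg/L
print((a, b, c))                 # (6.1, 8.0, 9.2)
print(membership(8.0, a, b, c))  # 1.0 (full membership at the peak)
```

Such fuzzy numbers are what a fuzzy neural network of this kind would take as inputs in place of crisp observations.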
Grey Situation Group Decision-Making Method Based on Prospect Theory
Zhang, Na; Fang, Zhigeng; Liu, Xiaqing
2014-01-01
This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision better matches the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. Lastly, the paper verifies the effectiveness and feasibility of the method by means of a specific example. PMID:25197706
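The prospect value functions referred to above follow Kahneman and Tversky's standard form. A minimal sketch with Tversky and Kahneman's classic parameter estimates (alpha = beta = 0.88, lambda = 2.25 — these are the textbook values, not necessarily the paper's) is:

```python
def prospect_value(x, ref=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x relative to a reference point:
    concave for gains, convex and loss-averse (scaled by lam) for losses."""
    d = x - ref
    if d >= 0:
        return d ** alpha
    return -lam * ((-d) ** beta)

print(prospect_value(1.0))   # 1.0   (a unit gain)
print(prospect_value(-1.0))  # -2.25 (a unit loss weighs ~2.25x more)
```

In the paper's setting, the reference points would be the positive and negative ideal situation distances rather than zero.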
Quantum gauge confinement of multiple quarks based on the homogeneous 5D projection theory
K. W. Wong; G. Dreschhoff; H. Jungner
2015-09-24
A quick and simplified review of the 5D quantum field theory is presented. Topological mapping, which must preserve gauge invariance, is carried out in two ways, leading to the realization of the gauge transformation in the 5D space-time as two separate gauge constraints: one on the quark constituents of the multi-quark state, the other the quantum confinement imposed on the gluon potentials, which are formed from products of vector potentials generated by products of the fractionally charged quark currents. The procedure presented clearly shows that multi-quark states can be designed and that they can be verified by experiments, such as the reported penta-quark state. Based on these gauge constraints we propose the existence of 4-, 5- and 6-quark states.
Frequency Shift of Carbon-Nanotube-Based Mass Sensor Using Nonlocal Elasticity Theory
2010-01-01
The frequency equation of a carbon-nanotube-based cantilever sensor with an attached mass is derived analytically using nonlocal elasticity theory. From the equation, the relationship between the frequency shift of the sensor and the attached mass can be obtained. When the nonlocal effect is not taken into account, the variation of frequency shift with the attached mass on the sensor is compared with a previous study. The results show that the frequency shift of the sensor increases with the attached mass. When the attached mass is small compared with that of the sensor, the nonlocal effect is pronounced, and increasing the nonlocal parameter decreases the frequency shift of the sensor. In addition, when the attached mass is located closer to the free end, the frequency shift is more significant, making the sensor more sensitive. When the attached mass is small, a high sensitivity is obtained. PMID:21124623
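The qualitative trend (frequency shift grows with attached mass) already appears in the classical local-elasticity limit. The sketch below uses the textbook lumped single-degree-of-freedom spring-mass model of a cantilever, not the paper's nonlocal derivation, and ignores the position of the attached mass:

```python
import math

def natural_frequency(k, m_eff):
    """Natural frequency (Hz) of a lumped spring-mass cantilever model."""
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

def frequency_shift(k, m_sensor, m_attached):
    """Downward frequency shift caused by an attached mass
    (local-elasticity, lumped-mass approximation)."""
    return natural_frequency(k, m_sensor) - natural_frequency(k, m_sensor + m_attached)

k, m = 1.0, 1.0e-3  # illustrative stiffness (N/m) and effective mass (kg)
shifts = [frequency_shift(k, m, dm) for dm in (1e-6, 1e-5, 1e-4)]
print(shifts[0] < shifts[1] < shifts[2])  # True: shift increases with attached mass
```

The nonlocal parameter of the paper would modify the effective stiffness term, which is exactly the effect reported to reduce the shift for small attached masses.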
Phylogenetic analysis of DNA sequences based on k-word and rough set theory
NASA Astrophysics Data System (ADS)
Li, Chun; Yang, Yan; Jia, Meiduo; Zhang, Yingying; Yu, Xiaoqing; Wang, Changzhong
2014-03-01
Among alignment-free methods for sequence comparison, the model of k-word frequencies is a well-developed one. However, most existing word-based methods neglect relationships among k-word frequencies, while a few others focus on the correlation of k-words but ignore the word frequency itself. In this paper, we propose a new k-word method that addresses both problems. By means of characteristic sequences of a DNA sequence, we construct a 3×2^k-dimensional complete word-based vector. We then present a feature selection scheme based on rough set theory (RST) to extract the most informative k-words and use only these selected features to represent the DNA sequence. To evaluate the effectiveness of our method, we test it by phylogenetic analysis on three datasets. The first is used as a training set, from which the 869 top-ranked k-words are selected. The other two are used as testing sets. The results demonstrate that the proposed method captures more important information and is more efficient for molecular phylogenetic analysis.
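The k-word counting that underlies such vectors is straightforward. The sketch below computes plain overlapping k-mer frequencies over the alphabet {A, C, G, T}; it omits the paper's characteristic sequences and rough-set feature selection:

```python
from collections import Counter
from itertools import product

def kword_frequencies(seq, k):
    """Frequency of every overlapping k-word in a DNA sequence,
    returned as a dict over all 4**k possible words (zeros included)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {''.join(w): counts[''.join(w)] / total
            for w in product('ACGT', repeat=k)}

freqs = kword_frequencies('ACGTACGT', 2)  # 7 overlapping 2-words
print(freqs['AC'], freqs['CG'], freqs['TA'])  # 0.2857... 0.2857... 0.1428...
```

Distances between such frequency vectors (after feature selection) are what an alignment-free phylogenetic analysis would cluster on.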
Extended Hartree-Fock method based on pair density functional theory
NASA Astrophysics Data System (ADS)
Hetényi, Balázs; Hauser, Andreas W.
2008-04-01
A practical electronic structure method in which a two-body functional is the fundamental variable is constructed. The basic formalism of our method is equivalent to Hartree-Fock density matrix functional theory [M. Levy, in Density Matrices and Density Functionals, edited by R. Erdahl and V. H. Smith, Jr. (Reidel, Dordrecht, 1987)]. The implementation of the method consists of solving Hartree-Fock equations and using the resulting orbitals to calculate two-body corrections to account for correlation. The correction terms are constructed so that the energy of the system in the absence of external potentials can be made to correspond to approximate expressions for the energy of the homogeneous electron gas. In this work, the approximate expressions we use are based on the high-density limit of the homogeneous electron gas. Self-interaction is excluded from the two-body functional itself. It is shown that our pair density based functional does not suffer from the divergence present in many density functionals when homogeneous scaling is applied. Calculations based on our pair density functional lead to quantitative results for the correlation energies of atomic test cases.
Theory of nodal s±-wave pairing symmetry in the Pu-based 115 superconductor family
NASA Astrophysics Data System (ADS)
Das, Tanmoy; Zhu, Jian-Xin; Graf, Matthias J.
2015-02-01
The spin-fluctuation mechanism of superconductivity usually results in the presence of gapless or nodal quasiparticle states in the excitation spectrum. Nodal quasiparticle states are well established in copper-oxide and heavy-fermion superconductors, but not in iron-based superconductors. Here, we study the pairing symmetry and mechanism of a new class of plutonium-based high-Tc superconductors and predict the presence of a nodal s± wave pairing symmetry in this family. Starting from a density-functional theory (DFT) based electronic structure calculation we predict several three-dimensional (3D) Fermi surfaces in this 115 superconductor family. We identify the dominant Fermi surface ``hot-spots'' in the inter-band scattering channel, which are aligned along the wavevector Q = (π, π, π), where degeneracy could induce sign reversal of the pairing symmetry. Our calculation demonstrates that the s± wave pairing strength is stronger than the previously proposed d-wave pairing; more importantly, this pairing state allows for the existence of nodal quasiparticles. Finally, we predict the shape of the momentum- and energy-dependent magnetic resonance spectrum for the identification of this pairing symmetry.
Nanobatteries in redox-based resistive switches require extension of memristor theory
NASA Astrophysics Data System (ADS)
Valov, I.; Linn, E.; Tappertzhofen, S.; Schmelzer, S.; van den Hurk, J.; Lentz, F.; Waser, R.
2013-04-01
Redox-based nanoionic resistive memory cells are one of the most promising emerging nanodevices for future information technology with applications for memory, logic and neuromorphic computing. Recently, the serendipitous discovery of the link between redox-based nanoionic-resistive memory cells and memristors and memristive devices has further intensified the research in this field. Here we show on both a theoretical and an experimental level that nanoionic-type memristive elements are inherently controlled by non-equilibrium states resulting in a nanobattery. As a result, the memristor theory must be extended to fit the observed non-zero-crossing I-V characteristics. The initial electromotive force of the nanobattery depends on the chemistry and the transport properties of the materials system but can also be introduced during redox-based nanoionic-resistive memory cell operations. The emf has a strong impact on the dynamic behaviour of nanoscale memories, and thus, its control is one of the key factors for future device development and accurate modelling.
Kinematics and dynamics Hessian matrices of manipulators based on screw theory
NASA Astrophysics Data System (ADS)
Zhao, Tieshi; Geng, Mingchao; Chen, Yuhang; Li, Erwei; Yang, Jiantao
2015-03-01
The complexity of the kinematics and dynamics of a manipulator makes it necessary to simplify the modeling process. However, traditional representations cannot achieve this because they lack coordinate invariance; the coordinate-invariant method is therefore an important research issue. First, the rigid-body acceleration, the time derivative of the twist, is proved to be a screw, and its physical meaning is explained. Based on the twist and the rigid-body acceleration, the acceleration of the end-effector is expressed in a linear-bilinear form, and the kinematics Hessian matrix of the manipulator (represented by the Lie bracket) is deduced. Further, the Newton-Euler equation is rewritten in a linear-bilinear form, from which the dynamics Hessian matrix of a rigid body is obtained. The formulae and the dynamics Hessian matrix are proved to be coordinate invariant. Referring to the principle of virtual work, the dynamics Hessian matrix of a parallel manipulator is obtained and the detailed dynamic model is derived. An index of dynamical coupling based on the dynamics Hessian matrix is presented. Finally, a foldable parallel manipulator is taken as an example to validate the deduced kinematics and dynamics formulae. The screw theory based method can simplify the kinematics and dynamics of a manipulator, and the corresponding dynamics Hessian matrix can be used to evaluate the dynamical coupling of a manipulator.
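The Lie bracket from which such Hessian matrices are assembled has a closed form for twists xi = (omega, v) in se(3): [xi1, xi2] = (w1 x w2, w1 x v2 - w2 x v1). The sketch below checks this generic screw-theory identity in pure Python; it is not the paper's full Hessian construction:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lie_bracket(xi1, xi2):
    """Lie bracket of twists xi = (omega, v) in se(3):
    [xi1, xi2] = (w1 x w2, w1 x v2 - w2 x v1)."""
    (w1, v1), (w2, v2) = xi1, xi2
    w = cross(w1, w2)
    v = tuple(p - q for p, q in zip(cross(w1, v2), cross(w2, v1)))
    return (w, v)

# Unit rotations about x and y through the origin bracket to a rotation about z.
rot_x = ((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
rot_y = ((0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
print(lie_bracket(rot_x, rot_y))  # ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```

The bracket is antisymmetric, which is what makes the resulting Hessian expressions coordinate invariant under changes of frame.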
NASA Astrophysics Data System (ADS)
Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.
2015-06-01
Image segmentation is the foundation of further object-oriented image analysis, understanding and recognition. It is one of the key technologies in high resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high resolution remote sensing imagery is proposed, based on graph theory and the fractal net evolution approach (FNEA). First, an image is modelled as a weighted undirected graph, where nodes correspond to pixels and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. FNEA then starts from the initial object layer and pairwise merges neighbouring objects with the aim of minimizing the resulting summed heterogeneity. Furthermore, according to the character of different features in high resolution remote sensing images, three different merging criteria for image objects, based on spectral and spatial information, are adopted. Finally, compared with the commercial remote sensing software eCognition, the experimental results demonstrate that the algorithm is significantly more efficient and that its results maintain good feature boundaries.
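The pairwise-merge step can be sketched on a 1D signal. This is a deliberately simplified FNEA-style criterion, not the paper's algorithm: adjacent regions are fused greedily whenever the increase in summed squared deviation (a stand-in for spectral heterogeneity) stays below a scale parameter.

```python
def merge_regions(values, scale):
    """Greedy 1D region merging: repeatedly merge the adjacent pair of
    regions whose fusion increases the summed heterogeneity least, as long
    as that increase stays below `scale`. Heterogeneity increase is the
    size-weighted squared difference of the region means."""
    regions = [[v] for v in values]  # start from single-pixel regions
    while len(regions) > 1:
        best_i, best_cost = None, scale
        for i in range(len(regions) - 1):
            a, b = regions[i], regions[i + 1]
            ma, mb = sum(a) / len(a), sum(b) / len(b)
            # Exact increase in summed squared deviation when a and b fuse.
            cost = len(a) * len(b) / (len(a) + len(b)) * (ma - mb) ** 2
            if cost < best_cost:
                best_i, best_cost = i, cost
        if best_i is None:
            break  # every possible merge exceeds the scale parameter
        regions[best_i:best_i + 2] = [regions[best_i] + regions[best_i + 1]]
    return regions

print(merge_regions([10, 10, 11, 50, 51], scale=5.0))  # [[10, 10, 11], [50, 51]]
```

In 2D the same idea runs on the region adjacency graph of the initial graph-based object layer, with the paper's spectral/spatial criteria in place of this one.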
NASA Astrophysics Data System (ADS)
Fujita, Yasunori
2007-09-01
Reformulation of economics using physics has been carried out intensively, revealing many features of the asset market that were missed in classical economic theories. The present paper attempts to shed new light on this field by reformulating the international trade model using real option theory. Based on such a stochastic dynamic model, we examine how fluctuation of the foreign exchange rate affects the welfare of the exporting country.
Adapting Evidence-based Interventions using a Common Theory, Practices, and Principles
Rotheram-Borus, Mary Jane; Swendeman, Dallas; Becker, Kimberly D.
2013-01-01
Objective Hundreds of validated evidence-based intervention programs (EBIP) aim to improve families' well-being; however, most are not broadly adopted. As an alternative diffusion strategy, we created wellness centers to reach families' everyday lives with a prevention framework. Method At two wellness centers, one in a middle-class neighborhood and one in a low-income neighborhood, popular local activity leaders (instructors of martial arts, yoga, sports, music, dancing, zumba) and motivated parents were trained to be Family Mentors. Trainings focused on a framework that taught synthesized, foundational prevention science theory, practice elements, and principles, applied to specific content areas (parenting, social skills, and obesity). Family Mentors were then allowed to adapt scripts and activities based on their cultural experiences, but were closely monitored and supervised over time. The framework was implemented in a range of activities (summer camps, coaching) aimed at improving social, emotional, and behavioral outcomes. Results Successes and challenges are discussed for: 1) engaging parents and communities; 2) identifying and training Family Mentors to promote children's and families' well-being; and 3) gathering data for supervision, outcome evaluation, and continuous quality improvement (CQI). Conclusion To broadly diffuse prevention to families, far more experimentation is needed with alternative and engaging implementation strategies that are enhanced with knowledge harvested from researchers' past 30 years of experience creating EBIP. One strategy is to train local parents and popular activity leaders to apply robust prevention science theory, common practice elements, and principles of EBIP. More systematic evaluation of such innovations is needed. PMID:24079747
Statistical analysis of 4 types of neck whiplash injuries based on classical meridian theory.
Chen, Yemeng; Zhao, Yan; Xue, Xiaolin; Li, Hui; Wu, Xiuyan; Zhang, Qunce; Zheng, Xin; Wang, Tianfang
2015-01-01
As one component of the Chinese medicine meridian system, the meridian sinew (Jingjin, (see text), tendino-musculo) is specifically described as a basis for acupuncture treatment of the musculoskeletal system because of its dynamic attributes and tender point correlations. In recent decades, the therapeutic importance of the sinew meridian has been revalued in clinical application. Based on this theory, the authors have established therapeutic strategies for acupuncture treatment in Whiplash-Associated Disorders (WAD) by categorizing four types of neck symptom presentation. The advantage of this new system is that it makes it much easier for the clinician to find effective acupuncture points. This study attempts to establish the significance of the proposed therapeutic strategies by analyzing data collected from a clinical survey of various WAD using non-supervised statistical methods, such as correlation analysis, factor analysis, and cluster analysis. The clinical survey data successfully verified discrete characteristics of the four neck syndromes, based upon the range of motion (ROM) and tender point location findings. A summary of the relationships among the symptoms of the four neck syndromes showed statistically significant correlation coefficients (P < 0.01 or P < 0.05), especially with regard to ROM. Furthermore, factor and cluster analyses yielded a total of 11 categories of general symptoms, which implies that the syndrome factors are more related to the Liver, as originally described in classical theory. The hypothesis of meridian sinew syndromes in WAD is clearly supported by the statistical analysis of the clinical trials. This new discovery should be beneficial in improving therapeutic outcomes. PMID:25980049
Spectroscopic and density functional theory investigation of novel Schiff base complexes
NASA Astrophysics Data System (ADS)
Hassan, Walid M. I.; Zayed, Ehab M.; Elkholy, Asmaa K.; Moustafa, H.; Mohamed, Gehad G.
2013-02-01
Novel Schiff base (H2L, 1,2-bis[(2-(2-mercaptophenylimino)methyl)phenoxy]ethane), derived from condensation of a bisaldehyde and 2-aminothiophenol, was prepared in a 1:2 molar ratio. The ligand and its metal complexes are fully characterized with analytical and spectroscopic techniques. The metal complexes with Cr(III), Mn(II), Fe(III), Co(II), Ni(II), Cu(II), Zn(II) and Th(IV) have been prepared and characterized by elemental analyses, IR and 1H-NMR spectroscopy, and thermal and magnetic measurements. The results suggest that the Schiff base is a bivalent anion with hexadentate OONNSS donors derived from the etheric oxygen (O, O'), azomethine nitrogen (N, N') and thiophenolic sulphur (S, S'). The formulae of the complexes were found to be [ML]·xH2O (M = Mn(II) (x = 0), Co(II) (x = 1), Ni(II) (x = 1), Cu(II) (x = 2) and Zn(II) (x = 0)) and [ML]·nCl (M = Cr(III) (n = 1), Fe(III) (n = 1) and Th(IV) (n = 2)). The thermogravimetric analysis of the complexes shows metal oxide remaining as the final product at 700-1000 °C. Density functional theory at the B3LYP/6-31G* level was used to investigate the molecular geometry, Mulliken atomic charges and energetics. The synclinal conformer was found to be responsible for complex formation, and the calculations showed that the ligand exerts a weak field. Structural deformation and dihedral angle rotation during complexation were investigated, and the binding energy of each complex was calculated. The calculated results are in good agreement with the experimental data.
Redox potentials and acidity constants from density functional theory based molecular dynamics.
Cheng, Jun; Liu, Xiandong; VandeVondele, Joost; Sulpizi, Marialore; Sprik, Michiel
2014-12-16
CONSPECTUS: All-atom methods treat solute and solvent at the same level of electronic structure theory and statistical mechanics. All-atom computation of acidity constants (pKa) and redox potentials is still a challenge. In this Account, we review such a method combining density functional theory based molecular dynamics (DFTMD) and free energy perturbation (FEP) methods. The key computational tool is an FEP-based method for reversible insertion of a proton or electron into a periodic DFTMD model system. The free energy of insertion (work function) is computed by thermodynamic integration of vertical energy gaps obtained from total energy differences. The problem of the loss of a physical reference for ionization energies under periodic boundary conditions is solved by comparing with the proton work function computed for the same supercell. The scheme acts as a computational hydrogen electrode, and the DFTMD redox energies can be directly compared with experimental redox potentials. Consistent with the closed-shell nature of acid dissociation, pKa estimates computed using the proton insertion/removal scheme are found to be significantly more accurate than the redox potential calculations. This enables us to separate the DFT error from other sources of uncertainty such as finite system size and sampling errors. Drawing an analogy with charged defects in solids, we trace the error in redox potentials back to underestimation of the energy gap of the extended states of the solvent. Accordingly, the improvement in the redox potential as calculated by hybrid functionals is explained as a consequence of the opening up of the band gap by the Hartree-Fock exchange component in hybrids. Test calculations for a number of small inorganic and organic molecules show that the hybrid functional implementation of our method can reproduce acidity constants with an uncertainty of 1-2 pKa units (0.1 eV). The error for redox potentials is of the order of 0.2 V. PMID:25365148
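The thermodynamic-integration step can be sketched numerically: the insertion free energy is the integral over the coupling parameter lambda of the ensemble-averaged vertical energy gap, dA = integral from 0 to 1 of <dE>_lambda d(lambda), approximated below by the trapezoidal rule on a few lambda points. This is a generic TI sketch with made-up gap values, not the DFTMD machinery itself:

```python
def thermodynamic_integration(lambdas, mean_gaps):
    """Trapezoidal estimate of dA = integral of <dE>_lambda over lambda,
    given ensemble-averaged vertical energy gaps at sampled lambda values."""
    area = 0.0
    for (l0, g0), (l1, g1) in zip(zip(lambdas, mean_gaps),
                                  zip(lambdas[1:], mean_gaps[1:])):
        area += 0.5 * (g0 + g1) * (l1 - l0)  # trapezoid on each interval
    return area

# A gap varying linearly in lambda is integrated exactly by trapezoids:
lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]
gaps = [2.0 + 3.0 * l for l in lambdas]  # illustrative <dE>_lambda values (eV)
print(thermodynamic_integration(lambdas, gaps))  # 3.5 (= 2 + 3/2)
```

In the actual scheme each mean gap would come from a DFTMD trajectory at fixed lambda, and the result would be referenced against the proton work function of the same supercell.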
Saber, Fatemeh; Shanazi, Hossein; Sharifirad, Gholamreza; Hasanzadeh, Akbar
2014-01-01
Background: A sedentary lifestyle has been recognized as a serious problem in today's Iranian society. Promoting a lifestyle with increased physical activity and prevention of cardiovascular disease (CVD) is imperative. The purpose of this study was to identify the determinants of physical activity among housewives of Nain city in 2012, based on the theory of planned behavior. Materials and Methods: In this cross-sectional study, 120 housewives were selected by simple random sampling. The data collection tool was a four-part questionnaire designed on the basis of a standardized and a purpose-made questionnaire; it covered awareness, the constructs of the theory of planned behavior, and physical activity. Data analysis was performed using SPSS version 18 and the associated statistical tests. Findings: The 120 housewives under study had a mean age of 34.58 ± 6.86 years. The mean scores of the awareness, attitude, motivation to perform, subjective norms, and perceived behavioral control variables were 74.1 ± 18.5, 82.6 ± 12.1, 59.4 ± 21.7, 63.2 ± 21.2, and 48.1 ± 12.9, respectively. There was a significant relationship between the women's motivation for physical activity and knowledge (P = 0.02), attitude (P = 0.04), subjective norms (P = 0.002), perceived behavioral control (P = 0.001), and physical activity (P = 0.04). Conclusions: It seems that the housewives, despite being aware of and having a positive attitude toward the benefits of physical activity, had a poor lifestyle. Further studies may help identify the causes of this issue and the barriers to physical activity, and inform plans for improving physical activity, in order to promote women's health, which has a significant role in family and community health. PMID:25250360
Electromagnetic information theory and subspace-based signal processing applications in imaging
NASA Astrophysics Data System (ADS)
Gruber, Fred K.
The first part of the dissertation investigates the information-theoretic characterization, via Shannon's information capacity, of wave radiation and wireless propagation systems. Specifically, this part of the dissertation derives, from the fundamental physical point of view of Maxwell's equations describing electromagnetic fields, the Shannon information capacity of space-time wireless channels formed by electromagnetic sources and receivers in a known background medium. The theory is developed first for the case of sources working at a fixed frequency and is expanded later to the more general case of temporally bandlimited systems. In the bandlimited case, we consider separately the two cases of time-limited and essentially bandlimited systems and of purely bandlimited systems. The developments take into account the physical radiated power constraint in addition to a constraint on the source L2 norm. Based on such radiated power and current L2 norm constraints, we derive the Shannon information capacity of canonical wireless and antenna systems in free space, for a given additive Gaussian noise level, as well as an associated number of degrees of freedom resulting from such capacity calculations. The derived results also illustrate, from a new information-theoretic point of view, the transition from near to far fields. The second part of the dissertation describes a novel technique for the shape reconstruction of extended scatterers from measurements of the scattering or response matrix, based on prior work co-authored by the present author. These previous results are shown to be related to the concepts of angles and distances between subspaces and are used to propose new imaging and shape reconstruction approaches for the support of an unknown extended scatterer, assuming exact scattering theory. Initially, we present a modification of the conventional MUSIC imaging approach that avoids the need to determine the numerical rank of the scattering matrix.
Then we consider a different problem where given a grid we try to determine whether each of the points of the grid is inside the support of the scatterer or not. In this last application we consider two approaches: one based on the modified MUSIC imaging and the other based on the level set method.
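The subspace idea behind MUSIC imaging can be illustrated with a toy scattering matrix: a trial point belongs to the scatterer support when its Green's-function test vector has (nearly) no component in the noise subspace. The sketch below is a generic MUSIC indicator, not the dissertation's modified algorithm; the rank-1 matrix and test vectors are hypothetical.

```python
import numpy as np

def music_image(K, g, n_signal):
    """Generic MUSIC-type indicator built from a response/scattering matrix K.
    g is the test vector for a trial grid point; n_signal is the assumed
    signal-subspace dimension (numerical rank of K). Points inside the
    scatterer support give large indicator values."""
    U, s, Vh = np.linalg.svd(K)
    noise_basis = U[:, n_signal:]                # noise-subspace basis vectors
    proj = noise_basis.conj().T @ g              # component of g in noise subspace
    return 1.0 / (np.linalg.norm(proj) + 1e-12)  # small projection -> large peak

# Toy rank-1 "scattering matrix" for a single point scatterer along a:
a = np.array([1.0, 0.0, 0.0])
K = np.outer(a, a)
inside = music_image(K, a, n_signal=1)                        # g in signal subspace
outside = music_image(K, np.array([0.0, 1.0, 0.0]), n_signal=1)
```

The modification discussed in the abstract addresses the practical weakness visible here: the indicator depends on choosing `n_signal`, the numerical rank of K, correctly.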
Transient Analysis of Data Traffic in Cognitive Radio Networks: A Non-equilibrium Statistical
Li, Husheng
to better design the networking protocols. Note that, in the field of queuing theory, the queuing process with service interruptions has been widely studied [5], [8], [9]. The conclusions from queuing theory then have
Numerically-based ducted propeller design using vortex lattice lifting line theory
Stubblefield, John M
2008-01-01
This thesis used vortex lattice lifting line theory to model an axisymmetrical-ducted propeller with no gap between the duct and the propeller. The theory required to model the duct and its interaction with the propeller ...
Homayounfar, Mehran; Zomorodian, Mehdi; Martinez, Christopher J; Lai, Sai Hin
2015-01-01
So far, many optimization models based on Nash Bargaining Theory have been developed for reservoir operation. Most of them aim to provide practical and efficient solutions for water allocation in order to alleviate conflicts among water users. These models can be discussed from two viewpoints: (i) having a discrete nature; and (ii) working on an annual basis. Although discrete dynamic game models provide appropriate reservoir operating policies, their discretization of variables increases the run time and causes dimensionality problems. In this study, two monthly based non-discrete optimization models based on the Nash Bargaining Solution are developed for a reservoir system. In the first model, based on a constrained state formulation, the first and second moments (mean and variance) of the state variable (water level in the reservoir) are calculated. Using the moment equations as constraints, the long-term utilities of the reservoir manager and water users are optimized. The second model is a dynamic approach structured on continuous state Markov decision models. The corresponding solution, based on the collocation method, is structured for a reservoir system. In this model, the reward function is defined based on the Nash Bargaining Solution. Indeed, it is used to yield equilibrium in every proper sub-game, thereby satisfying the Markov perfect equilibrium. Both approaches are applicable for water allocation in arid and semi-arid regions. A case study was carried out at the Zayandeh-Rud river basin located in central Iran to assess the effectiveness of the presented methods. The results are compared with the results of an annual form of dynamic game, a classical stochastic dynamic programming model (e.g. the Bayesian Stochastic Dynamic Programming model, BSDP), and a discrete stochastic dynamic game model (PSDNG).
By comparing the results of the alternative methods, it is shown that both models can properly tackle conflict issues in water allocation under water scarcity. Compared with annual dynamic game models, the presented models also give superior results in practice. Furthermore, unlike discrete dynamic game models, the presented models can significantly reduce the runtime, thereby avoiding dimensionality problems. PMID:26641095
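The Nash Bargaining Solution underlying both models maximizes the product of the players' utility gains over their disagreement payoffs. A minimal two-user sketch, with a fixed water budget, hypothetical concave utilities, and zero disagreement points (not the paper's reservoir model), can be solved by a simple grid search:

```python
import numpy as np

# Hypothetical two-user water allocation: split budget W between users.
W, d1, d2 = 10.0, 0.0, 0.0            # water budget and disagreement payoffs

def u1(x):
    return np.sqrt(x)                 # user 1's utility of allocation x

def u2(x):
    return np.sqrt(W - x)             # user 2 receives the remainder

# Nash Bargaining Solution: maximize the product of utility gains.
x_grid = np.linspace(0.0, W, 10001)
nash_product = (u1(x_grid) - d1) * (u2(x_grid) - d2)
x_star = x_grid[np.argmax(nash_product)]   # symmetric utilities -> x* = W/2
```

The continuous-state models in the paper embed this same objective as the reward function of a Markov decision problem, replacing the grid search with a collocation-based dynamic solution.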
The Model-Based View of Scientific Theories and the Structuring of School Science Programmes
ERIC Educational Resources Information Center
Develaki, Maria
2007-01-01
Model theory in contemporary philosophy of science interprets scientific theories as sets of models, and contributes significantly to the understanding of the relation between theories, models, and the real world. The clarification of this relation is fundamental for the understanding of the nature of scientific methods and scientific knowledge…
ERIC Educational Resources Information Center
Natker, Elana; Baker, Susan S.; Auld, Garry; McGirr, Kathryn; Sutherland, Barbara; Cason, Katherine L.
2015-01-01
The project reported here served to assess a curriculum for EFNEP to ensure theory compliance and content validity. Adherence to Adult Learning Theory and Social Cognitive Theory tenets was determined. A curriculum assessment tool was developed and used by five reviewers to assess initial and revised versions of the curriculum. T-tests for…
Multi-configuration time-dependent density-functional theory based on range separation.
Fromager, Emmanuel; Knecht, Stefan; Jensen, Hans Jørgen Aa.
2013-02-28
Multi-configuration range-separated density-functional theory is extended to the time-dependent regime. An exact variational formulation is derived. The approximation, which consists in combining a long-range Multi-Configuration Self-Consistent Field (MCSCF) treatment with an adiabatic short-range density-functional theory (DFT) description, is then considered. The resulting time-dependent multi-configuration short-range DFT (TD-MC-srDFT) model is applied to the calculation of singlet excitation energies in H2, Be, and ferrocene, considering both short-range local density (srLDA) and generalized gradient (srGGA) approximations. As expected, when modeling long-range interactions with the MCSCF model instead of the adiabatic Buijse-Baerends density-matrix functional as recently proposed by Pernal [J. Chem. Phys. 136, 184105 (2012)], the descriptions of both the 1(1)D doubly-excited state in Be and the 1(1)Σu(+) state in the stretched H2 molecule are improved, although the latter is still significantly underestimated. Exploratory TD-MC-srDFT/GGA calculations for ferrocene yield, in general, excitation energies at least as good as TD-DFT using the Coulomb attenuated method based on the three-parameter Becke-Lee-Yang-Parr functional (TD-DFT/CAM-B3LYP), and superior to wave-function (TD-MCSCF, symmetry adapted cluster-configuration interaction) and TD-DFT results based on LDA, GGA, and hybrid functionals. PMID:23464134
Intention to use hearing aids: a survey based on the theory of planned behavior
Meister, Hartmut; Grugel, Linda; Meis, Markus
2014-01-01
Objective To determine the intention to use hearing aids (HAs) by applying the theory of planned behavior (TPB). Design The TPB is a widely used decision-making model based on three constructs hypothesized to influence the intention to perform a specific behavior; namely, “attitude toward the behavior”, “subjective norm”, and “behavioral control”. The survey was based on a TPB-specific questionnaire addressing factors relevant to HA provision. Study sample Data from 204 individuals reporting hearing problems were analyzed. Different subgroups were established according to the stage of their hearing help-seeking. Results The TPB models’ outcome depended on the subgroup. The intention of those participants who had recognized their hearing problems but had not yet consulted an ear, nose, and throat specialist was largely dominated by the “subjective norm” construct, whereas those who had already consulted an ear, nose, and throat specialist or had already tried out HAs were significantly influenced by all constructs. The intention of participants who already owned HAs was clearly less affected by the “subjective norm” construct but was largely dominated by their “attitude toward HAs”. Conclusion The intention to use HAs can be modeled on the basis of the constructs “attitude toward the behavior”, “subjective norm”, and “behavioral control”. Individual contribution of the constructs to the model depends on the patient’s stage of hearing help-seeking. The results speak well for counseling strategies that explicitly consider the individual trajectory of hearing help-seeking. PMID:25258520
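The TPB structure described above, intention driven by the three constructs with stage-dependent influence, can be sketched as a simple linear model. The weights and construct scores below are hypothetical illustrations of the reported pattern, not the survey's fitted coefficients.

```python
# Minimal linear TPB sketch: intention as a weighted combination of
# attitude, subjective norm, and perceived behavioral control (PBC).
def tpb_intention(attitude, subjective_norm, pbc, w):
    """w = (w_att, w_sn, w_pbc): stage-specific construct weights."""
    return w[0] * attitude + w[1] * subjective_norm + w[2] * pbc

# Hypothetical construct scores for one respondent (0..1 scale):
att, sn, pbc = 0.6, 0.9, 0.5

# Pre-consultation stage: intention dominated by the subjective norm.
pre = tpb_intention(att, sn, pbc, (0.1, 0.7, 0.2))
# HA owners: intention dominated by attitude toward HAs.
owner = tpb_intention(att, sn, pbc, (0.7, 0.1, 0.2))
```

In the actual study the weights come from fitting the TPB questionnaire data per subgroup; the sketch only shows how the same scores yield different intentions under stage-specific weighting.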
Redox potentials and pKa for benzoquinone from density functional theory based molecular dynamics
NASA Astrophysics Data System (ADS)
Cheng, Jun; Sulpizi, Marialore; Sprik, Michiel
2009-10-01
The density functional theory based molecular dynamics (DFTMD) method for the computation of redox free energies presented in previous publications and the more recent modification for computation of acidity constants are reviewed. The method uses a half reaction scheme based on reversible insertion/removal of electrons and protons. The proton insertion is assisted by restraining potentials acting as chaperones. The procedure for relating the calculated deprotonation free energies to Brønsted acidities (pKa) and the oxidation free energies to electrode potentials with respect to the normal hydrogen electrode is discussed in some detail. The method is validated in an application to the reduction of aqueous 1,4-benzoquinone. The conversion of hydroquinone to quinone can take place via a number of alternative pathways consisting of combinations of acid dissociations, oxidations, or dehydrogenations. The free energy changes of all elementary steps (ten in total) are computed. The accuracy of the calculations is assessed by comparing the energies of different pathways for the same reaction (Hess's law) and by comparison to experiment. This two-sided test enables us to separate the errors associated with the restrictions on length and time scales accessible to DFTMD from the errors introduced by the DFT approximation. It is found that the DFT approximation is the main source of error for oxidation free energies.
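The Hess's law check described above can be made concrete: summed step free energies along alternative pathways between the same two end states must agree, and the residual gauges the sampling and finite-size errors independently of experiment. The step values below are hypothetical, not the paper's computed numbers.

```python
# Two alternative (hypothetical) pathways for the same overall reaction,
# each step's free energy (eV) computed independently by DFTMD:
dA_path_a = sum([0.35, 0.92, 0.58, 0.71])   # dissociation-first route
dA_path_b = sum([0.41, 0.88, 0.55, 0.74])   # oxidation-first route

# Hess's law: the totals should be identical; the residual is an internal
# error estimate that does not require experimental reference data.
hess_residual = abs(dA_path_a - dA_path_b)
```

Combining this internal closure test with comparison to experimental redox potentials is what lets the authors attribute the remaining discrepancy to the DFT approximation itself.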
Development of theory-based health messages: three-phase programme of formative research
Epton, Tracy; Norman, Paul; Harris, Peter; Webb, Thomas; Snowsill, F. Alexandra; Sheeran, Paschal
2015-01-01
Online health behaviour interventions have great potential but their effectiveness may be hindered by a lack of formative and theoretical work. This paper describes the process of formative research to develop theoretically and empirically based health messages that are culturally relevant and can be used in an online intervention to promote healthy lifestyle behaviours among new university students. Drawing on the Theory of Planned Behaviour, a three-phase programme of formative research was conducted with prospective and current undergraduate students to identify (i) modal salient beliefs (the most commonly held beliefs) about fruit and vegetable intake, physical activity, binge drinking and smoking, (ii) which beliefs predicted intentions/behaviour and (iii) reasons underlying each of the beliefs that could be targeted in health messages. Phase 1, conducted with 96 pre-university college students, elicited 56 beliefs about the behaviours. Phase 2, conducted with 3026 incoming university students, identified 32 of these beliefs that predicted intentions/behaviour. Phase 3, conducted with 627 current university students, elicited 102 reasons underlying the 32 beliefs, to be used to construct health messages that bolster or challenge these beliefs. The three-phase programme of formative research provides researchers with an example of how to develop health messages with a strong theoretical and empirical base for use in health behaviour change interventions. PMID:24504361