NASA Astrophysics Data System (ADS)
Tucker, Laura Jane
Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis in agar plates forms two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, no suitable computational approach was known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.
Integrated care management: aligning medical call centers and nurse triage services.
Kastens, J M
1998-01-01
Successful integrated delivery systems must aggressively design new approaches to managing patient care. Implementing a comprehensive care management model to coordinate patient care across the continuum is essential to improving patient care and reducing costs. The practice of telephone nursing, and the need for experienced registered nurses to staff medical call centers, nurse triage centers, and outbound telemanagement, is expanding as more full-risk capitated managed care contracts are signed. As health systems design their new care delivery approaches and care management models, medical call centers will be an integral part of managing demand for services, chronic illnesses, and prevention strategies.
Potential Paradigms and Possible Problems for CALL.
ERIC Educational Resources Information Center
Phillips, Martin
1987-01-01
Describes three models of CALL (computer assisted language learning) activity--games, the expert system, and the prosthetic approaches. A case is made for CALL development within a more instrumental view of the role of computers. (Author/CB)
Teenage Pregnancy Prevention and Adolescents' Sexual Outcomes: An Experiential Approach
ERIC Educational Resources Information Center
Somers, Cheryl L.
2006-01-01
This study evaluates the effectiveness of an experiential approach to teen pregnancy (TP) prevention called "Baby Think It Over," a computerized infant simulator, on adolescents' attitudes and behaviors regarding teen pregnancy and sexuality. Recently, a more realistic model called "Real Care Baby" was developed. The small amount of research on…
Risk adjustment model of credit life insurance using a genetic algorithm
NASA Astrophysics Data System (ADS)
Saputra, A.; Sukono; Rusyaman, E.
2018-03-01
In managing the risk of credit life insurance, an insurance company should understand the character of the risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to designing the distribution model of claims over the insurance period, i.e., the collective risk model and the individual risk model. In the collective risk model, a claim that arises when a risk occurs is called an individual claim, and the accumulation of individual claims during an insurance period is called the aggregate claim. The aggregate claim model may be formed from the size and the number of individual claims. The questions addressed are how insurance risk can be measured with the premium model approach and whether this approach is appropriate for estimating potential future losses. To solve this problem, a genetic algorithm with roulette wheel selection is used.
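As a rough illustration of the selection operator named in the abstract, the following is a minimal sketch of roulette wheel (fitness-proportionate) selection in a genetic algorithm. The encoding of individuals as claim-distribution parameters and the fitness function are placeholder assumptions, not details taken from the paper.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Fitness-proportionate selection: pick one individual with
    probability proportional to its (non-negative) fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return individual
    return population[-1]  # numerical fall-through

# Hypothetical use: each individual encodes two parameters of a claim
# distribution; fitness is higher the closer the parameters are to some
# target implied by observed aggregate claims (made-up values).
population = [[random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)] for _ in range(20)]
fitnesses = [1.0 / (1.0 + abs(a - 2.0) + abs(b - 1.0)) for a, b in population]
parent = roulette_wheel_select(population, fitnesses)
print(parent)
```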
Generalizability in Item Response Modeling
ERIC Educational Resources Information Center
Briggs, Derek C.; Wilson, Mark
2007-01-01
An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random…
Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.
Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C
2015-01-01
Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator, and no unmeasured confounders that also moderate the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.
Adaptation of warrant price with Black Scholes model and historical volatility
NASA Astrophysics Data System (ADS)
Aziz, Khairu Azlan Abd; Idris, Mohd Fazril Izhar Mohd; Saian, Rizauddin; Daud, Wan Suhana Wan
2015-05-01
This project discusses the pricing of warrants in Malaysia. The Black-Scholes model with a non-dividend approach and a linear interpolation technique was applied to price the call warrants. Three call warrants listed on Bursa Malaysia were selected randomly from UiTM's Datastream. The findings show that the volatility of each call warrant differs from the others. We used historical volatility, which describes the extent to which the underlying share price is expected to fluctuate within a period. The Black-Scholes price obtained from the model is compared with the actual market price. Mispricing of the call warrants contributes to under- or overvaluation. Other variables such as the interest rate, time to maturity, exercise price, and underlying stock price are involved in pricing call warrants as well as in measuring their moneyness.
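For concreteness, here is a minimal sketch of the standard Black-Scholes price of a European call on a non-dividend-paying stock, with volatility estimated from historical log returns. The price series and parameter values are illustrative assumptions, not data from the paper.

```python
import math
from statistics import stdev

def historical_volatility(prices, periods_per_year=252):
    """Annualized volatility estimated from daily log returns."""
    log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return stdev(log_returns) * math.sqrt(periods_per_year)

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call on a non-dividend stock."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Illustrative numbers only (not from the paper).
prices = [1.00, 1.02, 1.01, 1.05, 1.04, 1.08, 1.07]
sigma = historical_volatility(prices)
print(bs_call(S=1.07, K=1.00, r=0.03, sigma=sigma, T=0.5))
```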
From TPACK-in-Action Workshops to Classrooms: CALL Competency Developed and Integrated
ERIC Educational Resources Information Center
Tai, Shu-Ju Diana
2015-01-01
This study investigated the impact of a CALL teacher education workshop guided by the TPACK-in-Action model (Tai, 2013). This model is framed within Technological Pedagogical Content Knowledge (TPACK, Mishra & Koehler, 2006) and advocates a learning-by-doing approach (Chapelle & Hegelheimer, 2004) to understand how English teachers develop…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
On science versus engineering in hydrological modelling
NASA Astrophysics Data System (ADS)
Melsen, Lieke
2017-04-01
It is always stressed that hydrological modelling is very important, to prevent floods, to mitigate droughts, to ensure food production or nature conservation. All very true, but I believe that focussing so much on the application of our knowledge (which I call `the engineering approach') does not stimulate thorough system understanding (which I call `the scientific approach'). In many studies, science and engineering approaches are mixed, which results in large uncertainty, e.g. due to a lack of system understanding. To what extent engineering and science approaches are mixed depends on the philosophy of science of the researcher; verificationism seems more closely related to engineering than falsificationism or Bayesianism does. In order to grow our scientific knowledge, which means increasing our understanding of the system, we need to be more critical towards the models that we use, but also recognize all the processes that influence the hydrological cycle. In an era called 'The Anthropocene' the influence of humans on the water system can no longer be neglected, and if we choose a scientific approach we have to account for human-induced processes. Summarizing, I believe that we have to account for human impact on the hydrological system, but we have to resist the temptation to directly quantify the hydrological impact on the human system.
ERIC Educational Resources Information Center
Aslan, Burak Galip; Öztürk, Özlem; Inceoglu, Mustafa Murat
2014-01-01
Considering the increasing importance of adaptive approaches in CALL systems, this study implemented a machine learning based student modeling middleware with Bayesian networks. The profiling approach of the student modeling system is based on Felder and Silverman's Learning Styles Model and Felder and Soloman's Index of Learning Styles…
NASA Astrophysics Data System (ADS)
Unger, André J. A.
2010-02-01
This work is the first installment in a two-part series, and focuses on the development of a numerical PDE approach to price components of a Bermudan-style callable catastrophe (CAT) bond. The bond is based on two underlying stochastic variables: the PCS index, which posts quarterly estimates of industry-wide hurricane losses, and a single-factor CIR interest rate model for the three-month LIBOR. The aggregate PCS index is analogous to losses claimed under traditional reinsurance in that it is used to specify a reinsurance layer. The proposed CAT bond model contains a Bermudan-style call feature designed to allow the reinsurer to minimize their interest rate risk exposure on making substantial fixed coupon payments using capital from the reinsurance premium. Numerical PDE methods are the fundamental strategy for pricing early-exercise constraints, such as the Bermudan-style call feature, into contingent claim models. Therefore, the objective and unique contribution of this first installment in the two-part series is to develop a formulation and discretization strategy for the proposed CAT bond model utilizing a numerical PDE approach. Object-oriented code design is fundamental to the numerical methods used to aggregate the PCS index and implement the call feature. Therefore, object-oriented design issues that relate specifically to the development of a numerical PDE approach for the component of the proposed CAT bond model that depends on the PCS index and LIBOR are described here. Formulation, numerical methods and code design issues that relate to aggregating the PCS index and introducing the call option are the subject of the companion paper.
Pohjola, Mikko V; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T
2013-06-26
The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives to assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge.
AIR QUALITY MODELING AT COARSE-TO-FINE SCALES IN URBAN AREAS
Urban air toxics control strategies are moving towards a community based modeling approach, with an emphasis on assessing those areas that experience high air toxic concentration levels, the so-called "hot spots". This approach will require information that accurately maps and...
A Systemic Approach: The Ultimate Choice for Gifted Education
ERIC Educational Resources Information Center
Tao, Ting; Shi, Jiannong
2012-01-01
In "Towards a systemic theory of gifted education," A. Ziegler and S.N. Phillipson have proposed a systemic approach to gifted education. For this approach, they built a model that they call an "actiotope" model. As they explained in the article, an actiotope consists of the acting individual and the environment with which he or she interacts. The…
Computer-Aided Air-Traffic Control In The Terminal Area
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
1995-01-01
Developmental computer-aided system for automated management and control of arrival traffic at large airport includes three integrated subsystems: one subsystem called the Traffic Management Advisor, another called the Descent Advisor, and a third called the Final Approach Spacing Tool. A database that includes current wind measurements and mathematical models of the performance of various aircraft types contributes to the effective operation of the system.
Black-Scholes finite difference modeling in forecasting of call warrant prices in Bursa Malaysia
NASA Astrophysics Data System (ADS)
Mansor, Nur Jariah; Jaffar, Maheran Mohd
2014-07-01
Call warrant is a type of structured warrant in Bursa Malaysia. It gives the holder the right to buy the underlying share at a specified price within a limited period of time. The issuer of the structured warrants usually uses the European style, in which the call warrant is exercised on the maturity date. A warrant is very similar to an option. Usually, practitioners in the financial field use the Black-Scholes model to value the option. The Black-Scholes equation is hard to solve analytically. Therefore the finite difference approach is applied to approximate the value of the call warrant prices. A central-in-time and central-in-space scheme is produced to approximate the value of the call warrant prices. It allows the warrant holder to forecast the value of the call warrant prices before the expiry date.
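As an illustration of the finite-difference idea, the sketch below steps the Black-Scholes PDE backwards from the call payoff using central differences in the asset-price dimension and a simple explicit time step. This is not the paper's exact central-in-time scheme, and the grid sizes, strike, volatility, and rates are assumptions chosen only for the example.

```python
import numpy as np

def bs_call_fd(S_max, K, r, sigma, T, M=200, N=4000):
    """Explicit finite-difference solution of the Black-Scholes PDE for a
    European-style call, stepping backwards from the payoff at maturity.
    Central differences in S; a simpler illustrative scheme, not the
    paper's central-in-time scheme."""
    dS = S_max / M
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(S - K, 0.0)                  # payoff at maturity
    for n in range(1, N + 1):
        tau = n * dt                            # time remaining to maturity
        V_new = V.copy()
        V_S = (V[2:] - V[:-2]) / (2 * dS)       # central difference in S
        V_SS = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2
        V_new[1:-1] = V[1:-1] + dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * V_SS
                                      + r * S[1:-1] * V_S - r * V[1:-1])
        V_new[0] = 0.0                          # call is worthless at S = 0
        V_new[-1] = S_max - K * np.exp(-r * tau)  # deep in-the-money boundary
        V = V_new
    return S, V

S_grid, prices = bs_call_fd(S_max=4.0, K=1.0, r=0.03, sigma=0.3, T=0.5)
print(np.interp(1.07, S_grid, prices))          # price at the current share price
```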
Supermodeling With A Global Atmospheric Model
NASA Astrophysics Data System (ADS)
Wiegerinck, Wim; Burgers, Willem; Selten, Frank
2013-04-01
In weather and climate prediction studies it often turns out that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model towards the solutions of all other models in the ensemble. With a suitable choice of the connection strengths the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted-averaged model, weighted SUMO: at each time step all models in the ensemble calculate their tendencies, these tendencies are weighted-averaged, and the state is integrated one time step into the future with this weighted-averaged tendency. It was shown that if the connected SUMO synchronizes perfectly, it follows the weighted-averaged trajectory and both approaches yield the same solution. In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
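To make the nudging idea concrete, here is a minimal sketch of a connected ensemble of three imperfect Lorenz-63 models, each nudged towards the others (with equal connection strengths this reduces to nudging towards the ensemble mean). The parameter perturbations, the connection strength K, and the time step are illustrative assumptions, not values from the study.

```python
import numpy as np

def lorenz_tendency(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# An ensemble of "imperfect" models (perturbed parameters) standing in for
# the atmospheric models of the abstract; values are illustrative only.
params = [(10.0, 28.0, 8.0 / 3.0), (9.0, 30.0, 2.5), (11.0, 26.0, 3.0)]
K = 5.0          # nudging (connection) strength, a tunable assumption
dt = 0.005
states = [np.array([1.0, 1.0, 1.0]) + 0.01 * i for i in range(len(params))]

for _ in range(20000):
    mean_state = np.mean(states, axis=0)
    new_states = []
    for state, (sigma, rho, beta) in zip(states, params):
        tendency = lorenz_tendency(state, sigma, rho, beta)
        # connected-SUMO idea: each model is nudged towards the others,
        # here expressed as nudging towards the ensemble mean
        nudge = K * (mean_state - state)
        new_states.append(state + dt * (tendency + nudge))
    states = new_states

print(np.mean(states, axis=0))   # the (near-)synchronized ensemble state
```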
Modeling Learning Processes in Lexical CALL.
ERIC Educational Resources Information Center
Goodfellow, Robin; Laurillard, Diana
1994-01-01
Studies the performance of a novice Spanish student using a Computer-assisted language learning (CALL) system designed for vocabulary enlargement. Results indicate that introspective evidence may be used to validate performance data within a theoretical framework that characterizes the learning approach as "surface" or "deep." (25 references)…
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of their severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds for distortions involving energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
Pohjola, Mikko V.; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T.
2013-01-01
The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives to assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge. PMID:23803642
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible.
ERIC Educational Resources Information Center
Eastment, David
Despite the evolution of software for computer-assisted language learning (CALL), teacher resistance remains high. Early software for language instruction was almost exclusively designed for drill and practice. That approach was later replaced by a model in which the computer provided a stimulus for students, most often as a partner in games.…
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
Judgmental Standard Setting Using a Cognitive Components Model.
ERIC Educational Resources Information Center
McGinty, Dixie; Neel, John H.
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
ERIC Educational Resources Information Center
Tsai, Yu-Ling; Chang, Ching-Kuch
2009-01-01
This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…
ERIC Educational Resources Information Center
Bidarra, José; Rusman, Ellen
2017-01-01
This paper proposes a design framework to support science education through blended learning, based on a participatory and interactive approach supported by ICT-based tools, called "Science Learning Activities Model" (SLAM). The development of this design framework started as a response to complex changes in society and education (e.g.…
Comparing Health Education Approaches in Textbooks of Sixteen Countries
ERIC Educational Resources Information Center
Carvalho, Graca S.; Dantas, Catarina; Rauma, Anna-Liisa; Luzi, Daniela; Ruggieri, Roberta; Bogner, Franz; Geier, Christine; Caussidier, Claude; Berger, Dominique; Clement, Pierre
2008-01-01
Classically, health education has provided mainly factual knowledge about diseases and their prevention. This educational approach is within the so called Biomedical Model (BM). It is based on pathologic (Pa), curative (Cu) and preventive (Pr) conceptions of health. In contrast, the Health Promotion (HP) approach of health education intends to…
Bruce Bagwell, C
2018-01-01
This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.
Towards Model-Driven End-User Development in CALL
ERIC Educational Resources Information Center
Farmer, Rod; Gruba, Paul
2006-01-01
The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…
Ohio at the Crossroads: School Funding--More of the Same or Changing the Model?
ERIC Educational Resources Information Center
Hill, Paul T.
2009-01-01
Ohio Governor Ted Strickland's education plan calls for modernizing Ohio's K-12 education system, including the state's school-funding system, but the plan's so-called "evidence-based" approach would actually scuttle any modernizing efforts, argues this study issued by the Thomas B. Fordham Institute. The governor's funding plan, says…
A new approach for developing adjoint models
NASA Astrophysics Data System (ADS)
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted-on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to re-play the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.
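To illustrate the "model as a sequence of linear solves" abstraction (and not the actual libadjoint API), here is a toy tape that records each forward solve and replays transposed solves in reverse to propagate an adjoint through a chain where each right-hand side is the previous solution. The matrices and the functional are made up for the example.

```python
import numpy as np

class Tape:
    """Toy 'tape' that records each forward linear solve A x = b and replays
    transposed solves in reverse order to propagate an adjoint. Illustrative
    only; the real library records operator dependencies symbolically via
    annotations and callbacks rather than storing dense matrices."""
    def __init__(self):
        self.records = []

    def solve(self, A, b):
        x = np.linalg.solve(A, b)
        self.records.append(A)
        return x

    def adjoint(self, dJ_dx_final):
        # For a chain x_{i} = A_i^{-1} x_{i-1}, the gradient of J w.r.t. the
        # first right-hand side is obtained by transposed solves in reverse.
        lam = dJ_dx_final
        for A in reversed(self.records):
            lam = np.linalg.solve(A.T, lam)
        return lam

tape = Tape()
A1 = np.array([[2.0, 0.3], [0.1, 1.5]])
A2 = np.array([[1.2, 0.0], [0.4, 2.0]])
x1 = tape.solve(A1, np.array([1.0, 0.0]))   # first forward solve
x2 = tape.solve(A2, x1)                     # second solve depends on the first
grad = tape.adjoint(np.array([1.0, 1.0]))   # gradient of J = sum(x2) w.r.t. b1
print(x2, grad)
```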
Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi
2016-08-24
Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing.
DOT National Transportation Integrated Search
2013-08-01
The Texas Department of Transportation (TxDOT) created a standardized trip-based modeling approach for travel demand modeling called the Texas Package Suite of Travel Demand Models (referred to as the Texas Package) to oversee the travel de...
Operator function modeling: An approach to cognitive task analysis in supervisory control systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1987-01-01
In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting construction of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operation and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform
Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979
Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.
Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong
2016-01-01
We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.
Two modeling strategies for empirical Bayes estimation
Efron, Bradley
2014-01-01
Empirical Bayes methods use the data from parallel experiments, for instance observations X_k ~ N(θ_k, 1) for k = 1, 2, …, N, to estimate the conditional distributions θ_k | X_k. There are two main estimation strategies: modeling on the θ space, called "g-modeling" here, and modeling on the x space, called "f-modeling." The two approaches are described and compared. A series of computational formulas are developed to assess their frequentist accuracy. Several examples, both contrived and genuine, show the strengths and limitations of the two strategies. PMID:25324592
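As a small illustration of the f-modeling flavor, the sketch below applies Tweedie's formula, E[θ|x] = x + d/dx log f(x), with the marginal density f estimated by a Gaussian kernel. The simulated sparse-effects data, kernel bandwidth, and grid are assumptions for the example, not computations from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
theta = np.where(rng.random(N) < 0.9, 0.0, rng.normal(3.0, 1.0, N))  # sparse effects
x = theta + rng.normal(0.0, 1.0, N)          # X_k ~ N(theta_k, 1)

def marginal_density(grid, data, bandwidth=0.3):
    """Gaussian kernel estimate of the marginal density f(x)."""
    diffs = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# Tweedie's formula, an "f-modeling" style estimate of E[theta | x].
grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 400)
f = marginal_density(grid, x)
score = np.gradient(np.log(f), grid)
posterior_mean = x + np.interp(x, grid, score)

# Empirical Bayes shrinkage should beat the raw observations in MSE here.
print(np.mean((posterior_mean - theta) ** 2), np.mean((x - theta) ** 2))
```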
Carbon Dynamics and Export from Flooded Wetlands: A Modeling Approach
Described in this article is development and validation of a process based model for carbon cycling in flooded wetlands, called WetQual-C. The model considers various biogeochemical interactions affecting C cycling, greenhouse gas emissions, organic carbon export and retention. ...
ERIC Educational Resources Information Center
Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying
2012-01-01
The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…
USDA-ARS?s Scientific Manuscript database
The Earth is a complex system comprised of many interacting spatial and temporal scales. Understanding, predicting, and managing for these dynamics requires a trans-disciplinary integrated approach. Although there have been calls for this integration, a general approach is needed. We developed a Tra...
NASA Technical Reports Server (NTRS)
Canuto, V. M.
1994-01-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10(exp 8) for the planetary boundary layer and Re approximately equals 10(exp 14) for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re(exp 9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1994-06-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N approximately Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
Evaluation of Hybrid Learning in a Construction Engineering Context: A Mixed-Method Approach
ERIC Educational Resources Information Center
Karabulut-Ilgu, Aliye; Jahren, Charles
2016-01-01
Engineering educators call for a widespread implementation of hybrid learning to respond to rapidly changing demands of the 21st century. In response to this call, a junior-level course in the Construction Engineering program entitled Construction Equipment and Heavy Construction Methods was converted into a hybrid learning model. The overarching…
Participatory Action Research and Public Policy.
ERIC Educational Resources Information Center
Turnbull, H. Rutherford, III; Turnbull, Ann P.
This paper describes collegial model approaches to the interactions between rehabilitation researchers and individuals with disabilities or their family members. The approaches, called participatory research and participatory action research, grew out of a 1989 conference sponsored by the National Institute on Disability and Rehabilitation…
Socrates Meets the 21st Century
ERIC Educational Resources Information Center
Lege, Jerry
2005-01-01
An inquiry-based approach called the "modelling discussion" is introduced for structuring beginning modelling activity, teaching new mathematics from examining its applications in contextual situations, and as a general classroom management technique when students are engaged in mathematical modelling. An example which illustrates the style and…
Automatic building information model query generation
Jiang, Yufei; Yu, Nan; Ming, Jiang; ...
2015-12-01
Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevents designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. In conclusion, by demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.
Automatic building information model query generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yufei; Yu, Nan; Ming, Jiang
Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevents designers and engineers from taking advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. In conclusion, by demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.
NASA Astrophysics Data System (ADS)
Riccio, A.; Giunta, G.; Galmarini, S.
2007-04-01
In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
NASA Astrophysics Data System (ADS)
Riccio, A.; Giunta, G.; Galmarini, S.
2007-12-01
In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
MODELING THE FORMATION OF SECONDARY ORGANIC AEROSOL WITHIN A COMPREHENSIVE AIR QUALITY MODEL SYSTEM
The aerosol component of the CMAQ model is designed to be an efficient and economical depiction of aerosol dynamics in the atmosphere. The approach taken represents the particle size distribution as the superposition of three lognormal subdistributions, called modes. The proces...
Streamline Your Project: A Lifecycle Model.
ERIC Educational Resources Information Center
Viren, John
2000-01-01
Discusses one approach to project organization providing a baseline lifecycle model for multimedia/CBT development. This variation of the standard four-phase model of Analysis, Design, Development, and Implementation includes a Pre-Analysis phase, called Definition, and a Post-Implementation phase, known as Maintenance. Each phase is described.…
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
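As a rough sketch of the star-based sampling idea, the code below places a few star centres in a two-factor unit hypercube, extracts cross-sections along each factor at a fixed resolution, and computes a small-lag directional variogram as a crude sensitivity measure. The toy response surface, number of stars, and resolution are assumptions; the full VARS framework aggregates variograms over many scales and is not reproduced here.

```python
import numpy as np

def model(x):
    """Placeholder response surface; a real application would call the
    hydrological model here."""
    return np.sin(3.0 * x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

def star_vars_sketch(model, n_stars=10, n_dims=2, delta_h=0.1, seed=1):
    """Star-based sampling with a single-lag directional variogram per factor.
    Larger values indicate more sensitive factors (toy metric only)."""
    rng = np.random.default_rng(seed)
    grid = np.arange(0.0, 1.0 + 1e-9, delta_h)
    gamma = np.zeros(n_dims)
    for _ in range(n_stars):
        centre = rng.random(n_dims)
        for d in range(n_dims):
            section = []
            for value in grid:              # cross-section through the star centre
                point = centre.copy()
                point[d] = value
                section.append(model(point))
            section = np.asarray(section)
            diffs = section[1:] - section[:-1]   # pairs one delta_h apart
            gamma[d] += 0.5 * np.mean(diffs ** 2)
    return gamma / n_stars

print(star_vars_sketch(model))
```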
van den Akker, Jeroen; Mishne, Gilad; Zimmer, Anjali D; Zhou, Alicia Y
2018-04-17
Next generation sequencing (NGS) has become a common technology for clinical genetic tests. The quality of NGS calls varies widely and is influenced by features like reference sequence characteristics, read depth, and mapping accuracy. With recent advances in NGS technology and software tools, the majority of variants called using NGS alone are in fact accurate and reliable. However, a small subset of difficult-to-call variants that still do require orthogonal confirmation exist. For this reason, many clinical laboratories confirm NGS results using orthogonal technologies such as Sanger sequencing. Here, we report the development of a deterministic machine-learning-based model to differentiate between these two types of variant calls: those that do not require confirmation using an orthogonal technology (high confidence), and those that require additional quality testing (low confidence). This approach allows reliable NGS-based calling in a clinical setting by identifying the few important variant calls that require orthogonal confirmation. We developed and tested the model using a set of 7179 variants identified by a targeted NGS panel and re-tested by Sanger sequencing. The model incorporated several signals of sequence characteristics and call quality to determine if a variant was identified at high or low confidence. The model was tuned to eliminate false positives, defined as variants that were called by NGS but not confirmed by Sanger sequencing. The model achieved very high accuracy: 99.4% (95% confidence interval: +/- 0.03%). It categorized 92.2% (6622/7179) of the variants as high confidence, and 100% of these were confirmed to be present by Sanger sequencing. Among the variants that were categorized as low confidence, defined as NGS calls of low quality that are likely to be artifacts, 92.1% (513/557) were found to be not present by Sanger sequencing. This work shows that NGS data contains sufficient characteristics for a machine-learning-based model to differentiate low from high confidence variants. Additionally, it reveals the importance of incorporating site-specific features as well as variant call features in such a model.
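In the same spirit, here is a minimal sketch of a confidence classifier over call-quality features, with the score threshold tuned so that (on the training data) no high-confidence call is unconfirmed. The synthetic features, the logistic-regression choice, and the tuning rule are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic stand-ins for call-quality features of the kind the abstract
# mentions (read depth, mapping quality, a sequence-context flag).
n = 5000
depth = rng.poisson(120, n)
map_q = rng.normal(55.0, 8.0, n)
homopolymer = rng.random(n) < 0.1
confirmed = (depth > 40) & (map_q > 40) & (~homopolymer | (rng.random(n) < 0.5))

X = np.column_stack([depth, map_q, homopolymer.astype(float)])
y = confirmed.astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Tune the cut-off so that no "high confidence" call in this data is
# unconfirmed, mimicking the goal of eliminating false positives.
threshold = scores[y == 0].max() + 1e-6 if (y == 0).any() else 0.5
high_confidence = scores >= threshold
print(high_confidence.mean(), y[high_confidence].mean())
```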
Personalized Modeling for Prediction with Decision-Path Models
Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.
2015-01-01
Deriving predictive models in medicine typically relies on a population approach where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Trees (CART), a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach for predictive modeling that can perform better than a population approach. PMID:26098570
A Feature Mining Based Approach for the Classification of Text Documents into Disjoint Classes.
ERIC Educational Resources Information Center
Nieto Sanchez, Salvador; Triantaphyllou, Evangelos; Kraft, Donald
2002-01-01
Proposes a new approach for classifying text documents into two disjoint classes. Highlights include a brief overview of document clustering; a data mining approach called the One Clause at a Time (OCAT) algorithm which is based on mathematical logic; vector space model (VSM); and comparing the OCAT to the VSM. (Author/LRW)
Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning
ERIC Educational Resources Information Center
Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar
2009-01-01
In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…
Critical Comments on the General Model of Instructional Communication
ERIC Educational Resources Information Center
Walton, Justin D.
2014-01-01
This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
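A minimal sketch of the kind of calculation described, with all distributions and parameter values invented for illustration: draw source levels, noise levels and ranges, apply the passive sonar equation SNR = SL - TL - NL with simple spherical spreading, and average a hypothetical logistic detector characterization to obtain a detection probability.

```python
# Hedged sketch: Monte Carlo estimate of the probability of detecting a click
# at a single sensor, using the passive sonar equation SNR = SL - TL - NL and
# a hypothetical logistic detector characterization. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

source_level = rng.normal(200, 10, n)          # dB re 1 uPa @ 1 m (assumed)
noise_level = rng.normal(60, 5, n)             # ambient noise, dB (assumed)
distance = rng.uniform(100, 4000, n)           # metres
transmission_loss = 20 * np.log10(distance)    # spherical spreading only

snr = source_level - transmission_loss - noise_level

def p_detect(snr_db, midpoint=10.0, slope=0.5):
    """Hypothetical detector characterization: P(detect) as a function of SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

print(f"mean detection probability: {p_detect(snr).mean():.3f}")
```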
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
COMMUNITY SCALE AIR TOXICS MODELING WITH CMAQ
Consideration and movement toward an urban air toxics control strategy favors a community-scale, exposure- and risk-based modeling approach, with emphasis on assessments of areas that experience high air toxics concentration levels, the so-called "hot spots". This strategy will requir...
Moore, Jason H; Boczko, Erik M; Summar, Marshall L
2005-02-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
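The STAR-VARS sampling scheme itself is not reproduced here; the sketch below only illustrates the underlying quantity, a directional variogram of the model response along each factor, for a toy three-factor function.

```python
# Minimal sketch of the idea behind variogram-based sensitivity analysis:
# estimate the directional variogram gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]
# for each factor of a toy model. (The actual VARS/STAR-VARS sampling is richer.)
import numpy as np

def model(x):                      # toy 3-factor model, purely illustrative
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(2)
base = rng.uniform(0, 1, size=(2000, 3))
h = 0.1                            # perturbation scale (lag)

for i in range(3):
    shifted = base.copy()
    shifted[:, i] = np.clip(shifted[:, i] + h, 0, 1)
    gamma = 0.5 * np.mean((model(shifted) - model(base)) ** 2)
    print(f"factor {i}: variogram at lag {h} = {gamma:.4f}")
```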
Efficiency Analysis of Public Universities in Thailand
ERIC Educational Resources Information Center
Kantabutra, Saranya; Tang, John C. S.
2010-01-01
This paper examines the performance of Thai public universities in terms of efficiency, using a non-parametric approach called data envelopment analysis. Two efficiency models, the teaching efficiency model and the research efficiency model, are developed and the analysis is conducted at the faculty level. Further statistical analyses are also…
ERIC Educational Resources Information Center
Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima
2016-01-01
In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this…
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
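The paper's multi-model structure is not reproduced here, but the parameter estimator it mentions is the standard recursive least-squares update, sketched below for a generic linear-in-parameters sub-model.

```python
# Standard recursive least-squares (RLS) update for a linear-in-parameters
# model y_k = phi_k . theta + noise -- a sketch of the estimator the paper
# uses for each sub-model (their exact model structure is not reproduced here).
import numpy as np

rng = np.random.default_rng(3)
true_theta = np.array([0.7, -0.3])

theta = np.zeros(2)                 # parameter estimate
P = np.eye(2) * 1000.0              # covariance of the estimate
lam = 0.99                          # forgetting factor

for k in range(500):
    phi = rng.normal(size=2)                    # regressor vector
    y = phi @ true_theta + 0.05 * rng.normal()  # noisy measurement
    K = P @ phi / (lam + phi @ P @ phi)         # gain
    theta = theta + K * (y - phi @ theta)       # update estimate
    P = (P - np.outer(K, phi) @ P) / lam        # update covariance

print("estimated parameters:", theta)
```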
Navarrete, Jairo A; Dartnell, Pablo
2017-08-01
Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.
2017-01-01
Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called “flexibility” whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena. PMID:28841643
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-06-15
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time correlation analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations – the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining if higher order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
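As a small, hedged illustration of the generating-function machinery involved (not the paper's point-model equations), the reduced factorial multiplets of a count distribution can be read off as scaled derivatives of its probability generating function at z = 1; the sketch below does this symbolically for a Poisson PGF.

```python
# Small illustration of the generating-function machinery the paper builds on:
# the r-th reduced factorial moment of a count distribution is the r-th
# derivative of its probability generating function G(z), evaluated at z = 1,
# divided by r!. Shown for a Poisson PGF; the paper's point-model PGFs are
# more elaborate.
import sympy as sp

z, mu = sp.symbols("z mu", positive=True)
G = sp.exp(mu * (z - 1))          # PGF of a Poisson(mu) distribution

for r in range(1, 6):
    multiplet = sp.diff(G, z, r).subs(z, 1) / sp.factorial(r)
    print(f"order {r}: {sp.simplify(multiplet)}")
```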
Using Work Breakdown Structure Models to Develop Unit Treatment Costs
This article presents a new cost modeling approach called work breakdown structure (WBS), designed to develop unit costs for drinking water technologies. WBS involves breaking the technology into its discrete components for the purposes of estimating unit costs. The article dem...
NASA Astrophysics Data System (ADS)
Farmer, J. Doyne; Gallegati, M.; Hommes, C.; Kirman, A.; Ormerod, P.; Cincotti, S.; Sanchez, A.; Helbing, D.
2012-11-01
We outline a vision for an ambitious program to understand the economy and financial markets as a complex evolving system of coupled networks of interacting agents. This is a completely different vision from that currently used in most economic models. This view implies new challenges and opportunities for policy and managing economic crises. The dynamics of such models inherently involve sudden and sometimes dramatic changes of state. Further, the tools and approaches we use emphasize the analysis of crises rather than of calm periods. In this they respond directly to the calls of Governors Bernanke and Trichet for new approaches to macroeconomic modelling.
Teaching Undergraduate Research: The One-Room Schoolhouse Model
ERIC Educational Resources Information Center
Henderson, LaRhee; Buising, Charisse; Wall, Piper
2008-01-01
Undergraduate research in the biochemistry, cell, and molecular biology program at Drake University uses apprenticeship, cooperative-style learning, and peer mentoring in a cross-disciplinary and cross-community educational program. We call it the one-room schoolhouse approach to teaching undergraduate research. This approach is cost effective,…
Theory-Based Stakeholder Evaluation
ERIC Educational Resources Information Center
Hansen, Morten Balle; Vedung, Evert
2010-01-01
This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
Anonymizing 1:M microdata with high utility
Gong, Qiyuan; Luo, Junzhou; Yang, Ming; Ni, Weiwei; Li, Xiao-Bai
2016-01-01
Preserving privacy and utility during data publishing and data mining is essential for individuals, data providers and researchers. However, studies in this area typically assume that one individual has only one record in a dataset, which is unrealistic in many applications. Having multiple records for an individual leads to new privacy leakages. We call such a dataset a 1:M dataset. In this paper, we propose a novel privacy model called (k, l)-diversity that addresses disclosure risks in 1:M data publishing. Based on this model, we develop an efficient algorithm named 1:M-Generalization to preserve privacy and data utility, and compare it with alternative approaches. Extensive experiments on real-world data show that our approach outperforms the state-of-the-art technique, in terms of data utility and computational cost. PMID:28603388
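The exact (k, l)-diversity definition is given in the paper; the sketch below encodes one plausible reading of it for illustration only: every quasi-identifier group must cover at least k distinct individuals and at least l distinct sensitive values.

```python
# Illustrative check only -- not the paper's exact (k, l)-diversity definition:
# for 1:M data (several records per person), require each quasi-identifier
# group to cover at least k distinct individuals and at least l distinct
# sensitive values.
from collections import defaultdict

# (person_id, quasi_identifier_group, sensitive_value) -- toy records
records = [
    (1, "A", "flu"), (1, "A", "asthma"), (2, "A", "diabetes"),
    (3, "B", "flu"), (4, "B", "flu"), (5, "B", "cancer"),
]

def satisfies_kl(records, k, l):
    people, sensitive = defaultdict(set), defaultdict(set)
    for pid, group, value in records:
        people[group].add(pid)
        sensitive[group].add(value)
    return all(len(people[g]) >= k and len(sensitive[g]) >= l for g in people)

print(satisfies_kl(records, k=2, l=2))   # True for this toy dataset
```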
ERIC Educational Resources Information Center
Zeyer, Albert; Bolsterli, Katrin; Brovelli, Dorothee; Odermatt, Freia
2012-01-01
Sex is considered to be one of the most significant factors influencing attitudes towards science. However, the so-called brain type approach from cognitive science suggests that the difference in motivation to learn science does not primarily differentiate the girls from the boys, but rather the so-called systemisers from the empathizers. The…
The US EPA National Exposure Research Laboratory (NERL) has developed a population exposure and dose model for particulate matter (PM), called the Stochastic Human Exposure and Dose Simulation (SHEDS) model. SHEDS-PM uses a probabilistic approach that incorporates both variabi...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan
Conditioning stochastic simulations is very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike the previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space onto which all the primary and secondary data are easily mapped. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.
Maru, Biniam T; Munasinghe, Pradeep C; Gilary, Hadar; Jones, Shawn W; Tracy, Bryan P
2018-04-01
Biological CO2 fixation is an important technology that can assist in combating climate change. Here, we show that an approach called anaerobic, non-photosynthetic mixotrophy can result in net CO2 fixation when using a reduced feedstock. This approach uses microbes called acetogens that are capable of concurrent utilization of both organic and inorganic substrates. In this study, we investigated the substrate utilization of 17 different acetogens, both mesophilic and thermophilic, on a variety of different carbohydrates and gases. Compared to most model acetogen strains, several non-model mesophilic strains displayed greater substrate flexibility, including the ability to utilize disaccharides, glycerol and an oligosaccharide, as well as higher growth rates. Three of these non-model strains (Blautia producta, Clostridium scatologenes and Thermoanaerobacter kivui) were chosen for further characterization under a variety of conditions, including H2- or syngas-fed sugar fermentations and a CO2-fed glycerol fermentation. In all cases, CO2 was fixed and carbon yields approached 100%. Finally, the model acetogen C. ljungdahlii was engineered to utilize glucose, a non-preferred sugar, while maintaining mixotrophic behavior. This work demonstrates the flexibility and robustness of anaerobic, non-photosynthetic mixotrophy as a technology to help reduce CO2 emissions.
Ice Accretion Modeling using an Eulerian Approach for Droplet Impingement
NASA Technical Reports Server (NTRS)
Kim, Joe Woong; Garza, Dennis P.; Sankar, Lakshmi N.; Kreeger, Richard E.
2012-01-01
A three-dimensional Eulerian analysis has been developed for modeling droplet impingement on lifting bodies. The Eulerian model solves the conservation equations of mass and momentum to obtain the droplet flow field properties on the same mesh used in CFD simulations. For complex configurations such as a full rotorcraft, the Eulerian approach is more efficient because the Lagrangian approach would require a significant amount of seeding for accurate estimates of collection efficiency. Simulations are done for various benchmark cases such as the NACA0012 airfoil, the MS317 airfoil and an oscillating SC2110 airfoil to illustrate its use. The present results are compared with results from the Lagrangian approach used in an industry standard analysis called LEWICE.
Comparing architectural solutions of IPT application SDKs utilizing H.323 and SIP
NASA Astrophysics Data System (ADS)
Keskinarkaus, Anja; Korhonen, Jani; Ohtonen, Timo; Kilpelanaho, Vesa; Koskinen, Esa; Sauvola, Jaakko J.
2001-07-01
This paper presents two approaches to efficient service development for Internet Telephony. The first approach considers services ranging from core call signaling features and media control, as specified in ITU-T's H.323, to end-user services that support user interaction. The second approach supports the IETF's SIP protocol. We compare these from differing architectural perspectives and in terms of economical network and terminal development, and propose efficient architecture models for both protocols. In their design, the main criteria were component independence, lightweight operation and portability in heterogeneous end-to-end environments. In the proposed architecture, the vertical division of call signaling and streaming media control logic allows the components to be used either individually or combined, depending on the level of functionality required by an application.
Skolarus, Lesli E.; Zimmerman, Marc A.; Murphy, Jillian; Brown, Devin L.; Kerber, Kevin A.; Bailey, Sarah; Fowlkes, Sophronia; Morgenstern, Lewis B.
2014-01-01
Background and Purpose Acute stroke treatments are underutilized primarily due to delayed hospital arrival. Using a community based participatory research approach, we explored stroke self-efficacy, knowledge and perceptions of stroke among a predominately African American population in Flint, Michigan. Methods In March 2010, a survey was administered to youth and adults after religious services at three churches and one church health day. The survey consisted of vignettes (12 stroke, 4 non-stroke) to assess knowledge of stroke warning signs and behavioral intent to call 911. The survey also assessed stroke self-efficacy, personal knowledge of someone who had had a stroke, personal history of stroke and barriers to calling 911. Linear regression models explored the association of stroke self-efficacy with behavioral intent to call 911 among adults. Results Two hundred forty two adults and 90 youth completed the survey. Ninety two percent of adults and 90% of youth respondents were African American. Responding to 12 stroke vignettes, adults would call 911 in 72% (sd=0.26) of the vignettes while youth would call 911 in 54% (sd=0.29) (p<0.001). Adults correctly identified stroke in 51% (sd=0.32) of the stroke vignettes and youth in 46% (sd=0.28) of the stroke vignettes (p=0.28). Stroke self-efficacy predicted behavioral intent to call 911 (p=0.046). Conclusion In addition to knowledge of stroke warning signs, behavioral interventions to increase both stroke self-efficacy and behavioral intent may be useful for helping people make appropriate 911 calls for stroke. A community based participatory research approach may be effective in reducing stroke disparities. PMID:21617148
Skolarus, Lesli E; Zimmerman, Marc A; Murphy, Jillian; Brown, Devin L; Kerber, Kevin A; Bailey, Sarah; Fowlkes, Sophronia; Morgenstern, Lewis B
2011-07-01
Acute stroke treatments are underutilized primarily because of delayed hospital arrival. Using a community-based participatory research approach, we explored stroke self-efficacy, knowledge, and perceptions of stroke among a predominately African American population in Flint, Michigan. In March 2010, a survey was administered to youth and adults after religious services at 3 churches and during 1 church health day. The survey consisted of vignettes (12 stroke, 4 nonstroke) to assess knowledge of stroke warning signs and behavioral intent to call 911. The survey also assessed stroke self-efficacy, personal knowledge of someone who had experienced a stroke, personal history of stroke, and barriers to calling 911. Linear regression models explored the association of stroke self-efficacy with behavioral intent to call 911 among adults. Two hundred forty-two adults and 90 youths completed the survey. Ninety-two percent of adults and 90% of youth respondents were African American. Responding to 12 stroke vignettes, adults would call 911 in 72% (SD, 0.26) of the vignettes, whereas youths would call 911 in 54% of vignettes (SD, 0.29; P<0.001). Adults correctly identified stroke in 51% (SD, 0.32) of the stroke vignettes and youth correctly identified stroke in 46% (SD, 0.28) of the stroke vignettes (P=0.28). Stroke self-efficacy predicted behavioral intent to call 911 (P=0.046). In addition to knowledge of stroke warning signs, behavioral interventions to increase both stroke self-efficacy and behavioral intent may be useful for helping people make appropriate 911 calls for stroke. A community-based participatory research approach may be effective in reducing stroke disparities.
Structure and function of neonatal social communication in a genetic mouse model of autism.
Takahashi, T; Okabe, S; Broin, P Ó; Nishi, A; Ye, K; Beckert, M V; Izumi, T; Machida, A; Kang, G; Abe, S; Pena, J L; Golden, A; Kikusui, T; Hiroi, N
2016-09-01
A critical step toward understanding autism spectrum disorder (ASD) is to identify both genetic and environmental risk factors. A number of rare copy number variants (CNVs) have emerged as robust genetic risk factors for ASD, but not all CNV carriers exhibit ASD and the severity of ASD symptoms varies among CNV carriers. Although evidence exists that various environmental factors modulate symptomatic severity, the precise mechanisms by which these factors determine the ultimate severity of ASD are still poorly understood. Here, using a mouse heterozygous for Tbx1 (a gene encoded in 22q11.2 CNV), we demonstrate that a genetically triggered neonatal phenotype in vocalization generates a negative environmental loop in pup-mother social communication. Wild-type pups used individually diverse sequences of simple and complicated call types, but heterozygous pups used individually invariable call sequences with less complicated call types. When played back, representative wild-type call sequences elicited maternal approach, but heterozygous call sequences were ineffective. When the representative wild-type call sequences were randomized, they were ineffective in eliciting vigorous maternal approach behavior. These data demonstrate that an ASD risk gene alters the neonatal call sequence of its carriers and this pup phenotype in turn diminishes maternal care through atypical social communication. Thus, an ASD risk gene induces, through atypical neonatal call sequences, less than optimal maternal care as a negative neonatal environmental factor.
Structure and function of neonatal social communication in a genetic mouse model of autism
Takahashi, Tomohisa; Okabe, Shota; Ó Broin, Pilib; Nishi, Akira; Ye, Kenny; Beckert, Michael V.; Izumi, Takeshi; Machida, Akihiro; Kang, Gina; Abe, Seiji; Pena, Jose L.; Golden, Aaron; Kikusui, Takefumi; Hiroi, Noboru
2015-01-01
A critical step toward understanding autism spectrum disorder (ASD) is to identify both genetic and environmental risk factors. A number of rare copy number variants (CNVs) have emerged as robust genetic risk factors for ASD, but not all CNV carriers exhibit ASD and the severity of ASD symptoms varies among CNV carriers. Although evidence exists that various environmental factors modulate symptomatic severity, the precise mechanisms by which these factors determine the ultimate severity of ASD are still poorly understood. Here, using a mouse heterozygous for Tbx1 (a gene encoded in 22q11.2 CNV), we demonstrate that a genetically-triggered neonatal phenotype in vocalization generates a negative environmental loop in pup-mother social communication. Wild-type pups used individually diverse sequences of simple and complicated call types, but heterozygous pups used individually invariable call sequences with less complicated call types. When played back, representative wild-type call sequences elicited maternal approach, but heterozygous call sequences were ineffective. When the representative wild-type call sequences were randomized, they were ineffective in eliciting vigorous maternal approach behavior. These data demonstrate that an ASD risk gene alters the neonatal call sequence of its carriers and this pup phenotype in turn diminishes maternal care through atypical social communication. Thus, an ASD risk gene induces, through atypical neonatal call sequences, less than optimal maternal care as a negative neonatal environmental factor. PMID:26666205
The Lom Approach--a Call for Concern?
ERIC Educational Resources Information Center
Armitage, Nicholas; Bowerman, Chris
2005-01-01
The LOM (Learning Object Model) approach to courseware design seems to be driven by a desire to increase access to education as well as use technology to enable a higher staff-student ratio than is currently possible. The LOM standard involves the use of standard metadata descriptions of content and adaptive content engines to deliver the…
The LOM Approach -- A CALL for Concern?
ERIC Educational Resources Information Center
Armitage, Nicholas; Bowerman, Chris
2005-01-01
The LOM (Learning Object Model) approach to courseware design seems to be driven by a desire to increase access to education as well as use technology to enable a higher staff-student ratio than is currently possible. The LOM standard involves the use of standard metadata descriptions of content and adaptive content engines to deliver the…
"Clinical Reasoning Theater": A New Approach to Clinical Reasoning Education.
ERIC Educational Resources Information Center
Borleffs, Jan C. C.; Custers, Eugene J. F. M.; van Gijn, Jan; ten Cate, Olle Th. J.
2003-01-01
Describes a new approach to clinical reasoning education called clinical reasoning theater (CRT). With students as the audience, the doctor's clinical reasoning skills are modeled in CRT when he or she thinks aloud during conversations with the patient. Preliminary results of students' evaluations of the relevance of CRT reveal that they…
Facilitating Attuned Interactions: Using the FAN Approach to Family Engagement
ERIC Educational Resources Information Center
Gilkerson, Linda
2015-01-01
Erikson Institute's Fussy Baby Network® (FBN) is a national model prevention program known for its approach to family engagement called the FAN (Gilkerson & Gray, 2014; Gilkerson et al., 2012). The FAN is both a conceptual framework and a practical tool to facilitate attunement in helping relationships and promote reflective practice. This…
Research Outcomes of Auditory-Verbal Intervention: Is the Approach Justified?
ERIC Educational Resources Information Center
Rhoades, Ellen A.
2006-01-01
This paper examines the construct of evidence-based practice, how existing data on the effectiveness of the Auditory-Verbal (A-V) approach for children with hearing loss are evaluated within this construct, and whether implementation of an A-V intervention model is therefore justified. It concludes with a recurrent call for action towards…
The Cognitive Domain: The Last Frontier. Final Report of the Regional Study Award Project.
ERIC Educational Resources Information Center
Clary, Joan; Mahaffy, John
The theoretical foundations of thinking skills models differ. One category of thinking skills programs uses the cognitive process approach on the premise that thinking abilities depend upon certain fundamental processes. Thinking skills programs that present a strategic approach to thinking are called heuristics-oriented programs, and focus on an…
Multi Sensor Fusion Using Fitness Adaptive Differential Evolution
NASA Astrophysics Data System (ADS)
Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam
The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results for the optimal allocation of sensors. The performance of the proposed approach is compared with an evolutionary algorithm, the coordination generalized particle model (C-GPM).
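The exact FiADE adaptation rule is specified in the paper; the sketch below shows a generic DE/rand/1/bin loop in which the scale factor is made fitness-dependent with a simple stand-in rule, just to illustrate the mechanism.

```python
# Sketch of differential evolution (DE/rand/1/bin) with a fitness-dependent
# scale factor. The adaptation rule below is a simple stand-in, not the exact
# FiADE scheme from the paper.
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):                      # toy objective to minimize
    return float(np.sum(x ** 2))

dim, pop_size, cr = 5, 30, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(200):
    for i in range(pop_size):
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        # Fitness-adaptive scale factor: larger steps for worse individuals (assumed rule).
        F = 0.4 + 0.5 * (fit[i] - fit.min()) / (fit.max() - fit.min() + 1e-12)
        mutant = a + F * (b - c)
        cross = rng.random(dim) < cr
        cross[rng.integers(dim)] = True          # ensure at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial <= fit[i]:                    # greedy selection
            pop[i], fit[i] = trial, f_trial

print(f"best objective value: {fit.min():.6f}")
```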
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
As part of the LBA-ECO Phase III synthesis efforts for remote sensing and predictive modeling of Amazon carbon, water, and trace gas fluxes, we are evaluating results from the regional ecosystem model called NASA-CASA (Carnegie-Ames Stanford Approach). The NASA-CASA model has bee...
Understanding Individual-Level Change through the Basis Functions of a Latent Curve Model
ERIC Educational Resources Information Center
Blozis, Shelley A.; Harring, Jeffrey R.
2017-01-01
Latent curve models have become a popular approach to the analysis of longitudinal data. At the individual level, the model expresses an individual's response as a linear combination of what are called "basis functions" that are common to all members of a population and weights that may vary among individuals. This article uses…
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
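A hedged sketch of the assessment comparison described above: train a quadratic discriminant analysis classifier on one call library and score it both on new data from the same library and on a second, independent library. The pulse parameters and species below are synthetic stand-ins, not real bat calls.

```python
# Hedged sketch of the paper's model-assessment comparison: train a quadratic
# discriminant analysis (QDA) classifier on one call library, then score it on
# (a) the training library and (b) an independent second library. Synthetic
# pulse-parameter data stands in for real bat calls.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(5)

def make_library(shift):
    """Two 'species' described by (duration, peak frequency); shift mimics
    between-library differences in recording conditions."""
    a = rng.normal([3.0, 45.0], [0.5, 3.0], size=(200, 2)) + shift
    b = rng.normal([6.0, 30.0], [0.8, 4.0], size=(200, 2)) + shift
    X = np.vstack([a, b])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

X_train, y_train = make_library(shift=0.0)
X_same, y_same = make_library(shift=0.0)      # new data, same library conditions
X_other, y_other = make_library(shift=2.0)    # second, independent library

qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
print(f"accuracy on training library: {qda.score(X_same, y_same):.2f}")
print(f"accuracy on second library:   {qda.score(X_other, y_other):.2f}")
```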
The effect of call libraries and acoustic filters on the identification of bat echolocation
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-01-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys. PMID:25535563
The effect of call libraries and acoustic filters on the identification of bat echolocation
Clement, Matthew; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-01-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
A Phenomenological Study of Undergraduate Instructors Using the Inverted or Flipped Classroom Model
ERIC Educational Resources Information Center
Brown, Anna F.
2012-01-01
The changing educational needs of undergraduate students have not been addressed with a corresponding development of instructional methods in higher education classrooms. This study used a phenomenological approach to investigate a classroom-based instructional model called the "inverted" or "flipped" classroom. The flipped…
BUILDING AN ENVIRONMENTAL TRAINING MODEL, MAPCORE - A TRAINING EXERCISE FOR AIR POLLUTION CONTROL.
ERIC Educational Resources Information Center
SIEGEL, GILBERT B.; SULLIVAN, DONALD M.
NEW AIR POLLUTION CONTROL PROGRAMS HAVE RESULTED FROM THE "CLEAN AIR ACT" PASSED BY CONGRESS IN DECEMBER 1963. THE UNIVERSITY OF SOUTHERN CALIFORNIA DEVELOPED A TRAINING MODEL, CALLED "MAPCORE," WHICH PROVIDES A SEMISTRUCTURED ENVIRONMENT, IS PRACTICAL AND REALISTIC IN APPROACH, PROVIDES OPPORTUNITY FOR HIGH CREATIVITY,…
Carter, Gerald; Schoeppler, Diana; Manthey, Marie; Knörnschild, Mirjam; Denzinger, Annette
2015-01-01
Many birds and mammals produce distress calls when captured. Bats often approach speakers playing conspecific distress calls, which has led to the hypothesis that bat distress calls promote cooperative mobbing. An alternative explanation is that approaching bats are selfishly assessing predation risk. Previous playback studies on bat distress calls involved species with highly maneuverable flight, capable of making close passes and tight circles around speakers, which can look like mobbing. We broadcast distress calls recorded from the velvety free-tailed bat, Molossus molossus, a fast-flying aerial-hawker with relatively poor maneuverability. Based on their flight behavior, we predicted that, in response to distress call playbacks, M. molossus would make individual passing inspection flights but would not approach in groups or approach within a meter of the distress call source. By recording responses via ultrasonic recording and infrared video, we found that M. molossus, and to a lesser extent Saccopteryx bilineata, made more flight passes during distress call playbacks compared to noise. However, only the more maneuverable S. bilineata made close approaches to the speaker, and we found no evidence of mobbing in groups. Instead, our findings are consistent with the hypothesis that single bats approached distress calls simply to investigate the situation. These results suggest that approaches by bats to distress calls should not suffice as clear evidence for mobbing. PMID:26353118
Carter, Gerald; Schoeppler, Diana; Manthey, Marie; Knörnschild, Mirjam; Denzinger, Annette
2015-01-01
Many birds and mammals produce distress calls when captured. Bats often approach speakers playing conspecific distress calls, which has led to the hypothesis that bat distress calls promote cooperative mobbing. An alternative explanation is that approaching bats are selfishly assessing predation risk. Previous playback studies on bat distress calls involved species with highly maneuverable flight, capable of making close passes and tight circles around speakers, which can look like mobbing. We broadcast distress calls recorded from the velvety free-tailed bat, Molossus molossus, a fast-flying aerial-hawker with relatively poor maneuverability. Based on their flight behavior, we predicted that, in response to distress call playbacks, M. molossus would make individual passing inspection flights but would not approach in groups or approach within a meter of the distress call source. By recording responses via ultrasonic recording and infrared video, we found that M. molossus, and to a lesser extent Saccopteryx bilineata, made more flight passes during distress call playbacks compared to noise. However, only the more maneuverable S. bilineata made close approaches to the speaker, and we found no evidence of mobbing in groups. Instead, our findings are consistent with the hypothesis that single bats approached distress calls simply to investigate the situation. These results suggest that approaches by bats to distress calls should not suffice as clear evidence for mobbing.
Petri net modeling of high-order genetic systems using grammatical evolution.
Moore, Jason H; Hahn, Lance W
2003-11-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
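Grammatical evolution itself is generic: an integer genome selects production rules from a BNF grammar. The toy sketch below shows that genotype-to-phenotype mapping on a small arithmetic grammar; the paper's grammar, which generates Petri net models, is not reproduced here.

```python
# Minimal genotype-to-phenotype mapping in the style of grammatical evolution:
# a list of integer codons chooses production rules from a small BNF grammar.
# (The paper's grammar builds Petri nets; this toy grammar builds arithmetic
# expressions, just to show the mechanism.)
grammar = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<expr>", "*", "<expr>"], ["<var>"]],
    "<var>": [["x"], ["y"], ["1"]],
}

def map_genome(genome, symbol="<expr>", max_depth=8):
    """Expand `symbol` by consuming codons (with wrapping) to pick rules."""
    pos = 0
    def expand(sym, depth):
        nonlocal pos
        if sym not in grammar:
            return sym
        rules = grammar[sym]
        # force a terminal-leaning rule when the depth budget is exhausted
        if depth >= max_depth:
            choice = rules[-1]
        else:
            choice = rules[genome[pos % len(genome)] % len(rules)]
            pos += 1
        return "".join(expand(s, depth + 1) for s in choice)
    return expand(symbol, 0)

print(map_genome([7, 2, 11, 4, 3, 9, 0, 5]))   # e.g. an expression over x, y, 1
```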
Le management des projets scientifiques
NASA Astrophysics Data System (ADS)
Perrier, Françoise
2000-12-01
We describe in this paper a new approach to the management of scientific projects. This approach is the result of a long reflection carried out within the MQDP (Methodology and Quality in the Project Development) group of INSU-CNRS, and continued with Guy Serra. Our reflection began with the study of the so-called `North-American Paradigm', which was initially considered the only relevant management model. Through our active participation in several astrophysical projects we realized that this model could not be applied to our laboratories without major modifications. Therefore, step by step, we have constructed our own methodology, making the fullest use of the human potential existing in our research field, including its habits and skills. We have also participated in various working groups in industrial and scientific organizations for the benefit of CNRS. The management model presented here is based on a systemic and complex approach. This approach lets us describe the multiple aspects of a scientific project, especially taking into account the human dimension. The project system model includes three major interconnected systems, immersed within an influencing and influenced environment: the `System to be Realized', which defines the scientific and technical tasks leading to the scientific goals; the `Realizing System', which describes procedures, processes and organization; and the `Actors' System', which implements and drives all the processes. Each one exists only through a series of successive models, elaborated at predefined dates of the project called `key-points'. These systems evolve with time and under often-unpredictable circumstances, and the models have to take this into account. At these key-points, each model is compared to reality and the difference between the predicted and realized tasks is evaluated in order to define the data for the next model. This model can be applied to any kind of project.
Infinitely divisible cascades to model the statistics of natural images.
Chainais, Pierre
2007-12-01
We propose to model the statistics of natural images using the large class of stochastic processes called Infinitely Divisible Cascades (IDC). IDC were first introduced in one dimension to provide multifractal time series for modeling the so-called intermittency phenomenon in hydrodynamical turbulence. We have extended the definition of scalar infinitely divisible cascades from 1 to N dimensions and commented on the relevance of such a model in fully developed turbulence in [1]. In this article, we focus on the two-dimensional case. IDC appear to be good candidates for modeling the statistics of natural images: they share most of the usual properties of natural images and appear to be consistent with several independent theoretical and experimental approaches in the literature. We point out the interest of IDC for applications to procedural texture synthesis.
The globalization of training in adolescent health and medicine: one size does not fit all.
Leslie, Karen
2016-08-01
Adolescent medicine across the globe is practiced within a variety of healthcare models, with the shared vision of the promotion of optimal health outcomes for adolescents. In the past decade, there has been a call for transformation in how health professionals are trained, with recommendations that there be adoption of a global outlook, a multiprofessional perspective and a systems approach that considers the connections between education and health systems. Many individuals and groups are now examining how best to accomplish this educational reform. There are tensions between the call for globally accepted standards of education models and practice (a one-size fits all approach) and the need to promote the ability for education practices to be interpreted and transformed to best suit local contexts. This paper discusses some of the key considerations for 'importing' training program models for adolescent health and medicine, including the importance of cultural alignment and the utilization of best evidence and practice in health professions education.
2014-09-30
from individuals to the population by way of changes in either behavior or physiology, and the revised approach is called PCOD (Population...include modeling fecundity, and exploring the feasibility of incorporating acoustic disturbance and prey variability into the PCOD model...the applicability of the model to assessing the effects of acoustics on the population. We have refined and applied the PCOD model developed for
NASA Astrophysics Data System (ADS)
Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos
2015-09-01
As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called Directional Ratio which is particularly effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.
ERIC Educational Resources Information Center
Dalkilic, Maryam; Vadeboncoeur, Jennifer A.
2016-01-01
Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational…
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
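The AHP step the abstract refers to is standard: criterion weights are obtained from the principal eigenvector of a pairwise comparison matrix and checked with a consistency ratio. The sketch below uses made-up judgements, not the paper's school-inspection criteria.

```python
# Standard AHP computation (not the paper's actual criteria or judgements):
# derive criterion weights from a pairwise comparison matrix via its principal
# eigenvector and check Saaty's consistency ratio.
import numpy as np

# Hypothetical 3-criterion pairwise comparison matrix (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # random index (Saaty)
print("weights:", np.round(weights, 3))
print("consistency ratio:", round(ci / ri, 3))
```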
Active Learning through Modeling: Introduction to Software Development in the Business Curriculum
ERIC Educational Resources Information Center
Roussev, Boris; Rousseva, Yvonna
2004-01-01
Modern software practices call for the active involvement of business people in the software process. Therefore, programming has become an indispensable part of the information systems component of the core curriculum at business schools. In this paper, we present a model-based approach to teaching introduction to programming to general business…
Distributed intelligent scheduling of FMS
NASA Astrophysics Data System (ADS)
Wu, Zuobao; Cheng, Yaodong; Pan, Xiaohong
1995-08-01
In this paper, a distributed scheduling approach for a flexible manufacturing system (FMS) is presented. A new class of Petri nets, called networked time Petri nets (NTPN), is proposed for modeling systems in a networked environment. The distributed intelligent scheduling is implemented by three schedulers which combine NTPN models with expert system techniques. The simulation results are shown.
Teaching Complex Dynamic Systems to Young Students with StarLogo
ERIC Educational Resources Information Center
Klopfer, Eric; Yoon, Susan; Um, Tricia
2005-01-01
In this paper, we report on a program of study called Adventures in Modeling that challenges the traditional scientific method approach in science classrooms using StarLogo modeling software. Drawing upon previous successful efforts with older students, and the related work of other projects working with younger students, we explore: (a) What can…
Shilling Attacks Detection in Recommender Systems Based on Target Item Analysis
Zhou, Wei; Wen, Junhao; Koh, Yun Sing; Xiong, Qingyu; Gao, Min; Dobbie, Gillian; Alam, Shafiq
2015-01-01
Recommender systems are highly vulnerable to shilling attacks, both by individuals and groups. Attackers who introduce biased ratings in order to affect recommendations have been shown to negatively affect collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics in attack profiles. In this paper, we study the use of statistical metrics to detect rating patterns of attackers and group characteristics in attack profiles. A further issue is that most existing detection methods are model-specific. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used for analyzing rating patterns between malicious profiles and genuine profiles in attack models. Building upon this, we also propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. In order to detect more complicated attack models, we propose a novel metric called DegSim' based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks. PMID:26222882
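The two metrics are defined formally in the paper; the sketch below implements them as they are commonly stated in the shilling-attack literature (the paper's exact variants, including DegSim', may differ), applied to a toy rating matrix.

```python
# Sketch of the two profile metrics as commonly defined in the shilling-attack
# literature (the paper's exact variants may differ):
#   RDMA_u   = mean over u's items of |r_ui - mean rating of i| / (#ratings of i)
#   DegSim_u = average Pearson similarity with u's k most similar users
import numpy as np

# Toy user-item rating matrix; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 5],
    [5, 5, 5, 5],   # a suspiciously uniform profile
], dtype=float)

def rdma(R, u):
    rated = R[u] > 0
    item_means = np.array([R[R[:, i] > 0, i].mean() for i in range(R.shape[1])])
    item_counts = (R > 0).sum(axis=0)
    return np.mean(np.abs(R[u, rated] - item_means[rated]) / item_counts[rated])

def degsim(R, u, k=2):
    sims = []
    for v in range(R.shape[0]):
        if v == u:
            continue
        common = (R[u] > 0) & (R[v] > 0)
        if common.sum() >= 2 and R[u, common].std() > 0 and R[v, common].std() > 0:
            sims.append(np.corrcoef(R[u, common], R[v, common])[0, 1])
    return float(np.mean(sorted(sims, reverse=True)[:k])) if sims else 0.0

for u in range(R.shape[0]):
    print(f"user {u}: RDMA={rdma(R, u):.3f}  DegSim={degsim(R, u):.3f}")
```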
Checking a Conceptual Model for Groundwater Flow in the Fractured Rock at Äspö, Sweden
NASA Astrophysics Data System (ADS)
Kröhn, K. P.
2015-12-01
The underground Hard Rock Laboratory (HRL) at Äspö, Sweden, is located in granitic rock and dedicated to investigations concerning deep geological disposal of radioactive waste. Several in-situ experiments have been performed in the HRL, among them the recent Buffer-Rock Interaction Experiment (BRIE) and, on a much larger scale, the long-term Prototype Repository (PR) experiment. Interpretation of such experiments requires a profound understanding of the groundwater flow system. Often assumed is a conceptual model where the so-called "intact rock" is interspersed with stochastically distributed fractures. It is also a common assumption, though, that fractures in granite exist on all length-scales implying that the hydraulically relevant rock porosity is basically made up of micro fractures. The conceptual approach of GRS' groundwater flow code d3f thus appeared to be fitting where large fractures are represented discretely by lower-dimensional features while the remaining set of smaller fractures - also called "background fractures" - is assumed to act like an additional homogeneous continuum besides what is believed to be the undisturbed matrix. This approach was applied to a hydraulic model of the BRIE in a cube-like domain of 40 m side length including drifts, boreholes and three intersecting large fractures. According to observations at the underground rock laboratories Stripa and the HRL, a narrow zone of reduced permeability - called "skin" - was additionally arranged around all geotechnical openings. Calibration of the model resulted in a considerable increase of matrix permeability due to adding the effect of the background fractures. To check the validity of this approach the calibrated data for the BRIE were applied to a model for the PR which is also located in the HRL but at quite some distance. The related brick-shaped model domain has a size of 200 m x 150 m x 50 m. Fitting the calculated outflow from the rock to the measured outflow distribution along the PR-tunnel and the outflow into the six "deposition boreholes" nevertheless required only a moderate modification of the initially used permeabilities. By and large the chosen approach for the BRIE can thus be considered to have been successfully transferred to the PR.
NASA Astrophysics Data System (ADS)
Herrmann, K.
2009-11-01
Information-theoretic approaches still play a minor role in financial market analysis. Nonetheless, two very similar approaches have evolved in recent years, one in so-called econophysics and the other in econometrics. Both generalize the notion of GARCH processes in an information-theoretic sense and are able to capture kurtosis better than traditional models. In this article we present both approaches in a more general framework, which allows the derivation of a wide range of new models. We choose a third model using an entropy measure suggested by Kapur. In an application to financial market data, we find that all considered models - with similar flexibility in terms of skewness and kurtosis - lead to very similar results.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
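The core idea of an adaptive design loop can be sketched in a few lines: for each candidate stimulus, compute the expected posterior entropy over the candidate models after observing the choice, present the most informative stimulus, and update the model posterior with the observed response. The snippet below is only a simplified illustration under invented assumptions, not the authors' ADO machinery (which optimizes a formal design criterion over rich, parameterized model families): expected value stands in for Expected Utility, a crude probability-weighting rule plays the rival model, and all numerical settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# candidate stimuli: pairs of two-outcome gambles, each written as (probability, payoff)
stimuli = [((0.8, 40), (0.4, 90)), ((0.9, 20), (0.1, 200)),
           ((0.5, 60), (0.3, 100)), ((0.99, 10), (0.05, 250))]
models = ("EU", "W")

def p_choose_A(gA, gB, model):
    """Choice probability for gamble A under a softmax rule."""
    if model == "EU":                       # expected value as a stand-in utility
        vA, vB = gA[0] * gA[1], gB[0] * gB[1]
    else:                                   # crude probability-weighting rival model
        w = lambda p: p ** 0.6              # hypothetical weighting exponent
        vA, vB = w(gA[0]) * gA[1], w(gB[0]) * gB[1]
    return 1.0 / (1.0 + np.exp(-0.1 * (vA - vB)))

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

posterior = np.array([0.5, 0.5])            # prior over the two models
for trial in range(10):
    # adaptive step: choose the stimulus with the lowest expected posterior entropy
    best, best_h = None, np.inf
    for a, b in stimuli:
        pA = np.array([p_choose_A(a, b, m) for m in models])
        h = 0.0
        for obs_p in (pA, 1 - pA):          # the two possible observations (A or B chosen)
            marg = posterior @ obs_p
            h += marg * entropy(posterior * obs_p / marg)
        if h < best_h:
            best, best_h = (a, b), h
    a, b = best
    chose_A = rng.random() < p_choose_A(a, b, "W")   # simulate a "weighting" participant
    like = np.array([p_choose_A(a, b, m) if chose_A else 1 - p_choose_A(a, b, m)
                     for m in models])
    posterior = posterior * like / (posterior @ like)
print(posterior)    # posterior mass should drift toward the weighting model
```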
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3
Prather, M. J.
2015-05-27
A new approach for modeling photolysis rates ( J values) in atmospheres with fractional cloud cover has been developed and implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observed statistics for the vertical correlation of cloud layers, Cloud-J 7.3 provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations represented by four quadrature atmospheres produces mean J values in an atmospheric column with root-mean-square errors of 4% or less compared with 10–20% errors using simpler approximations. Cloud-J is practical for chemistry-climate models, requiring only an average of 2.8 Fast-J calls per atmosphere, vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections is also incorporated into Cloud-J.
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3c
Prather, M. J.
2015-08-14
A new approach for modeling photolysis rates ( J values) in atmospheres with fractional cloud cover has been developed and is implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observations of the vertical correlation of cloud layers, Cloud-J 7.3c provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations by four quadrature atmospheres produces mean J values in an atmospheric column with root mean square (rms) errors of 4 % or less compared with 10–20 % errors using simpler approximations. Cloud-J is practical for chemistry–climate models, requiring only an average of 2.8 Fast-J calls per atmosphere vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections, is also incorporated into Cloud-J.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
Chan, Jennifer S K
2016-05-01
Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes as well as the dropout indicator on each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Closed-form solution of the Ogden-Hill's compressible hyperelastic model for ramp loading
NASA Astrophysics Data System (ADS)
Berezvai, Szabolcs; Kossa, Attila
2017-05-01
This article deals with the visco-hyperelastic modelling approach for compressible polymer foam materials. Polymer foams can exhibit large elastic strains and displacements in case of volumetric compression. In addition, they often show significant rate-dependent properties. This material behaviour can be accurately modelled using the visco-hyperelastic approach, in which the large strain viscoelastic description is combined with the rate-independent hyperelastic material model. For polymer foams, the most widely used compressible hyperelastic material model, the so-called Ogden-Hill's model, was applied, which is implemented in the commercial finite element (FE) software Abaqus. The visco-hyperelastic model is defined in hereditary integral form; therefore, obtaining a closed-form solution for the stress is not a trivial task. However, the parameter-fitting procedure can be much faster and more accurate if a closed-form solution exists. In this contribution, exact stress solutions are derived for uniaxial, biaxial and volumetric compression loading cases using a ramp-loading history. The analytical stress solutions are compared with the stress results obtained in Abaqus using FE analysis. In order to highlight the benefits of the analytical closed-form solution during the parameter-fitting process, experimental work has been carried out on a particular open-cell memory foam material. The results of the material identification process show a significant accuracy improvement in the fitting procedure when applying the derived analytical solutions compared to the so-called separated approach applied in engineering practice.
ERIC Educational Resources Information Center
Akkus, Recai; Hand, Brian
2011-01-01
This study examines the changes in teaching practices during the implementation of a pedagogical model called the mathematics reasoning approach (MRA), which was founded on two critical areas in mathematics: problem solving and writing to learn. Three algebra teachers implemented the approach with their classes, which were divided into control…
Documentation Driven Development for Complex Real-Time Systems
2004-12-01
This paper presents a novel approach for development of complex real-time systems, called the documentation-driven development (DDD) approach. This...time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main...stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real-time systems.
2015-09-30
physiology, and the revised approach is called PCOD (Population Consequences Of Disturbance). In North Atlantic right whales (Eubalaena glacialis...acoustic disturbance and prey variability into the PCOD model. OBJECTIVES The objectives for this study are to: 1) develop a Hierarchical...the model to assessing the effects of acoustics on the population. We have refined and applied the PCOD model developed for right whales (Schick et
Predicting financial trouble using call data—On social capital, phone logs, and financial trouble
Lin, Chia-Ching; Chen, Kuan-Ta; Singh, Vivek Kumar
2018-01-01
An ability to understand and predict financial wellbeing for individuals is of interest to economists, policy designers, financial institutions, and the individuals themselves. According to the Nilson reports, there were more than 3 billion credit cards in use in 2013, accounting for purchases exceeding US$ 2.2 trillion, and according to the Federal Reserve report, 39% of American households were carrying credit card debt from month to month. Prior literature has connected individual financial wellbeing with social capital. However, as yet, there is limited empirical evidence connecting social interaction behavior with financial outcomes. This work reports results from one of the largest known studies connecting financial outcomes and phone-based social behavior (180,000 individuals; 2 years’ time frame; 82.2 million monthly bills, and 350 million call logs). Our methodology tackles a highly imbalanced dataset, which is a pertinent problem in modelling credit risk behavior, and offers a novel hybrid method that yields improvements over both a traditional transaction-data-only approach and an approach that uses only call data. The results pave the way for better financial modelling of billions of unbanked and underbanked customers using non-traditional metrics such as phone-based credit scoring. PMID:29474411
Predicting financial trouble using call data-On social capital, phone logs, and financial trouble.
Agarwal, Rishav Raj; Lin, Chia-Ching; Chen, Kuan-Ta; Singh, Vivek Kumar
2018-01-01
An ability to understand and predict financial wellbeing for individuals is of interest to economists, policy designers, financial institutions, and the individuals themselves. According to the Nilson reports, there were more than 3 billion credit cards in use in 2013, accounting for purchases exceeding US$ 2.2 trillion, and according to the Federal Reserve report, 39% of American households were carrying credit card debt from month to month. Prior literature has connected individual financial wellbeing with social capital. However, as yet, there is limited empirical evidence connecting social interaction behavior with financial outcomes. This work reports results from one of the largest known studies connecting financial outcomes and phone-based social behavior (180,000 individuals; 2 years' time frame; 82.2 million monthly bills, and 350 million call logs). Our methodology tackles a highly imbalanced dataset, which is a pertinent problem in modelling credit risk behavior, and offers a novel hybrid method that yields improvements over both a traditional transaction-data-only approach and an approach that uses only call data. The results pave the way for better financial modelling of billions of unbanked and underbanked customers using non-traditional metrics such as phone-based credit scoring.
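As a schematic of the hybrid idea - combining transaction-style features with call-log features while coping with a rare positive class - the sketch below fits class-weighted logistic regressions on synthetic data and compares AUCs for each feature group and for their union. The features, label mechanism, and the use of simple class weighting are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
# hypothetical features: 3 billing/transaction features and 3 call-log features
X_txn = rng.normal(size=(n, 3))
X_call = rng.normal(size=(n, 3))
# rare "financial trouble" label (~5%), driven by both feature groups
logit = 0.8 * X_txn[:, 0] - 1.2 * X_call[:, 1] - 3.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.hstack([X_txn, X_call])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# class_weight="balanced" is one simple way to cope with the imbalance
for name, cols in [("transactions only", slice(0, 3)),
                   ("calls only", slice(3, 6)),
                   ("hybrid", slice(0, 6))]:
    clf = LogisticRegression(class_weight="balanced", max_iter=1000)
    clf.fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```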
Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer
NASA Astrophysics Data System (ADS)
Schulte, Horst
2016-09-01
A new quantification method of uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term establishing and maintaining a motion on a so-called sliding surface. The injection signal is directly evaluated to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.
Observations and Models of Highly Intermittent Phytoplankton Distributions
Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu
2014-01-01
The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data at greater resolution, displaying high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, in this work a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea about the importance of the fluctuating parts of model variables and about when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach, with real observations show good agreement. PMID:24787740
Bayesian inference for psychology, part IV: parameter estimation and Bayes factors.
Rouder, Jeffrey N; Haaf, Julia M; Vandekerckhove, Joachim
2018-02-01
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
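For a concrete sense of how estimation and the Bayes factor connect under a spike-and-slab prior, consider the textbook case of a normal mean with known standard deviation: the Bayes factor is the ratio of the sample mean's marginal likelihood under the slab to that under the spike, and the model-averaged posterior mean of the effect shrinks by the posterior probability of the slab. The sketch below uses made-up data and an assumed slab scale; it is an illustration of the general idea, not the specific analyses in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma, tau, n = 1.0, 0.5, 50           # known sd, slab scale, sample size (assumed)
x = rng.normal(0.2, sigma, n)          # data with a small true effect
xbar, se = x.mean(), sigma / np.sqrt(n)

# marginal likelihoods of the sample mean under spike (mu = 0) and slab (mu ~ N(0, tau^2))
m_spike = norm.pdf(xbar, 0.0, se)
m_slab = norm.pdf(xbar, 0.0, np.sqrt(se**2 + tau**2))
bf10 = m_slab / m_spike

# posterior over the two components (equal prior odds) and the mixture estimate of mu
p_slab = bf10 / (1 + bf10)
shrink = tau**2 / (tau**2 + se**2)     # posterior mean factor for mu under the slab
mu_hat = p_slab * shrink * xbar        # the spike contributes exactly 0
print(f"BF10 = {bf10:.2f}, P(slab|data) = {p_slab:.2f}, E[mu|data] = {mu_hat:.3f}")
```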
2013-09-30
the revised approach is called PCOD (Population Consequences Of Disturbance). In North Atlantic right whales (Eubalaena glacialis), extensive data on...disturbance and prey variability into the PCOD model. Figure 1. Modified model of population consequences of disturbance (PCOD) (Thomas et al. 2011). OBJECTIVES The objectives for this study are
ERIC Educational Resources Information Center
Martino, Wayne J.
2015-01-01
This article provides a critical analysis of the political significance of role modelling as it relates to envisaging a critical multicultural approach to educational reform. While not rejecting role modelling outright, it calls for a commitment to questioning the limits of common sense understandings that underpin the logic of gender and racial…
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called the identifiability matrix). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
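The core data-reduction step - multiplying both the observations and the (linearized) forward operator by a short, random "sketching" matrix before solving - can be illustrated on a plain least-squares problem. This is only a sketch of the idea under simplifying assumptions (a dense Gaussian sketch and an explicit linear forward model), not the RGA/PCGA implementation itself, which embeds the reduction within a geostatistical inversion.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par, k = 20_000, 50, 400      # many observations, few parameters, sketch size (assumed)
H = rng.normal(size=(n_obs, n_par))    # stand-in linearized forward model
m_true = rng.normal(size=n_par)
d = H @ m_true + 0.01 * rng.normal(size=n_obs)

# a Gaussian "sketching" matrix compresses the observation space from n_obs to k rows
S = rng.normal(size=(k, n_obs)) / np.sqrt(k)
m_full = np.linalg.lstsq(H, d, rcond=None)[0]            # classical solve on all data
m_sketch = np.linalg.lstsq(S @ H, S @ d, rcond=None)[0]  # solve on the reduced system
print("relative difference (sketched vs full):",
      np.linalg.norm(m_sketch - m_full) / np.linalg.norm(m_full))
```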
Flexible Environmental Modeling with Python and Open-GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow an efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing an efficient processing of the geospatial data that comes along with any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once the input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
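A minimal sketch of the scripted-call pattern described here is shown below. The executable name, parameter-file format, and output file are hypothetical placeholders; in a real study they would be replaced by the actual groundwater or geochemical model and its input/output conventions, and the resulting table would be handed back to the GIS layer for mapping.

```python
import csv
import itertools
import subprocess
from pathlib import Path

# hypothetical command-line model: reads a parameter file, writes one value to out.csv
MODEL_EXE = str(Path("flow_model").resolve())   # placeholder executable name
workdir = Path("runs")
workdir.mkdir(exist_ok=True)

results = []
for i, (k_aquifer, recharge) in enumerate(itertools.product([1e-5, 1e-4, 1e-3],
                                                            [100, 200, 300])):
    run_dir = workdir / f"run_{i:03d}"
    run_dir.mkdir(exist_ok=True)
    (run_dir / "params.txt").write_text(f"K {k_aquifer}\nR {recharge}\n")
    subprocess.run([MODEL_EXE, "params.txt"], cwd=run_dir, check=True)
    with open(run_dir / "out.csv") as f:
        head = float(next(csv.reader(f))[0])        # post-process one output value
    results.append((k_aquifer, recharge, head))

print(results)   # table ready to be sent back to QGIS for mapping
```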
On splice site prediction using weight array models: a comparison of smoothing techniques
NASA Astrophysics Data System (ADS)
Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard
2007-11-01
In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
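As a concrete illustration of the weight-array idea, the sketch below estimates position-specific dinucleotide (first-order) probabilities from a handful of toy donor-site-like sequences and smooths them with simple pseudocounts (option (b) in the comparison above); the sequences and pseudocount value are made up, and the Gaussian smoothing variant is not shown.

```python
import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def train_wam(seqs, pseudo=1.0):
    """First-order weight array model: P(x_j | x_{j-1}) estimated per position
    from dinucleotide counts, smoothed with simple pseudocounts."""
    L = len(seqs[0])
    counts = np.zeros((L - 1, 4, 4)) + pseudo
    for s in seqs:
        for j in range(L - 1):
            counts[j, IDX[s[j]], IDX[s[j + 1]]] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def score(wam, s):
    """Log-probability of a candidate site under the positional model."""
    return sum(np.log(wam[j, IDX[s[j]], IDX[s[j + 1]]]) for j in range(len(s) - 1))

true_sites = ["AAGGTAAGT", "CAGGTGAGT", "AAGGTAAGG"]   # toy donor-site-like sequences
wam = train_wam(true_sites, pseudo=0.5)
print(score(wam, "AAGGTAAGT"), score(wam, "CCCCCCCCC"))
```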
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
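For reference, the sketch below implements the plain KLIEP estimator with fixed Gaussian kernels (projected gradient ascent on the test-sample log-likelihood of the importance, with its training-sample mean constrained to one); GM-KLIEP replaces the fixed kernels with a full Gaussian mixture whose covariances are learned by EM, which is not shown here. The kernel width, learning rate, and toy densities are illustrative choices.

```python
import numpy as np

def kliep(x_tr, x_te, sigma=0.5, n_centers=20, iters=2000, lr=1e-3, seed=0):
    """Plain KLIEP with fixed Gaussian kernels; returns a callable w(x) ~ p_te(x)/p_tr(x)."""
    rng = np.random.default_rng(seed)
    centers = x_te[rng.choice(len(x_te), n_centers, replace=False)]
    K = lambda X: np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    A, B = K(x_te), K(x_tr)                  # kernel matrices for test and training samples
    b = B.mean(axis=0)
    alpha = np.ones(n_centers)
    for _ in range(iters):
        w_te = A @ alpha
        alpha = alpha + lr * (A / w_te[:, None]).sum(axis=0)   # ascend sum(log w) over test set
        alpha = np.maximum(alpha, 0.0)                         # non-negativity
        alpha = alpha / (b @ alpha)          # enforce mean importance of 1 on training data
    return lambda x: K(np.atleast_1d(x)) @ alpha

rng = np.random.default_rng(1)
x_tr = rng.normal(0.0, 1.0, 500)             # denominator (training) samples
x_te = rng.normal(0.5, 0.7, 500)             # numerator (test) samples
w = kliep(x_tr, x_te)
print(w(np.array([0.5, 2.0, -2.0])))         # importance should be largest near 0.5
```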
ERIC Educational Resources Information Center
Bowman, Phillip J.
2006-01-01
This article applauds the strength-based model (SBM) of counseling but calls for an extension. In the existential or humanistic tradition, the SBM builds on emerging trends in psychology to highlight the importance of individual strengths in counseling interventions. However, a role strain and adaptation (RSA) approach extends the SBM to…
A Value-Added Approach to Selecting the Best Master of Business Administration (MBA) Program
ERIC Educational Resources Information Center
Fisher, Dorothy M.; Kiang, Melody; Fisher, Steven A.
2007-01-01
Although numerous studies rank master of business administration (MBA) programs, prospective students' selection of the best MBA program is a formidable task. In this study, the authors used a linear-programming-based model called data envelopment analysis (DEA) to evaluate MBA programs. The DEA model connects costs to benefits to evaluate the…
Effective Skills for Child-Care Workers: A Training Manual from Boys Town.
ERIC Educational Resources Information Center
Dowd, Tom; And Others
Boys Town, founded in 1917 by Father Edward Flanagan, attempts to respond to the challenges faced by today's children and youth with its own child care model, called the Boys Town Family Home Program. This model is based on family-style nurturing, behavioral-based instruction, and a "systems" approach to staff training and development.…
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between the number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively); and (3) a linear correlation between cell area and squared perimeter (called the convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreement for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
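The crystallographic relations cited here are easy to reproduce on a synthetic cellular pattern. The sketch below builds a random Voronoi tessellation (a stand-in for a segmented granulation image, not solar data) and checks Lewis's law by averaging cell area as a function of the number of cell sides; cells near the boundary are simply discarded.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(4)
pts = rng.random((2000, 2))
vor = Voronoi(pts)

sides, areas = [], []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:      # skip unbounded cells
        continue
    poly = vor.vertices[region]
    if np.any(poly < 0.05) or np.any(poly > 0.95):
        continue                              # stay away from the domain boundary
    sides.append(len(region))
    areas.append(ConvexHull(poly).volume)     # 2-D "volume" is the polygon area

sides, areas = np.array(sides), np.array(areas)
# Lewis's law: mean cell area grows roughly linearly with the number of sides
for n in range(4, 9):
    sel = sides == n
    if sel.any():
        print(n, areas[sel].mean())
```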
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
Helble, Tyler A; D'Spain, Gerald L; Hildebrand, John A; Campbell, Gregory S; Campbell, Richard L; Heaney, Kevin D
2013-09-01
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. A common mistake in the analysis of marine mammal acoustic data is formulating conclusions about these animals without first understanding how environmental properties such as bathymetry, sediment properties, water column sound speed, and ocean acoustic noise influence the detection and character of vocalizations in the acoustic data. The approach in this paper is to use Monte Carlo simulations with a full wave field acoustic propagation model to characterize the site specific probability of detection of six types of humpback whale calls at three passive acoustic monitoring locations off the California coast. Results show that the probability of detection can vary by factors greater than ten when comparing detections across locations, or comparing detections at the same location over time, due to environmental effects. Effects of uncertainties in the inputs to the propagation model are also quantified, and the model accuracy is assessed by comparing calling statistics amassed from 24,690 humpback units recorded in the month of October 2008. Under certain conditions, the probability of detection can be estimated with uncertainties sufficiently small to allow for accurate density estimates.
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
1988-12-01
The software development scene is often characterized by: schedule and cost estimates that are grossly inaccurate, SEI... c. SPQR Model - Jones; d. COPMO - Thebaut... T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model. The basic approach is similar to that of Boehm's... The time (in seconds) is simply derived from E by dividing by the Stroud number, S: T = E/S. The value...
Toward a descriptive model of galactic cosmic rays in the heliosphere
NASA Technical Reports Server (NTRS)
Mewaldt, R. A.; Cummings, A. C.; Adams, James H., Jr.; Evenson, Paul; Fillius, W.; Jokipii, J. R.; Mckibben, R. B.; Robinson, Paul A., Jr.
1988-01-01
Researchers review the elements that enter into phenomenological models of the composition, energy spectra, and the spatial and temporal variations of galactic cosmic rays, including the so-called anomalous cosmic ray component. Starting from an existing model, designed to describe the behavior of cosmic rays in the near-Earth environment, researchers suggest possible updates and improvements to this model, and then propose a quantitative approach for extending such a model into other regions of the heliosphere.
A Substance Called Food: Long-Term Psychodynamic Group Treatment for Compulsive Overeating.
Schwartz, Deborah C; Nickow, Marcia S; Arseneau, Ric; Gisslow, Mary T
2015-07-01
Obesity has proven difficult to treat. Many approaches neglect to address the deep-rooted underlying psychological issues. This paper describes a psychodynamically oriented approach to treating compulsive overeating as an addiction. Common to all addictions is a compulsion to consume a substance or engage in a behavior, a preoccupation with using behavior and rituals, and a lifestyle marked by an inability to manage the behavior and its harmful consequences. The approach represents a shift away from primarily medical models of intervention to integrated models focusing on the psychological underpinnings of obesity. Long-term psychodynamic group psychotherapy is recommended as a primary treatment.
Emergency residential care settings: A model for service assessment and design.
Graça, João; Calheiros, Maria Manuela; Patrício, Joana Nunes; Magalhães, Eunice Vieira
2018-02-01
There have been calls for uncovering the "black box" of residential care services, with a particular need for research focusing on emergency care settings for children and youth in danger. In fact, the strikingly scant empirical attention that these settings have received so far contrasts with the role that they often play as gateway into the child welfare system. To answer these calls, this work presents and tests a framework for assessing a service model in residential emergency care. It comprises seven studies which address a set of different focal areas (e.g., service logic model; care experiences), informants (e.g., case records; staff; children/youth), and service components (e.g., case assessment/evaluation; intervention; placement/referral). Drawing on this process-consultation approach, the work proposes a set of key challenges for emergency residential care in terms of service improvement and development, and calls for further research targeting more care units and different types of residential care services. These findings offer a contribution to inform evidence-based practice and policy in service models of residential care. Copyright © 2017 Elsevier Ltd. All rights reserved.
TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.
Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud
2011-09-01
Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both raw sequencing data and the reference genome that can yield better quality sequence reads, SNP-calls, variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in soft- and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers - Bustard, BayesCall, Ibis and Rolexa - based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at: http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/. Contact: fabian.menges@nyu.edu.
Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz
2016-05-01
A notable sequence of calls was encountered, spanning several days in January 2003, in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect, group, and extract amplitude and bearing estimates for the recorded signals. An approximate location for the source of the sequence of calls is inferred from the features extracted from the waveform. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific, which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
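A simplified, directional-variogram flavour of this idea can be written in a few lines: perturb each factor of a toy response function by increasing lags h, accumulate 0.5·E[(y(x + h·e_i) − y(x))^2], and integrate over h to obtain a crude global score per factor. This is only an illustration of the variogram notion underlying VARS; it is not the STAR-VARS sampling scheme, and the response function and settings are invented for the example.

```python
import numpy as np

def response(x):                     # stand-in model response (not an actual EESM)
    return np.sin(3 * x[..., 0]) + 0.3 * x[..., 1] ** 2 + 0.05 * x[..., 2]

rng = np.random.default_rng(5)
n_points, dims = 200, 3
centres = rng.random((n_points, dims))         # base points sampled over the unit cube

hs = np.arange(0.05, 0.55, 0.05)
for i in range(dims):
    gam = []
    for h in hs:
        x2 = centres.copy()
        x2[:, i] = np.clip(centres[:, i] + h, 0, 1)   # lags truncated at the boundary
        d = response(x2) - response(centres)
        gam.append(0.5 * np.mean(d ** 2))             # directional variogram estimate
    # integrated variogram over the h range as a crude "global" sensitivity score
    print(f"factor {i}: integrated variogram ~ {np.trapz(gam, hs):.4f}")
```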
A modified acceleration-based monthly gravity field solution from GRACE data
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze; Ju, Xiaolei
2015-08-01
This paper describes an alternative acceleration approach for determining GRACE monthly gravity field models. The main differences compared to the traditional acceleration approach can be summarized as follows: (1) the position errors of the GRACE orbits are taken into account in the functional model; (2) the range ambiguity is eliminated via differencing of the range measurements; and (3) the mean acceleration equation is formed based on Cowell integration. Using this approach, a new time series of GRACE monthly solutions spanning January 2003 to December 2010, called Tongji_Acc RL01, has been derived. The annual signals from the Tongji_Acc RL01 time-series agree well with those from the GLDAS model. The performance of Tongji_Acc RL01 shows that this new model is comparable with the RL05 models released by CSR and JPL as well as with the RL05a model released by GFZ.
NASA Technical Reports Server (NTRS)
Murray, William R.
1990-01-01
An approach is described to student modeling for intelligent tutoring systems based on an explicit representation of the tutor's beliefs about the student and the arguments for and against those beliefs (called endorsements). A lexicographic comparison of arguments, sorted according to evidence reliability, provides a principled means of determining those beliefs that are considered true, false, or uncertain. Each of these beliefs is ultimately justified by underlying assessment data. The endorsement-based approach to student modeling is particularly appropriate for tutors controlled by instructional planners. These tutors place greater demands on a student model than opportunistic tutors. Numerical calculi approaches are less well-suited because it is difficult to correctly assign numbers for evidence reliability and rule plausibility. It may also be difficult to interpret final results and provide suitable combining functions. When numeric measures of uncertainty are used, arbitrary numeric thresholds are often required for planning decisions. Such an approach is inappropriate when robust context-sensitive planning decisions must be made. A TMS-based implementation of the endorsement-based approach to student modeling is presented, this approach is compared to alternatives, and a project history is provided describing the evolution of this approach.
SNIF-ACT: A Cognitive Model of User Navigation on the World Wide Web
2007-01-03
opinions of others on a particular topic or problems. Obviously, our model was not able to answer these questions directly, and more research is...models. Rational analysis is a variant form of an approach called methodological adaptationism that has also shaped research programs in behavioral
Using Synchronous Boolean Networks to Model Several Phenomena of Collective Behavior
Kochemazov, Stepan; Semenov, Alexander
2014-01-01
In this paper, we propose an approach for modeling and analysis of a number of phenomena of collective behavior. By collectives we mean multi-agent systems that transition from one state to another at discrete moments of time. The behavior of a member of a collective (agent) is called conforming if the opinion of this agent at the current time moment conforms to the opinion of some other agents at the previous time moment. We presume that at each moment of time every agent makes a decision by choosing from the set {0, 1} (where a 1-decision corresponds to action and a 0-decision corresponds to inaction). In our approach we model collective behavior with synchronous Boolean networks. We presume that in a network there can be agents that act at every moment of time. Such agents are called instigators. Also there can be agents that never act. Such agents are called loyalists. Agents that are neither instigators nor loyalists are called simple agents. We study two combinatorial problems. The first problem is to find a disposition of instigators that in several time moments transforms a network from a state where the majority of simple agents are inactive to a state with the majority of active agents. The second problem is to find a disposition of loyalists that returns the network to a state with the majority of inactive agents. Similar problems are studied for networks in which simple agents demonstrate behavior contrary to conforming behavior, which we call anticonforming. We obtained several theoretical results regarding the behavior of collectives of agents with conforming or anticonforming behavior. In computational experiments we solved the described problems for randomly generated networks with several hundred vertices. We reduced the corresponding combinatorial problems to the Boolean satisfiability problem (SAT) and used modern SAT solvers to solve the instances obtained. PMID:25526612
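A minimal simulation of the conforming-dynamics setting (a synchronous Boolean network with instigators fixed at 1 and loyalists fixed at 0) is sketched below; the network size, randomly chosen neighbourhoods, majority threshold, and dispositions are arbitrary illustrative choices, and the paper's actual disposition-search problems are solved by reduction to SAT rather than by simulation.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
# random "conformity" structure: each agent watches 3 randomly chosen agents
neigh = np.array([rng.choice(n, 3, replace=False) for _ in range(n)])

instigators = rng.choice(n, 15, replace=False)                              # always act
loyalists = rng.choice(np.setdiff1d(np.arange(n), instigators), 15, replace=False)  # never act

state = np.zeros(n, dtype=int)                     # mostly inactive initial state
state[instigators] = 1
for t in range(20):
    # conforming rule: a simple agent acts iff a majority of its watched agents acted
    new = (state[neigh].sum(axis=1) >= 2).astype(int)
    new[instigators], new[loyalists] = 1, 0
    state = new
    print(t, state.sum())                          # number of active agents per step
```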
Joint parameter and state estimation algorithms for real-time traffic monitoring.
DOT National Transportation Integrated Search
2013-12-01
A common approach to traffic monitoring is to combine a macroscopic traffic flow model with traffic sensor data in a process called state estimation, data fusion, or data assimilation. The main challenge of traffic state estimation is the integration...
A CONCEPTUAL MODEL FOR ECOSYSTEM - HUMAN HEALTH INTERCONNECTIONS
Much environmental policy fails to consider the relationships that exist among component parts of the natural and social world. The linkages that exist between natural and social systems are intricate and varied and call for new and creative approaches to environmental policy and...
Some Moral Dimensions of Administrative Theory and Practice.
ERIC Educational Resources Information Center
Raywid, Mary Anne
1986-01-01
Examines management approaches in ethical terms, arriving at numerous criteria applicable to educational administration. Discusses scientific management, morally neutral concepts, hyperrationalization, tightening of controls, and the business/industry model as having eclipsed or confused the moral dimensions of education. Calls for enlarged moral…
RUBIC identifies driver genes by detecting recurrent DNA copy number breaks
van Dyk, Ewald; Hoogstraat, Marlous; ten Hoeve, Jelle; Reinders, Marcel J. T.; Wessels, Lodewyk F. A.
2016-01-01
The frequent recurrence of copy number aberrations across tumour samples is a reliable hallmark of certain cancer driver genes. However, state-of-the-art algorithms for detecting recurrent aberrations fail to detect several known drivers. In this study, we propose RUBIC, an approach that detects recurrent copy number breaks, rather than recurrently amplified or deleted regions. This change of perspective allows for a simplified approach as recursive peak splitting procedures and repeated re-estimation of the background model are avoided. Furthermore, we control the false discovery rate on the level of called regions, rather than at the probe level, as in competing algorithms. We benchmark RUBIC against GISTIC2 (a state-of-the-art approach) and RAIG (a recently proposed approach) on simulated copy number data and on three SNP6 and NGS copy number data sets from TCGA. We show that RUBIC calls more focal recurrent regions and identifies a much larger fraction of known cancer genes. PMID:27396759
DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.
Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng
2017-12-19
Historical data analysis shows that escalation accidents, so-called domino effects, have an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases. © 2017 Society for Risk Analysis.
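A hedged, bottom-up sketch of the idea (installations as agents whose accumulated heat radiation dose triggers escalation); the source strength, threshold, geometry, and attenuation law below are illustrative placeholders, not the calibrated models used in the study.

```python
# Each installation is an agent; a burning agent radiates heat to the others,
# and an intact agent escalates (catches fire) once its accumulated heat dose
# exceeds a threshold. All numbers below are illustrative, not calibrated values.
installations = {
    "T1": {"pos": (0.0, 0.0),  "burning": True,  "dose": 0.0},
    "T2": {"pos": (20.0, 0.0), "burning": False, "dose": 0.0},
    "T3": {"pos": (60.0, 0.0), "burning": False, "dose": 0.0},
}
SOURCE_TERM = 5000.0      # illustrative radiative source strength
THRESHOLD = 1000.0        # illustrative escalation dose
DT = 60.0                 # seconds per simulation step

def flux(src, dst):
    """Illustrative inverse-square attenuation of heat radiation with distance."""
    (x1, y1), (x2, y2) = src["pos"], dst["pos"]
    return SOURCE_TERM / max((x1 - x2) ** 2 + (y1 - y2) ** 2, 1.0)

for step in range(30):
    burning_now = [u for u in installations.values() if u["burning"]]
    newly_failed = []
    for name, unit in installations.items():
        if unit["burning"]:
            continue
        unit["dose"] += sum(flux(src, unit) for src in burning_now) * DT
        if unit["dose"] >= THRESHOLD:
            newly_failed.append(name)
    for name in newly_failed:              # synchronous escalation at step end
        installations[name]["burning"] = True

print({name: u["burning"] for name, u in installations.items()})
```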
ERIC Educational Resources Information Center
McDonald, Paige L.; Lyons, Laurie B.; Straker, Howard O.; Barnett, Jacqueline S.; Schlumpf, Karen S.; Cotton, Linda; Corcoran, Mary A.
2014-01-01
For disciplines heavily reliant upon traditional classroom teaching, such as medicine and health sciences, incorporating new learning models may pose challenges for students and faculty. In an effort to innovate curricula, better align courses to required student learning outcomes, and address the call to redesign health professions education,…
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is current methods for automated calling of genotypes are based on clustering approaches which require a large number of samples to be analyzed simultaneously, or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per sample basis allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
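ALCHEMY's full likelihood is not reproduced here, but the role of a per-sample inbreeding coefficient F can be illustrated with a small sketch: genotype priors shift away from heterozygotes as F grows and are combined with per-cluster intensity likelihoods (the one-dimensional Gaussian clusters and allele frequency below are purely illustrative).

```python
import math

def genotype_priors(p, f):
    """Prior genotype probabilities for allele frequency p and inbreeding
    coefficient f (f = 0: Hardy-Weinberg; f = 1: fully inbred, no heterozygotes)."""
    q = 1.0 - p
    return {"AA": p * p + f * p * q,
            "AB": 2.0 * p * q * (1.0 - f),
            "BB": q * q + f * p * q}

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative cluster positions on a one-dimensional contrast axis.
CLUSTERS = {"AA": (-1.0, 0.2), "AB": (0.0, 0.2), "BB": (1.0, 0.2)}

def call_genotype(intensity, p, f):
    """Posterior genotype call = inbreeding-aware prior x intensity likelihood."""
    post = {g: genotype_priors(p, f)[g] * normal_pdf(intensity, *CLUSTERS[g])
            for g in CLUSTERS}
    z = sum(post.values())
    return max(((g, v / z) for g, v in post.items()), key=lambda kv: kv[1])

# An intermediate intensity is called heterozygous for an outbred sample but
# homozygous for a highly inbred one.
print(call_genotype(-0.4, p=0.5, f=0.0))
print(call_genotype(-0.4, p=0.5, f=0.95))
```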
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Modeling asset price processes based on mean-field framework
NASA Astrophysics Data System (ADS)
Ieda, Masashi; Shiino, Masatoshi
2011-12-01
We propose a model of the dynamics of financial assets based on the mean-field framework. This framework allows us to construct a model which includes the interaction among the financial assets reflecting the market structure. Our study is on the cutting edge in the sense of a microscopic approach to modeling the financial market. To demonstrate the effectiveness of our model concretely, we provide a case study, which is the pricing problem of the European call option with short-time memory noise.
An Alternative Approach to the Extended Drude Model
NASA Astrophysics Data System (ADS)
Gantzler, N. J.; Dordevic, S. V.
2018-05-01
The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
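For orientation, one common convention for the extended Drude analysis (hedged: generic textbook notation, not necessarily this paper's) extracts the frequency-dependent scattering rate and effective mass from the complex optical conductivity as

\[
\sigma(\omega) = \frac{\omega_p^2/4\pi}{1/\tau(\omega) - i\,\omega\,m^*(\omega)/m_b},
\qquad
\frac{1}{\tau(\omega)} = \frac{\omega_p^2}{4\pi}\,\mathrm{Re}\!\left[\frac{1}{\sigma(\omega)}\right],
\qquad
\frac{m^*(\omega)}{m_b} = -\frac{\omega_p^2}{4\pi\,\omega}\,\mathrm{Im}\!\left[\frac{1}{\sigma(\omega)}\right],
\]

whereas the alternative parametrization discussed here keeps the band mass fixed and instead lets a frequency-dependent plasma frequency \(\omega_p(\omega)\) carry that dependence.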
Multiple imputation of missing covariates for the Cox proportional hazards cure model
Beesley, Lauren J; Bartlett, Jonathan W; Wolf, Gregory T; Taylor, Jeremy M G
2016-01-01
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects “cured,” and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification (FCS). We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. PMID:27439726
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
2015-03-26
albeit powerful , method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based...cells, tele- phone calls, and sexual contacts approach power -law distributions. [48] Networks in general are robust against random failures, but...targeted failures can have powerful effects – provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least squared error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
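A minimal, generic sketch of the MCMC calibration idea (a random-walk Metropolis sampler for a toy one-parameter model; actual PBTK/TD models involve many parameters and hierarchical priors, so this is only the skeleton of the approach):

```python
import random, math

random.seed(1)
data = [2.1, 1.9, 2.4, 2.2]           # toy "measured internal doses"

def log_prior(theta):                  # prior knowledge: theta ~ Normal(2, 1)
    return -0.5 * (theta - 2.0) ** 2

def log_likelihood(theta):             # toy model: predictions ~ Normal(theta, 0.3)
    return sum(-0.5 * ((y - theta) / 0.3) ** 2 for y in data)

theta, posterior_samples = 1.0, []
for i in range(20000):
    proposal = theta + random.gauss(0.0, 0.1)            # random-walk proposal
    log_alpha = (log_prior(proposal) + log_likelihood(proposal)
                 - log_prior(theta) - log_likelihood(theta))
    if math.log(random.random()) < log_alpha:            # Metropolis acceptance
        theta = proposal
    if i > 5000:                                          # discard burn-in
        posterior_samples.append(theta)

print("posterior mean:", sum(posterior_samples) / len(posterior_samples))
```

Each accepted draw updates the prior belief with the data; the retained samples approximate the posterior distribution of the parameter.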
Unterberger, Michael J; Holzapfel, Gerhard A
2014-11-01
The protein actin is a part of the cytoskeleton and, therefore, responsible for the mechanical properties of the cells. Starting with the single molecule up to the final structure, actin creates a hierarchical structure of several levels exhibiting a remarkable behavior. The hierarchy spans several length scales and limitations in computational power; therefore, there is a call for different mechanical modeling approaches for the different scales. On the molecular level, we may consider each atom in molecular dynamics simulations. Actin forms filaments by combining the molecules into a double helix. In a model, we replace molecular subdomains using coarse-graining methods, allowing the investigation of larger systems of several atoms. These models on the nanoscale inform continuum mechanical models of large filaments, which are based on worm-like chain models for polymers. Assemblies of actin filaments are connected with cross-linker proteins. Models with discrete filaments, so-called Mikado models, allow us to investigate the dependence of the properties of networks on the parameters of the constituents. Microstructurally motivated continuum models of the networks provide insights into larger systems containing cross-linked actin networks. Modeling of such systems helps to gain insight into the processes on such small scales. On the other hand, they call for verification and hence trigger the improvement of established experiments and the development of new methods.
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the current accepted concept and philosophy of safety improvement based on a Reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory a Reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard, worst case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is included. This approach is defined and detailed using the same example case study as shown in the REBST case study. In the end it is concluded that an approach combining the two theories works best to reduce Safety Risk.
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called as NN1) is synthesised offline. However, another linear-in-weight neural network (called as NN2) is trained online and augmented to NN1 in such a manner that their combined output represent the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called as NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named as 'Dynamically Re-optimised single network adaptive critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with closed form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.
Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS
NASA Astrophysics Data System (ADS)
Zhao, Banghua
Motivated by a recently discovered concept, Structure Genome (SG) which is defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) to model and analyze composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp(TM), which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp(TM), a simple graphic user interface (GUI) based on ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI is developed, which provides a convenient way to create some common SG models or arbitrary customized SG models in ANSYS and invoke SwiftComp(TM) to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which could predict the global behavior and provide needed inputs for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.
Network Models: An Underutilized Tool in Wildlife Epidemiology?
Craft, Meggan E.; Caillaud, Damien
2011-01-01
Although the approach of contact network epidemiology has been increasing in popularity for studying transmission of infectious diseases in human populations, it has generally been an underutilized approach for investigating disease outbreaks in wildlife populations. In this paper we explore the differences between the type of data that can be collected on human and wildlife populations, provide an update on recent advances that have been made in wildlife epidemiology by using a network approach, and discuss why networks might have been underutilized and why networks could and should be used more in the future. We conclude with ideas for future directions and a call for field biologists and network modelers to engage in more cross-disciplinary collaboration. PMID:21527981
Ranking Theory and Conditional Reasoning.
Skovgaard-Olsen, Niels
2016-05-01
Ranking theory is a formal epistemology that has been developed in over 600 pages in Spohn's recent book The Laws of Belief, which aims to provide a normative account of the dynamics of beliefs that presents an alternative to current probabilistic approaches. It has long been received in the AI community, but it has not yet found application in experimental psychology. The purpose of this paper is to derive clear, quantitative predictions by exploiting a parallel between ranking theory and a statistical model called logistic regression. This approach is illustrated by the development of a model for the conditional inference task using Spohn's (2013) ranking theoretic approach to conditionals. Copyright © 2015 Cognitive Science Society, Inc.
An improved model of fission gas atom transport in irradiated uranium dioxide
NASA Astrophysics Data System (ADS)
Shea, J. H.
2018-04-01
The hitherto standard approach to predicting fission gas release has been a pure diffusion gas atom transport model based upon Fick's law. An additional mechanism has subsequently been identified from experimental data at high burnup and has been summarised in an empirical model that is considered to embody a so-called fuel matrix 'saturation' phenomenon whereby the fuel matrix has become saturated with fission gas so that the continued addition of extra fission gas atoms results in their expulsion from the fuel matrix into the fuel rod plenum. The present paper proposes a different approach by constructing an enhanced fission gas transport law consisting of two components: 1) Fick's law and 2) a so-called drift term. The new transport law can be shown to be effectively identical in its predictions to the 'saturation' approach and is more readily physically justifiable. The method introduces a generalisation of the standard diffusion equation which is dubbed the Drift Diffusion Equation. According to the magnitude of a dimensionless Péclet number, P, the new equation can vary from pure diffusion to pure drift, which latter represents a collective motion of the fission gas atoms through the fuel matrix at a translational velocity. Comparison is made between the saturation and enhanced transport approaches. Because of its dependence on P, the Drift Diffusion Equation is shown to be more effective at managing the transition from one type of limiting transport phenomenon to the other. Thus it can adapt appropriately according to the reactor operation.
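In generic notation (hedged: the symbols below are not necessarily those used in the paper), the enhanced transport law adds a drift flux to Fick's law, and the Péclet number measures the relative strength of drift and diffusion:

\[
J = -D\,\nabla c + v\,c,
\qquad
\frac{\partial c}{\partial t} = \nabla\cdot\left(D\,\nabla c\right) - \nabla\cdot\left(v\,c\right),
\qquad
P = \frac{vL}{D},
\]

where \(c\) is the fission gas concentration, \(D\) the diffusion coefficient, \(v\) the drift (translational) velocity, and \(L\) a characteristic length; \(P \to 0\) recovers pure diffusion and large \(P\) approaches pure drift.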
ERIC Educational Resources Information Center
Rogerson-Revell, Pamela
2005-01-01
This paper describes some of the pedagogical and technical issues involved in adopting a hybrid approach to CALL materials development. It illustrates some of these issues with reference to a vocational CALL project, LANCAM, which took such a hybrid approach. It describes some of the benefits and considerations involved in hybrid development and…
A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods
Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian
2014-01-01
One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data including multilevel and/or latent variable modeling approaches, only few modeling approaches have been developed for studying the construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings for teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher, self-ratings, principle ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515
2014-04-25
EA’s Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL file and generate the corresponding UML ... ObjectItemStructure specification shown in Figure 10. Running this script in the relational database server MySQL creates the physical schema that ...
The Ignorant Supervisor: About Common Worlds, Epistemological Modesty and Distributed Knowledge
ERIC Educational Resources Information Center
Engels-Schwarzpaul, A.-Chr.
2015-01-01
When postgraduate researchers' interests lie outside the body(ies) of knowledge with which their supervisors are familiar, different supervisory approaches are called for. In such situations, questions concerning the appropriateness of traditional models arise, which almost invariably involve a budding candidate's relationship with a…
A New Frontier for Educational Research.
ERIC Educational Resources Information Center
Richmond, George H.
A microeconomic simulation game, called Micro-Economy, is discussed. The game approach was developed to provide students and educators an opportunity to express the aspirations, values, and principles of the people living in an environment. By following a microeconomic model, Society School, students create microinstitutions for their society at…
The Learning Cycle and College Science Teaching.
ERIC Educational Resources Information Center
Barman, Charles R.; Allard, David W.
Originally developed in an elementary science program called the Science Curriculum Improvement Study, the learning cycle (LC) teaching approach involves students in an active learning process modeled on four elements of Jean Piaget's theory of cognitive development: physical experience, referring to the biological growth of the central nervous…
The Layer-Oriented Approach to Declarative Languages for Biological Modeling
Raikov, Ivan; De Schutter, Erik
2012-01-01
We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language. PMID:22615554
A Data Snapshot Approach for Making Real-Time Predictions in Basketball.
Kayhan, Varol Onur; Watkins, Alison
2018-06-08
This article proposes a novel approach, called data snapshots, to generate real-time probabilities of winning for National Basketball Association (NBA) teams while games are being played. The approach takes a snapshot from a live game, identifies historical games that have the same snapshot, and uses the outcomes of these games to calculate the winning probabilities of the teams in this game as the game is underway. Using data obtained from 20 seasons worth of NBA games, we build three models and compare their accuracies to a baseline accuracy. In Model 1, each snapshot includes the point difference between the home and away teams at a given second of the game. In Model 2, each snapshot includes the net team strength in addition to the point difference at a given second. In Model 3, each snapshot includes the rate of score change in addition to the point difference at a given second. The results show that all models perform better than the baseline accuracy, with Model 1 being the best model.
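A minimal sketch of the snapshot lookup for Model 1 (hypothetical data layout; Models 2 and 3 additionally condition on net team strength or rate of score change):

```python
from collections import defaultdict

# Historical observations: (second_of_game, home_minus_away_points, home_team_won)
history = [
    (1500, 5, True), (1500, 5, True), (1500, 5, False),
    (1500, -3, False), (1500, -3, True),
]

# Index historical outcomes by snapshot = (second, point difference).
outcomes = defaultdict(list)
for second, diff, home_won in history:
    outcomes[(second, diff)].append(home_won)

def home_win_probability(second, point_diff):
    """Share of historical games with the same snapshot that the home team won."""
    games = outcomes.get((second, point_diff), [])
    return sum(games) / len(games) if games else None   # None: no matching snapshot

print(home_win_probability(1500, 5))    # 2 of 3 matching games -> ~0.67
```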
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartsaklis, Christos; Hernandez, Oscar R
Interrogating the structure of a program for patterns of interest is attractive to the broader spectrum of software engineering. The very approach by which a pattern is constructed remains a concern for the source code mining community. This paper presents a pattern programming model, for the C and Fortran programming languages, using a compiler directives approach. We discuss our specification, called HERCULES/PL, throughout a number of examples and show how different patterns can be constructed, plus some preliminary results.
Asymptotic dynamics of the exceptional Bianchi cosmologies
NASA Astrophysics Data System (ADS)
Hewitt, C. G.; Horwood, J. T.; Wainwright, J.
2003-05-01
In this paper we give, for the first time, a qualitative description of the asymptotic dynamics of a class of non-tilted spatially homogeneous (SH) cosmologies, the so-called exceptional Bianchi cosmologies, which are of Bianchi type VI$_{-1/9}$. This class is of interest for two reasons. Firstly, it is generic within the class of non-tilted SH cosmologies, being of the same generality as the models of Bianchi types VIII and IX. Secondly, it is the SH limit of a generic class of spatially inhomogeneous $G_{2}$ cosmologies. Using the orthonormal frame formalism and Hubble-normalized variables, we show that the exceptional Bianchi cosmologies differ from the non-exceptional Bianchi cosmologies of type VI$_{h}$ in two significant ways. Firstly, the models exhibit an oscillatory approach to the initial singularity and hence are not asymptotically self-similar. Secondly, at late times, although the models are asymptotically self-similar, the future attractor for the vacuum-dominated models is the so-called Robinson-Trautman SH model instead of the vacuum SH plane wave models.
Towards Accurate Node-Based Detection of P2P Botnets
2014-01-01
Botnets are a serious security threat to the current Internet infrastructure. In this paper, we propose a novel direction for P2P botnet detection called node-based detection. This approach focuses on the network characteristics of individual nodes. Based on our model, we examine node's flows and extract the useful features over a given time period. We have tested our approach on real-life data sets and achieved detection rates of 99-100% and low false positives rates of 0–2%. Comparison with other similar approaches on the same data sets shows that our approach outperforms the existing approaches. PMID:25089287
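The abstract does not list the extracted features, so the sketch below only illustrates the node-based idea: aggregate each node's flows over a time window into per-node feature vectors, which a trained classifier would then score (the specific features here are assumptions, not the paper's feature set).

```python
from collections import defaultdict

# Flow records observed in one time window: (src, dst, bytes, packets)
flows = [
    ("10.0.0.5", "10.0.0.9", 1200, 10),
    ("10.0.0.5", "10.0.0.7", 900, 8),
    ("10.0.0.9", "10.0.0.5", 300, 4),
]

def node_features(flow_records):
    """Aggregate per-node features over the window: flow count, distinct peers,
    and total bytes sent (illustrative features only)."""
    feats = defaultdict(lambda: {"flows": 0, "peers": set(), "bytes": 0})
    for src, dst, nbytes, _ in flow_records:
        feats[src]["flows"] += 1
        feats[src]["peers"].add(dst)
        feats[src]["bytes"] += nbytes
    return {node: (f["flows"], len(f["peers"]), f["bytes"]) for node, f in feats.items()}

# These per-node feature vectors would then be fed to a trained classifier
# that labels each node as botnet-like or benign.
print(node_features(flows))
```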
Matrosova, Vera A; Blumstein, Daniel T; Volodin, Ilya A; Volodina, Elena V
2011-03-01
In addition to encoding referential information and information about the sender's motivation, mammalian alarm calls may encode information about other attributes of the sender, providing the potential for recognition among kin, mates, and neighbors. Here, we examined 96 speckled ground squirrels (Spermophilus suslicus), 100 yellow ground squirrels (Spermophilus fulvus) and 85 yellow-bellied marmots (Marmota flaviventris) to determine whether their alarm calls differed between species in their ability to encode information about the caller's sex, age, and identity. Alarm calls were elicited by approaching individually identified animals in live-traps. We assume this experimental design modeled a naturally occurring predatory event, when receivers should acquire information about attributes of a caller from a single bout of alarm calls. In each species, variation that allows identification of the caller's identity was greater than variation allowing identification of age or sex. We discuss these results in relation to each species' biology and sociality.
NASA Astrophysics Data System (ADS)
Zhou, H.; Luo, Z.; Li, Q.; Zhong, B.
2016-12-01
The monthly gravity field model can be used to compute the information about the mass variation within the system Earth, i.e., the relationship between mass variation in the oceans, land hydrology, and ice sheets. For more than ten years, GRACE has provided valuable information for recovering monthly gravity field model. In this study, a new time series of GRACE monthly solution, which is truncated to degree and order 60, is computed by the modified dynamic approach. Compared with the traditional dynamic approach, the major difference of our modified approach is the way to process the nuisance parameters. This type of parameters is mainly used to absorb low-frequency errors in KBRR data. One way is to remove the nuisance parameters before estimating the geo-potential coefficients, called Pure Predetermined Strategy (PPS). The other way is to determine the nuisance parameters and geo-potential coefficients simultaneously, called Pure Simultaneous Strategy (PSS). It is convenient to detect the gross error by PPS, while there is also obvious signal loss compared with the solutions derived from PSS. After comparing the difference of practical calculation formulas between PPS and PSS, we create the Filter Predetermine Strategy (FPS), which can combine the advantages of PPS and PSS efficiently. With FPS, a new monthly gravity field model entitled HUST-Grace2016s is developed. The comparisons of geoid degree powers and mass change signals in the Amazon basin, the Greenland and the Antarctic demonstrate that our model is comparable with the other published models, e.g., the CSR RL05, JPL RL05 and GFZ RL05 models. Acknowledgements: This work is supported by China Postdoctoral Science Foundation (Grant No.2016M592337), the National Natural Science Foundation of China (Grant Nos. 41131067, 41504014), the Open Research Fund Program of the State Key Laboratory of Geodesy and Earth's Dynamics (Grant No. SKLGED2015-1-3-E).
Komemushi, Atsushi; Suzuki, Satoshi; Sano, Akira; Kanno, Shohei; Kariya, Shuji; Nakatani, Miyuki; Yoshida, Rie; Kono, Yumiko; Ikeda, Koshi; Utsunomiya, Keita; Harima, Yoko; Komemushi, Sadao; Tanigawa, Noboru
2014-08-01
To compare radiation exposure of nurses when performing nursing tasks associated with interventional procedures depending on whether or not the nurses called out to the operator before approaching the patient. In a prospective study, 93 interventional radiology procedures were randomly divided into a call group and a no-call group; there were 50 procedures in the call group and 43 procedures in the no-call group. Two monitoring badges were used to calculate effective dose of nurses. In the call group, the nurse first told the operator she was going to approach the patient each time she was about to do so. In the no-call group, the nurse did not say anything to the operator when she was about to approach the patient. In all the nursing tasks, the equivalent dose at the umbilical level inside the lead apron was below the detectable limit. The equivalent dose at the sternal level outside the lead apron was 0.16 μSv ± 0.41 per procedure in the call group and 0.51 μSv ± 1.17 per procedure in the no-call group. The effective dose was 0.018 μSv ± 0.04 per procedure in the call group and 0.056 μSv ± 0.129 per procedure in the no-call group. The call group had a significantly lower radiation dose (P = .034). Radiation doses of nurses were lower in the group in which the nurse called to the operator before she approached the patient. Copyright © 2014 SIR. Published by Elsevier Inc. All rights reserved.
Strategic planning: a biomedical communications model.
Barrett, J E
1991-01-01
This article describes a biomedical communications approach to strategic planning. This model produces a short-term plan that allows a department to take the competitive advantage, react to technological change, and make timely decisions on new courses of action. The model calls for self-study, involving staff in brainstorming sessions where options are identified and ideas are prioritized into possible strategies for success. The article recommends that an evaluation and monitoring schedule be implemented after decisions have been made.
1990-11-01
\( (Q + aa')^{-1} = Q^{-1} - \dfrac{Q^{-1}aa'Q^{-1}}{1 + a'Q^{-1}a} \). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and ... 2. The First-Order Moving Average Model ... 3. Some Approaches to the Iterative ... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and ...
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion which works for single-accuracy experiments.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion which works for single-accuracy experiments.
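For comparison, the single-accuracy expected improvement criterion that EQI and EQIE are benchmarked against has the standard kriging form (hedged: this is the usual EI, not the authors' EQI/EQIE definitions): with Gaussian process posterior mean \(\mu(x)\), posterior standard deviation \(s(x)\), and current best observed value \(f_{\min}\),

\[
\mathrm{EI}(x) = \big(f_{\min} - \mu(x)\big)\,\Phi\!\left(\frac{f_{\min} - \mu(x)}{s(x)}\right) + s(x)\,\varphi\!\left(\frac{f_{\min} - \mu(x)}{s(x)}\right),
\]

where \(\Phi\) and \(\varphi\) are the standard normal distribution and density functions; the next run is placed at the maximizer of EI. EQI and EQIE extend this sequential scheme to experiments whose accuracy can be tuned, as described above.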
A unified approach to computer analysis and modeling of spacecraft environmental interactions
NASA Technical Reports Server (NTRS)
Katz, I.; Mandell, M. J.; Cassidy, J. J.
1986-01-01
A new, coordinated, unified approach to the development of spacecraft plasma interaction models is proposed. The objective is to eliminate the unnecessary duplicative work in order to allow researchers to concentrate on the scientific aspects. By streamlining the developmental process, the interchange between theories and experimentalists is enhanced, and the transfer of technology to the spacecraft engineering community is faster. This approach is called the UNIfied Spacecraft Interaction Model (UNISIM). UNISIM is a coordinated system of software, hardware, and specifications. It is a tool for modeling and analyzing spacecraft interactions. It will be used to design experiments, to interpret results of experiments, and to aid in future spacecraft design. It breaks a Spacecraft Ineraction analysis into several modules. Each module will perform an analysis for some physical process, using phenomenology and algorithms which are well documented and have been subject to review. This system and its characteristics are discussed.
1997-09-01
first PC-based, very large vocabulary dictation system with a continuous natural-language free-flow approach to speech recognition. (This system allows ... indicating the likelihood that a particular stored HMM reference model is the best match for the input. This approach is called the Baum-Welch ... InfoCentral, and Envoy 1.0; and Lotus Development Corp.’s SmartSuite 3, Approach 3.0, and Organizer. 2. IBM: At a press conference in New York in June 1997, IBM ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillingham, Kenneth; Bollinger, Bryan
This is the final report for a systematic, evidence-based project using an unprecedented series of large-scale field experiments to examine the effectiveness and cost-effectiveness of novel approaches to reduce the soft costs of solar residential photovoltaics. The approaches were based around grassroots marketing campaigns called ‘Solarize’ campaigns, that were designed to lower costs and increase adoption of solar technology. This study quantified the effectiveness and cost-effectiveness of the Solarize programs and tested new approaches to further improve the model.
Quick Prototyping of Educational Software: An Object-Oriented Approach.
ERIC Educational Resources Information Center
Wong, Simon C-H
1994-01-01
Introduces and demonstrates a quick-prototyping model for educational software development that can be used by teachers developing their own courseware using an object-oriented programming system. Development of a courseware package called "The Match-Maker" is explained as an example that uses HyperCard for quick prototyping. (Contains…
Enhancing Capacity to Improve Student Learning
ERIC Educational Resources Information Center
Mayotte, Gail; Wei, Dan; Lamphier, Sarah; Doyle, Thomas
2013-01-01
Professional development provides a means to build capacity among school personnel when it is delivered as part of a systematic, long-term approach to school and teacher improvement. This research examines a sustained, diocesan-wide professional development model, called the ACE Collaborative for Academic Excellence, that aims to build capacity…
A Domain Specific Modeling Approach for Coordinating User-Centric Communication Services
ERIC Educational Resources Information Center
Wu, Yali
2011-01-01
Rapid advances in electronic communication devices and technologies have resulted in a shift in the way communication applications are being developed. These new development strategies provide abstract views of the underlying communication technologies and lead to the so-called "user-centric communication applications." One user-centric…
Microskills Training: Evolution, Reexamination, and Call for Reform
ERIC Educational Resources Information Center
Ridley, Charles R.; Kelly, Shannon M.; Mollen, Debra
2011-01-01
For more than four decades, the microskills approach has been the dominant paradigm for training entry-level counseling students. At its inception, the model met a critical need: instruction in discrete counseling behaviors, which at the time was conspicuously missing from training curricula. Although these behaviors have become essential…
ERIC Educational Resources Information Center
Tracey, Monica W.; Hutchinson, Alisa; Grzebyk, Tamme Quinn
2014-01-01
As the design thinking approach becomes more established in the instructional design (ID) discourse, the field will have to reconsider the professional identity of instructional designers. Rather than passively following models or processes, a professional identity rooted in design thinking calls for instructional designers to be dynamic agents of…
Teaching and Learning with Flexible Hypermedia Learning Environments.
ERIC Educational Resources Information Center
Wedekind, Joachim; Lechner, Martin; Tergan, Sigmar-Olaf
This paper presents an approach for developing flexible Hypermedia Learning Environments (HMLE) and applies this theoretical framework to the creation of a layered model of a hypermedia system, called HyperDisc, developed at the German Institute for Research on Distance Education. The first section introduces HMLE and suggests that existing…
ERIC Educational Resources Information Center
Goslin, Khym G.
2012-01-01
The call for education to dramatically transform itself to meet the needs of the 21st century learner has required educational leaders at all levels to become conscious of the approaches that help guide and direct large-scale changes. Unfortunately, the role of the principal is so rooted in managerial tasks that leading transformational change…
EcSL: Teaching Economics as a Second Language.
ERIC Educational Resources Information Center
Crowe, Richard
Hazard Community College, in Kentucky, has implemented a new instructional methodology for economics courses called Economics as a Second Language (EcSL). This teaching approach, based on the theory of Rendigs Fel that the best model for learning economics is the foreign language classroom, utilizes strategies similar to those employed in…
ERIC Educational Resources Information Center
Webster, Collin Andrew; Beets, Michael; Weaver, Robert Glenn; Vazou, Spyridoula; Russ, Laura
2015-01-01
Recommended approaches to promoting children's physical activity through schools call for physical education teachers to serve as champions for, and leaders of, Comprehensive School Physical Activity Programs (CSPAPs). Little evidence, however, exists to suggest that physical education teachers are ideally prepared or supported to assume CSPAP…
Curriculum Development in History Using Systems Approach
ERIC Educational Resources Information Center
Acun, Ramazan
2011-01-01
This work provides a conceptual framework for developing coherent history curricula at university level. It can also be used for evaluating existing curricula in terms of coherence. For this purpose, two models that are closely inter-connected called History Education System (Tarih Egitim Sistemi or TES) and History Research System (Tarih…
Don't panic--prepare: towards crisis-aware models of emergency department operations.
Ceglowski, Red; Churilov, Leonid; Wasserheil, Jeff
2005-12-01
The existing models of Emergency Department (ED) operations that are based on the "flow-shop" management logic do not provide adequate decision support in dealing with the ED overcrowding crises. A conceptually different crisis-aware approach to ED modelling and operational decision support is introduced in this paper. It is based on Perrow's theory of "normal accidents" and calls for recognizing the inevitable nature of ED overcrowding crises within current health system setup. Managing the crisis before it happens--a standard approach in crisis management area--should become an integral part of ED operations management. The potential implications of adopting such a crisis-aware perspective for health services research and ED management are outlined.
NASA Astrophysics Data System (ADS)
Albano, Raffaele; Manfreda, Salvatore; Celano, Giuseppe
The paper introduces a minimalist water-driven crop model for sustainable irrigation management using an eco-hydrological approach. This model, called MY SIRR, uses a relatively small number of parameters and attempts to balance simplicity, accuracy, and robustness. MY SIRR is a quantitative tool to assess water requirements and agricultural production across different climates, soil types, crops, and irrigation strategies. The MY SIRR source code is published under a copyleft license. The FOSS approach could lower the financial barriers that smallholders, especially in developing countries, face in using tools for better decision-making on strategies for short- and long-term water resource management.
Cognitive architecture of perceptual organization: from neurons to gnosons.
van der Helm, Peter A
2012-02-01
What, if anything, is cognitive architecture and how is it implemented in neural architecture? Focusing on perceptual organization, this question is addressed by way of a pluralist approach which, supported by metatheoretical considerations, combines complementary insights from representational, connectionist, and dynamic systems approaches to cognition. This pluralist approach starts from a representationally inspired model which implements the intertwined but functionally distinguishable subprocesses of feedforward feature encoding, horizontal feature binding, and recurrent feature selection. As sustained by a review of neuroscientific evidence, these are the subprocesses that are believed to take place in the visual hierarchy in the brain. Furthermore, the model employs a special form of processing, called transparallel processing, whose neural signature is proposed to be gamma-band synchronization in transient horizontal neural assemblies. In neuroscience, such assemblies are believed to mediate binding of similar features. Their formal counterparts in the model are special input-dependent distributed representations, called hyperstrings, which allow many similar features to be processed in a transparallel fashion, that is, simultaneously as if only one feature were concerned. This form of processing does justice to both the high combinatorial capacity and the high speed of the perceptual organization process. A naturally following proposal is that those temporarily synchronized neural assemblies are "gnosons", that is, constituents of flexible self-organizing cognitive architecture in between the relatively rigid level of neurons and the still elusive level of consciousness.
Integrated System Modeling for Nuclear Thermal Propulsion (NTP)
NASA Technical Reports Server (NTRS)
Ryan, Stephen W.; Borowski, Stanley K.
2014-01-01
Nuclear thermal propulsion (NTP) has long been identified as a key enabling technology for space exploration beyond LEO. From Wernher Von Braun's early concepts for crewed missions to the Moon and Mars to the current Mars Design Reference Architecture (DRA) 5.0 and recent lunar and asteroid mission studies, the high thrust and specific impulse of NTP opens up possibilities such as reusability that are just not feasible with competing approaches. Although NTP technology was proven in the Rover / NERVA projects in the early days of the space program, an integrated spacecraft using NTP has never been developed. Such a spacecraft presents a challenging multidisciplinary systems integration problem. The disciplines that must come together include not only nuclear propulsion and power, but also thermal management, power, structures, orbital dynamics, etc. Some of this integration logic was incorporated into a vehicle sizing code developed at NASA's Glenn Research Center (GRC) in the early 1990s called MOMMA, and later into an Excel-based tool called SIZER. Recently, a team at GRC has developed an open source framework for solving Multidisciplinary Design, Analysis and Optimization (MDAO) problems called OpenMDAO. A modeling approach is presented that builds on previous work in NTP vehicle sizing and mission analysis by making use of the OpenMDAO framework to enable modular and reconfigurable representations of various NTP vehicle configurations and mission scenarios. This approach is currently applied to vehicle sizing, but is extensible to optimization of vehicle and mission designs. The key features of the code will be discussed and examples of NTP transfer vehicles and candidate missions will be presented.
2014-06-01
from the ODM standard. Leveraging SPARX EA’s Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL ... server MySQL creates the physical schema that enables a user to store and retrieve data conforming to the vocabulary of the JC3IEDM.
ERIC Educational Resources Information Center
Thurgood, Larry L.
2010-01-01
A mixed methods study examined how a newly developed campus-wide framework for learning and teaching, called the Learning Model, was accepted and embraced by faculty members at Brigham Young University-Idaho from September 2007 to January 2009. Data from two administrations of the Approaches to Teaching Inventory showed that (a) faculty members…
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Collins, Leslie M.
2016-05-01
Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently, an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM, and to guide research to yield the most performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected data, which demonstrates how the Q-MSM model can be used to evaluate performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.
Modeling continuous covariates with a "spike" at zero: Bivariate approaches.
Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi
2016-07-01
In epidemiology and clinical research, predictors often take the value zero for a large number of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed to obtain a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. The use of log-linear models for analyzing the correlation, in combination with the bivariate approaches, is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
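A minimal univariate sketch of the spike-at-zero idea (a binary indicator plus a transformed positive part entering one regression) is given below. The simulated data, the single log transform standing in for FP function selection, and the use of the statsmodels library are assumptions for illustration; the sketch does not reproduce the FP-spike selection procedure itself.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    x = np.where(rng.random(n) < 0.4, 0.0, rng.gamma(2.0, 5.0, n))  # exposure with a spike at zero
    lin = -1.0 + 0.8 * (x == 0) + 0.04 * x                          # assumed true effects
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))                 # binary outcome

    z = (x == 0).astype(float)             # binary spike indicator
    fp = np.log(np.where(x > 0, x, 1.0))   # one candidate FP transform of the positive part (0 at the spike)
    X = sm.add_constant(np.column_stack([z, fp]))

    fit = sm.Logit(y, X).fit(disp=0)
    print(fit.params)                      # [intercept, spike indicator, log(x) for x > 0]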
An evolutionary morphological approach for software development cost estimation.
Araújo, Ricardo de A; Oliveira, Adriano L I; Soares, Sergio; Meira, Silvio
2012-08-01
In this work we present an evolutionary morphological approach to solve the software development cost estimation (SDCE) problem. The proposed approach consists of a hybrid artificial neuron based on the framework of mathematical morphology (MM) with algebraic foundations in complete lattice theory (CLT), referred to as the dilation-erosion perceptron (DEP). We also present an evolutionary learning process, called DEP(MGA), that uses a modified genetic algorithm (MGA) to design the DEP model, because morphological operators are not differentiable in the usual way, which makes gradient estimation in the classical learning process of the DEP problematic. Furthermore, an experimental analysis is conducted with the proposed model using five complex SDCE problems and three well-known performance metrics, demonstrating good performance of the DEP model in solving SDCE problems. Copyright © 2012 Elsevier Ltd. All rights reserved.
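To make the dilation-erosion perceptron concrete, the short numpy sketch below evaluates a single DEP neuron as a convex combination of a morphological dilation and erosion of its input; the feature values, structuring elements, and mixing parameter are hand-picked for illustration rather than evolved with the MGA as in the paper.

    import numpy as np

    def dep_neuron(x, a, b, lam):
        """Dilation-erosion perceptron output: lam * dilation + (1 - lam) * erosion."""
        dilation = np.max(x + a)   # morphological dilation: max over translated inputs
        erosion = np.min(x + b)    # morphological erosion: min over translated inputs
        return lam * dilation + (1.0 - lam) * erosion

    x = np.array([0.2, 0.7, 0.4])        # e.g. normalized project features (assumed)
    a = np.array([0.1, -0.3, 0.0])       # dilation structuring element (weights)
    b = np.array([-0.2, 0.1, 0.3])       # erosion structuring element (weights)
    print(dep_neuron(x, a, b, lam=0.6))  # scalar cost estimate on the normalized scale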
Modeling of scale-dependent bacterial growth by chemical kinetics approach.
Martínez, Haydee; Sánchez, Joaquín; Cruz, José-Manuel; Ayala, Guadalupe; Rivera, Marco; Buhse, Thomas
2014-01-01
We applied the so-called chemical kinetics approach to complex bacterial growth patterns that were dependent on the liquid-surface-area-to-volume ratio (SA/V) of the bacterial cultures. The kinetic modeling was based on current experimental knowledge in terms of autocatalytic bacterial growth, its inhibition by the metabolite CO2, and the relief of inhibition through the physical escape of the inhibitor. The model quantitatively reproduces kinetic data of SA/V-dependent bacterial growth and can discriminate between differences in the growth dynamics of enteropathogenic E. coli, E. coli JM83, and Salmonella typhimurium on one hand and Vibrio cholerae on the other hand. Furthermore, the data fitting procedures allowed predictions about the velocities of the involved key processes and the potential behavior in an open-flow bacterial chemostat, revealing an oscillatory approach to the stationary states.
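A toy ODE version of the kinetic scheme described (autocatalytic growth, inhibition by CO2, and SA/V-dependent escape of the inhibitor) can be written in a few lines; the rate laws and constants below are illustrative assumptions, not the fitted values from the study.

    import numpy as np
    from scipy.integrate import solve_ivp

    def growth(t, y, k_grow, k_inhib, k_prod, k_escape, sa_over_v):
        b, c = y                                    # biomass and dissolved CO2
        db = k_grow * b - k_inhib * b * c           # autocatalytic growth minus inhibition
        dc = k_prod * b - k_escape * sa_over_v * c  # CO2 production and surface escape
        return [db, dc]

    for sa_over_v in (0.5, 2.0):                    # two surface-area-to-volume ratios
        sol = solve_ivp(growth, (0.0, 48.0), [1e-3, 0.0],
                        args=(0.5, 2.0, 0.3, 0.4, sa_over_v))
        print(sa_over_v, sol.y[0, -1])              # final biomass depends on SA/V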
Gutnik, Lily A; Hakimzada, A Forogh; Yoskowitz, Nicole A; Patel, Vimla L
2006-12-01
Models of decision-making usually focus on cognitive, situational, and socio-cultural variables in accounting for human performance. However, the emotional component is rarely addressed within these models. This paper reviews evidence for the emotional aspect of decision-making and its role within a new framework of investigation, called neuroeconomics. The new approach aims to build a comprehensive theory of decision-making, through the unification of theories and methods from economics, psychology, and neuroscience. In this paper, we review these integrative research methods and their applications to issues of public health, with illustrative examples from our research on young adults' safe sex practices. This approach promises to be valuable as a comprehensively descriptive and possibly, better predictive model for construction and customization of decision support tools for health professionals and consumers.
The stochastic system approach for estimating dynamic treatments effect.
Commenges, Daniel; Gégout-Petit, Anne
2015-10-01
The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper; we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows biological knowledge to be incorporated naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.
Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.
Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang
2015-01-01
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
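The building block of such a multimodal DBN is a restricted Boltzmann machine trained with contrastive divergence; the numpy sketch below performs CD-1 updates on a single Bernoulli RBM layer with made-up dimensions and random data, not the full multimodal architecture of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 20, 8, 0.05
    W = 0.01 * rng.standard_normal((n_vis, n_hid))
    b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0):
        """One contrastive-divergence (CD-1) step for a Bernoulli RBM on a batch v0."""
        global W, b_v, b_h
        ph0 = sigmoid(v0 @ W + b_h)                       # P(h = 1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
        pv1 = sigmoid(h0 @ W.T + b_v)                     # reconstruct visible units
        ph1 = sigmoid(pv1 @ W + b_h)                      # hidden probabilities on the reconstruction
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)    # positive minus negative phase
        b_v += lr * (v0 - pv1).mean(axis=0)
        b_h += lr * (ph0 - ph1).mean(axis=0)

    data = (rng.random((100, n_vis)) < 0.3).astype(float)  # stand-in for one genomic modality
    for _ in range(50):
        cd1_update(data)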
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true one. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even though the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computation of the posterior probability distribution of the target quantities and quantification of the uncertainty in the model prediction. Bayesian updating combines advantages of both Monte Carlo (MC) and non-MC approaches: like MC methods, it computes the posterior probability distribution of the target quantities directly, and like non-MC methods, it has computational times on the order of seconds.
A model for cancer tissue heterogeneity.
Mohanty, Anwoy Kumar; Datta, Aniruddha; Venkatraj, Vijayanagaram
2014-03-01
An important problem in the study of cancer is the understanding of the heterogeneous nature of the cell population. The clonal evolution of the tumor cells results in the tumors being composed of multiple subpopulations. Each subpopulation reacts differently to any given therapy. This calls for the development of novel (regulatory network) models, which can accommodate heterogeneity in cancerous tissues. In this paper, we present a new approach to model heterogeneity in cancer. We model heterogeneity as an ensemble of deterministic Boolean networks based on prior pathway knowledge. We develop the model considering the use of qPCR data. By observing gene expressions when the tissue is subjected to various stimuli, the compositional breakup of the tissue under study can be determined. We demonstrate the viability of this approach by using our model on synthetic data, and real-world data collected from fibroblasts.
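The compositional idea lends itself to a small sketch: if each assumed subpopulation (Boolean network) has a known expected response to a panel of stimulus-gene readouts, the tissue's measured responses can be decomposed into subpopulation fractions by non-negative least squares. The response matrix, true fractions, and noise level below are fabricated for illustration and are not data from the paper.

    import numpy as np
    from scipy.optimize import nnls

    # Rows: (stimulus, gene) readouts; columns: assumed subpopulations.
    # Entries are the 0/1 expression each subpopulation's Boolean network predicts.
    R = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1],
                  [1, 1, 1]], dtype=float)

    true_frac = np.array([0.5, 0.3, 0.2])
    rng = np.random.default_rng(2)
    measured = R @ true_frac + 0.02 * rng.standard_normal(5)  # qPCR-like tissue readout

    frac, _ = nnls(R, measured)   # non-negative mixture weights
    frac /= frac.sum()            # normalize to a compositional breakup
    print(frac)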
The Buffer Diagnostic Prototype: A fault isolation application using CLIPS
NASA Technical Reports Server (NTRS)
Porter, Ken
1994-01-01
This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, combined with its lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.
Murphy, G C; Foreman, P
1993-03-01
Calls for rehabilitation counselors to learn more about the world of work have recently been repeated. The validity of these calls is suggested by a group of studies which indicate that the rehabilitation counseling literature has an established emphasis on matters of counseling and adjustment rather than on matters related to behavior in organizations. A survey of rehabilitation counselors' beliefs about key topics in organizational behavior indicates that their beliefs are often discrepant with those of practicing managers and supervisors. A summary of dominant models of work motivation adopted by managerial workers is presented and some implications for occupational rehabilitation practice are identified. Finally, some contemporary literature relevant to managerial approaches to employee motivation is identified, and it is suggested that familiarity with this literature could help rehabilitation practitioners move from a narrow occupational rehabilitation role to a broader involvement in organizational life via the expansion of the disability management approach in work organizations.
Combining Domain-driven Design and Mashups for Service Development
NASA Astrophysics Data System (ADS)
Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni
This chapter presents the Romulus project approach to Service Development using Java-based web technologies. Romulus aims at improving productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates the application generation from the domain model. This metaframework follows an object centric approach, and complements Domain Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario of the Mobile Phone Service Portability case study for the development of a new service.
Identification and Control of Aircrafts using Multiple Models and Adaptive Critics
NASA Technical Reports Server (NTRS)
Principe, Jose C.
2007-01-01
We compared two possible implementations of local linear models for control: one approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. The gating function is therefore hard (a single local model represents the regional dynamics). This simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate within a probabilistic framework based on a Gaussian mixture model (also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise rejection characteristics. Our experiments showed that the SOM provides the best overall performance at high SNRs, but its performance degrades faster than that of the GMM under the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of the parameters. Either one of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN). Therefore, in essence, the local model approach warrants practical implementation. In order to call the attention of the control community to this design methodology we successfully extended the multiple model approach to PID controllers (still today the most widely used control scheme in industry), and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space, where the desired response is optimally projected. This architecture gains training efficiency at the cost of a large increase in the dimension of the recurrent layer. However, the power of recurrent neural networks can be brought to bear on difficult practical problems. Our goal was to implement an adaptive critic architecture implementing Bellman's approach to optimal control. However, we could only characterize the ESN performance as a critic in value function evaluation, which is just one of the pieces of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.
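A compact sketch of the soft-gated variant discussed, a Gaussian mixture gating a set of local linear models, is given below; the one-dimensional toy plant, the number of mixture components, and the use of scikit-learn are illustrative assumptions rather than the aircraft identification setup of the report.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    x = rng.uniform(-3.0, 3.0, (2000, 1))
    y = np.sin(x).ravel() + 0.05 * rng.standard_normal(2000)   # nonlinear mapping to approximate

    gmm = GaussianMixture(n_components=5, random_state=0).fit(x)
    resp = gmm.predict_proba(x)                                 # soft gate: responsibilities
    experts = [LinearRegression().fit(x, y, sample_weight=resp[:, k]) for k in range(5)]

    def predict(x_new):
        """Responsibility-weighted combination of the local linear predictions."""
        r = gmm.predict_proba(x_new)
        preds = np.column_stack([m.predict(x_new) for m in experts])
        return (r * preds).sum(axis=1)

    print(predict(np.array([[1.0], [-2.0]])))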
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is one basic means of landslide dynamic study and early warning monitoring and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslide and related geological engineering cases, both horizontal displacement and vertical displacement vary apparently and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor the deep horizontal displacement and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) has been proposed to quantitatively depict II-type sensors' mutual inductance properties with respect to predicted horizontal displacements and vertical displacements. After detailed examinations and comparative studies between measured mutual inductance voltage, NIELA-based mutual inductance and EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for characterization of II-type sensors. The NIELA model is widely applicable to II-type sensors monitoring all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
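As a crude illustration of the numerical-integration route to mutual inductance, the sketch below evaluates the Neumann double integral for two coaxial circular loops by brute-force quadrature; the loop radii and separations are arbitrary, and this is not the NIELA formulation itself, which models the sensor's specific coil geometry.

    import numpy as np

    MU0 = 4e-7 * np.pi

    def mutual_inductance(r1, r2, d, n=2000):
        """Neumann formula for two coaxial circular loops of radii r1, r2 and axial gap d."""
        phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # angle between current elements
        dphi = 2.0 * np.pi / n
        dist = np.sqrt(r1**2 + r2**2 + d**2 - 2.0 * r1 * r2 * np.cos(phi))
        # One angular integral remains after exploiting rotational symmetry.
        return 0.5 * MU0 * r1 * r2 * np.sum(np.cos(phi) / dist) * dphi

    for gap in (0.05, 0.10, 0.20):                      # axial separations in metres
        print(gap, mutual_inductance(0.10, 0.08, gap))  # inductance falls as the loops separate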
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity. © The Author(s) 2015.
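A toy microsimulation makes the semi-Markov point concrete: the time to the next transition depends on the time already spent in the current state (here via Weibull sojourn times), not only on the state itself. The states, sojourn parameters, transition probabilities, and costs are invented for illustration and are not the heart-failure model from the article.

    import numpy as np

    rng = np.random.default_rng(4)
    # States: 0 = stable, 1 = hospitalized, 2 = dead (absorbing).
    shape = {0: 1.3, 1: 0.8}              # Weibull shapes: hazards depend on time in state
    scale = {0: 24.0, 1: 2.0}             # sojourn scales in months
    next_state = {0: [1, 2], 1: [0, 2]}
    next_prob = {0: [0.8, 0.2], 1: [0.9, 0.1]}
    monthly_cost = {0: 200.0, 1: 5000.0}

    def simulate_patient(horizon=120.0):
        t, state, cost = 0.0, 0, 0.0
        while state != 2 and t < horizon:
            stay = scale[state] * rng.weibull(shape[state])   # draw time spent in current state
            stay = min(stay, horizon - t)
            cost += monthly_cost[state] * stay
            t += stay
            state = rng.choice(next_state[state], p=next_prob[state])
        return t, cost

    results = np.array([simulate_patient() for _ in range(5000)])
    print(results.mean(axis=0))            # mean follow-up time (months) and mean total cost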
NASA Astrophysics Data System (ADS)
Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.
2015-12-01
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
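A schematic numpy version of the described bookkeeping, in which frozen carbon does not decompose and thawed carbon decays in a few pools at temperature-sensitive rates, is given below; the pool sizes, turnover times, Q10 value, and forcing trajectory are placeholders, not PInc-PanTher parameters.

    import numpy as np

    pools = np.array([5.0, 30.0, 200.0])        # C stocks in fast / slow / passive pools (illustrative)
    k_ref = np.array([1/2.0, 1/30.0, 1/500.0])  # decay rates (1/yr) at a 5 degC reference
    q10 = 2.5

    def step(pools, soil_temp_c, thawed_fraction, dt=1.0):
        """One annual step: only the thawed fraction decomposes, at a Q10-scaled rate."""
        if soil_temp_c <= 0.0:
            return pools, 0.0                    # frozen soil: no decomposition at all
        rate = k_ref * q10 ** ((soil_temp_c - 5.0) / 10.0)
        loss = thawed_fraction * pools * (1.0 - np.exp(-rate * dt))
        return pools - loss, loss.sum()

    cumulative = 0.0
    for year in range(2010, 2101):
        temp = 2.0 + 0.04 * (year - 2010)        # made-up soil warming trajectory
        thaw = min(1.0, 0.01 * (year - 2010))    # made-up thawed fraction
        pools, released = step(pools, temp, thaw)
        cumulative += released
    print(cumulative)                             # total C released by 2100 under these assumptions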
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
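All three variants share the same underlying adaptive cosine estimator statistic, which is easy to sketch in numpy; the random background and target spectra below are stand-ins for real hyperspectral data, and the regularization term is an added assumption for numerical stability.

    import numpy as np

    def ace_scores(pixels, target, background):
        """ACE statistic: squared whitened cosine between each mean-removed pixel and the target."""
        mu = background.mean(axis=0)
        cov = np.cov(background, rowvar=False)
        cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse
        x = pixels - mu
        s = target - mu
        num = (x @ cov_inv @ s) ** 2
        den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', x, cov_inv, x)
        return num / den

    rng = np.random.default_rng(5)
    bkg = rng.standard_normal((500, 30))               # background pixels x spectral bands
    tgt = rng.standard_normal(30)                       # target spectrum (random stand-in)
    pix = np.vstack([bkg[:5] + 0.8 * tgt, bkg[5:10]])   # first five pixels contain the target
    print(ace_scores(pix, tgt, bkg).round(2))           # target-bearing pixels score higher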
Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.
2016-01-01
Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
Intrinsic ethics regarding integrated assessment models for climate management.
Schienke, Erich W; Baum, Seth D; Tuana, Nancy; Davis, Kenneth J; Keller, Klaus
2011-09-01
In this essay we develop and argue for the adoption of a more comprehensive model of research ethics than is included within current conceptions of responsible conduct of research (RCR). We argue that our model, which we label the ethical dimensions of scientific research (EDSR), is a more comprehensive approach to encouraging ethically responsible scientific research compared to the currently typically adopted approach in RCR training. This essay focuses on developing a pedagogical approach that enables scientists to better understand and appreciate one important component of this model, what we call intrinsic ethics. Intrinsic ethical issues arise when values and ethical assumptions are embedded within scientific findings and analytical methods. Through a close examination of a case study and its application in teaching, namely, evaluation of climate change integrated assessment models, this paper develops a method and case for including intrinsic ethics within research ethics training to provide scientists with a comprehensive understanding and appreciation of the critical role of values and ethical choices in the production of research outcomes.
A human operator simulator model of the NASA Terminal Configured Vehicle (TCV)
NASA Technical Reports Server (NTRS)
Glenn, F. A., III; Doane, S. M.
1981-01-01
A generic operator model called HOS was used to simulate the behavior and performance of a pilot flying a transport airplane during instrument approach and landing operations in order to demonstrate the applicability of the model to problems associated with interfacing a crew with a flight system. The model which was installed and operated on NASA Langley's central computing system is described. Preliminary results of its application to an investigation of an innovative display system under development in Langley's terminal configured vehicle program are considered.
Implications of Modeling Uncertainty for Water Quality Decision Making
NASA Astrophysics Data System (ADS)
Shabman, L.
2002-05-01
The National Academy of Sciences report, "Assessing the TMDL Approach to Water Quality Management," endorsed the watershed-based, ambient-water-quality-focused approach to water quality management called for in the TMDL program. The committee felt that available data and models were adequate to move such a program forward, if the EPA and all stakeholders better understood the nature of the scientific enterprise and its application to the TMDL program. Specifically, the report called for a greater acknowledgement of model prediction uncertainty in making and implementing TMDL plans. To assure that such uncertainty was addressed in water quality decision making, the committee called for a commitment to "adaptive implementation" of water quality management plans. The committee found that the number and complexity of the interactions of multiple stressors, combined with model prediction uncertainty, mean that we need to avoid the temptation to make assurances that specific actions will result in attainment of particular water quality standards. Until the work on solving a water quality problem begins, analysts and decision makers cannot be sure what the correct solutions are, or even what water quality goals a community should be seeking. In complex systems we need to act in order to learn; adaptive implementation is a concurrent process of action and learning. Learning requires (1) continued monitoring of the waterbody to determine how it responds to the actions taken and (2) carefully designed experiments in the watershed. If we do not design learning into what we attempt, we are not doing adaptive implementation. Therefore, there needs to be an increased commitment to monitoring and experiments in watersheds that will lead to learning. This presentation will 1) explain the logic for adaptive implementation; 2) discuss the ways that water quality modelers could characterize and explain model uncertainty to decision makers; and 3) speculate on the implications of adaptive implementation for the setting of water quality standards, the design of watershed monitoring programs, and the regulatory rules governing TMDL program implementation.
Understanding Teacher Users of a Digital Library Service: A Clustering Approach
ERIC Educational Resources Information Center
Xu, Beijie
2011-01-01
This research examined teachers' online behaviors while using a digital library service--the Instructional Architect (IA)--through three consecutive studies. In the first two studies, a statistical model called latent class analysis (LCA) was applied to cluster different groups of IA teachers according to their diverse online behaviors. The third…
How Teacher Education Can Make a Difference
ERIC Educational Resources Information Center
Korthagen, Fred A. J.
2010-01-01
Many studies reveal a huge gap between theory and practice in teacher education, leading to serious doubts concerning the effectiveness of teacher education. In this paper, the causes of the gap between theory and practice are analysed. On this basis, and grounded in a three-level model of teacher learning, the so-called "realistic approach" to…
Modelling Critical Thinking through Learning-Oriented Assessment
ERIC Educational Resources Information Center
Lombard, B. J. J.
2008-01-01
One of the cornerstones peculiar to the outcomes-based approach adopted by the South African education and training sector is the so-called "critical outcomes". Included in one of these outcomes is the ability to think critically. Although this outcome articulates well with the cognitive domain of holistic development, it also gives rise…
ERIC Educational Resources Information Center
Donnelly, Laura
2007-01-01
When teaching science to kids, a visual approach is good. Humor is also good. And blowing things up is really, really good. At least that is what educators at the Exploratorium in San Francisco have found in the nine years since the museum began producing a live, off-the-cuff competition called Iron Science Teacher. Modeled after the Japanese cult…
Using Tablet Technology for Personalising Learning
ERIC Educational Resources Information Center
Ryan, David
2016-01-01
This paper begins with examining the origins of Individual Educational Plans, before taking a critical approach to the concept, to highlight the shortcomings and flaws that can now be found with the concept. The call is made to move toward Personalised Planning models, which will have a greater impact on pupil outcomes, before reporting on how the…
Faculty Adaptation to an Experimental Curriculum.
ERIC Educational Resources Information Center
Moore-West, Maggi; And Others
The adjustment of medical school faculty members to a new curriculum, called problem-based learning, was studied. Nineteen faculty members who taught in both a lecture-based and tutorial program over 2 academic years were surveyed. Besides the teacher-centered approach, the other model of learning was student-centered and could be conducted in…
Multimedia Projects in Education: Designing, Producing, and Assessing.
ERIC Educational Resources Information Center
Ivers, Karen S.; Barron, Ann E.
A practical step-by-step approach to teaching multimedia skills is offered in this book. A model called "Decide, Design, Develop, and Evaluate" (DDDE) is presented which can be used as a template for designing, producing, and assessing multimedia projects in the classroom. The book covers all issues an educator is likely to face with…
A Core Journal Decision Model Based on Weighted Page Rank
ERIC Educational Resources Information Center
Wang, Hei-Chia; Chou, Ya-lin; Guo, Jiunn-Liang
2011-01-01
Purpose: The paper's aim is to propose a core journal decision method, called the local impact factor (LIF), which can evaluate the requirements of the local user community by combining both the access rate and the weighted impact factor, and by tracking citation information on the local users' articles. Design/methodology/approach: Many…
USDA-ARS?s Scientific Manuscript database
1. Resilience-based approaches are increasingly being called upon to inform ecosystem management, particularly in arid and semi-arid regions. This requires management frameworks that can assess ecosystem dynamics, both within and between alternative states, at relevant time scales. 2. We analysed l...
ERIC Educational Resources Information Center
Borge, Marcela; White, Barbara
2016-01-01
We proposed and evaluated an instructional framework for increasing students' ability to understand and regulate collaborative interactions called Co-Regulated Collaborative Learning (CRCL). In this instantiation of CRCL, models of collaborative competence were articulated through a set of socio-metacognitive roles. Our population consisted of 28…
Implementing a Project-Based Learning Model in a Pre-Service Leadership Program
ERIC Educational Resources Information Center
Albritton, Shelly; Stacks, Jamie
2016-01-01
This paper describes two instructors' efforts to more authentically engage students in a preservice leadership program's course called Program Planning and Evaluation by using a project-based learning approach. Markham, Larmer, and Ravitz (2003) describe project-based learning (PjBL) as "a systematic teaching method that engages students in…
Some New Theoretical Issues in Systems Thinking Relevant for Modelling Corporate Learning
ERIC Educational Resources Information Center
Minati, Gianfranco
2007-01-01
Purpose: The purpose of this paper is to describe fundamental concepts and theoretical challenges with regard to systems, and to build on these in proposing new theoretical frameworks relevant to learning, for example in so-called learning organizations. Design/methodology/approach: The paper focuses on some crucial fundamental aspects introduced…
Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.
Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A
2017-04-01
The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jung, Kirsten; Molinari, Jesús; Kalko, Elisabeth K V
2014-01-01
Phylogeny, ecology, and sensorial constraints are thought to be the most important factors influencing echolocation call design in bats. The Molossidae is a diverse bat family with a majority of species restricted to tropical and subtropical regions. Most molossids are specialized to forage for insects in open space, and thus share similar navigational challenges. We use an unprecedented dataset on the echolocation calls of 8 genera and 18 species of New World molossids to explore how habitat, phylogenetic relatedness, body mass, and prey perception contribute to echolocation call design. Our results confirm that, with the exception of the genus Molossops, echolocation calls of these bats show a typical design for open space foraging. Two lines of evidence point to echolocation call structure of molossids reflecting phylogenetic relatedness. First, such structure is significantly more similar within than among genera. Second, except for allometric scaling, such structure is nearly the same in congeneric species. Despite contrasting body masses, 12 of 18 species call within a relatively narrow frequency range of 20 to 35 kHz, a finding that we explain by using a modeling approach whose results suggest this frequency range to be an adaptation optimizing prey perception in open space. To conclude, we argue that the high variability in echolocation call design of molossids is an advanced evolutionary trait allowing the flexible adjustment of echolocation systems to various sensorial challenges, while conserving sender identity for social communication. Unraveling evolutionary drivers for echolocation call design in bats has so far been hampered by the lack of adequate model organisms sharing a phylogenetic origin and facing similar sensorial challenges. We thus believe that knowledge of the echolocation call diversity of New World molossid bats may prove to be a landmark for understanding the evolution and functionality of species-specific signal design in bats.
Tangprasertchai, Narin S; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S; Qin, Peter Z
2015-01-01
The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve "correct" all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. © 2015 Elsevier Inc. All rights reserved.
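The data-mining step described, retaining candidate models whose computed inter-label distances agree with the pulsed-EPR measurements, can be sketched in a few lines; the model ensemble, predicted distances, measured values, and tolerance below are fabricated placeholders rather than NASNOX output.

    import numpy as np

    rng = np.random.default_rng(6)
    n_models, n_pairs = 500, 4
    # Stand-in for computed inter-R5 distances (Angstroms) for each candidate all-atom model.
    predicted = rng.uniform(15.0, 45.0, (n_models, n_pairs))
    measured = np.array([22.0, 31.5, 27.0, 38.0])   # pulsed-EPR distance estimates (invented)
    tolerance = 3.0                                  # assumed experimental uncertainty

    keep = np.all(np.abs(predicted - measured) <= tolerance, axis=1)
    print(keep.sum(), "of", n_models, "models consistent with all measured distances")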
A distributed finite-element modeling and control approach for large flexible structures
NASA Technical Reports Server (NTRS)
Young, K. D.
1989-01-01
An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice which begins with a model of the open loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well developed Component Mode Synthesis (CMS) methods which were demonstrated to be effective for solving large complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.
Biological intuition in alignment-free methods: response to Posada.
Ragan, Mark A; Chan, Cheong Xin
2013-08-01
A recent editorial in Journal of Molecular Evolution highlights opportunities and challenges facing molecular evolution in the era of next-generation sequencing. Abundant sequence data should allow more-complex models to be fit at higher confidence, making phylogenetic inference more reliable and improving our understanding of evolution at the molecular level. However, concern that approaches based on multiple sequence alignment may be computationally infeasible for large datasets is driving the development of so-called alignment-free methods for sequence comparison and phylogenetic inference. The recent editorial characterized these approaches as model-free, not based on the concept of homology, and lacking in biological intuition. We argue here that alignment-free methods have not abandoned models or homology, and can be biologically intuitive.
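A minimal example of the kind of alignment-free comparison at issue is a k-mer count profile compared with a cosine distance; the sequences, the choice of k, and the distance measure are arbitrary illustrations, not the specific statistics used in that literature.

    from collections import Counter
    from itertools import product
    import math

    def kmer_profile(seq, k=3):
        """Count vector over all 4**k DNA k-mers."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        return [counts[''.join(p)] for p in product('ACGT', repeat=k)]

    def cosine_distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / norm

    s1 = "ATGGCGTACGTTAGCATCGATCGGATCC"
    s2 = "ATGGCGTACGTTAGCATCGATCGGTTCC"   # one substitution relative to s1
    s3 = "GGGTTTAAACCCGGGTTTAAACCCGGGA"
    print(cosine_distance(kmer_profile(s1), kmer_profile(s2)))  # small: similar sequences
    print(cosine_distance(kmer_profile(s1), kmer_profile(s3)))  # larger: dissimilar sequences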
Geometrical optics in the near field: local plane-interface approach with evanescent waves.
Bose, Gaurav; Hyvärinen, Heikki J; Tervo, Jani; Turunen, Jari
2015-01-12
We show that geometrical models may provide useful information on light propagation in wavelength-scale structures even if evanescent fields are present. We apply so-called local plane-wave and local plane-interface methods to study a geometry that resembles a scanning near-field microscope. We show that fair agreement between the geometrical approach and rigorous electromagnetic theory can be achieved in the case where evanescent waves are required to predict any transmission through the structure.
Proposing an Evidence-Based Strategy for Software Requirements Engineering.
Lindoerfer, Doris; Mansmann, Ulrich
2016-01-01
This paper discusses an evidence-based approach to software requirements engineering. The approach is called evidence-based, since it uses publications on the specific problem as a surrogate for stakeholder interests, to formulate risks and testing experiences. This complements the idea that agile software development models are more relevant, in which requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The strategy is exemplified and applied to the development of a Software Requirements list used to develop software systems for patient registries.
Healthcare delivery systems: designing quality into health information systems.
Joyce, Phil; Green, Rosamund; Winch, Graham
2007-01-01
To ensure that quality is 'engineered in', a holistic, integrated quality approach is required, and Total Quality Management (TQM) principles are the obvious foundation for this. This paper describes a novel approach to viewing the operations of a healthcare provider where electronic means could be used to distribute information (including electronic fund settlements), building around the Full Service Provider core. Specifically, an approach called the "triple pair flow" model is used to provide a view of healthcare delivery that is integrated, yet detailed, and that combines the strategic enterprise view with a business process view.
Agent-based modeling as a tool for program design and evaluation.
Lawlor, Jennifer A; McGirr, Sara
2017-12-01
Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. The article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool, as well as other approaches to simulation modeling, in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
Metainference: A Bayesian inference method for heterogeneous systems.
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.
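As a rough illustration of the replica-averaging idea behind metainference (this is a schematic toy, not the authors' implementation; the forward model, prior, and noise level are invented for the sketch), the following Python fragment samples a set of replicas whose average is restrained to a noisy experimental observation:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8                      # number of replicas
    d_obs, sigma = 1.0, 0.2    # hypothetical averaged measurement and its error

    def neg_log_post(x):
        """Prior on each replica plus a Gaussian restraint on the replica average."""
        prior = 0.5 * np.sum(x**2)                      # standard-normal prior per replica
        restraint = 0.5 * ((x.mean() - d_obs) / sigma) ** 2
        return prior + restraint

    x = np.zeros(N)
    samples = []
    for step in range(20000):                           # simple Metropolis sampling
        prop = x.copy()
        i = rng.integers(N)
        prop[i] += 0.3 * rng.normal()
        if rng.random() < np.exp(neg_log_post(x) - neg_log_post(prop)):
            x = prop
        samples.append(x.mean())

    print("posterior mean of the replica average:", np.mean(samples[5000:]))

The point of the sketch is only that the experimental datum restrains the average over replicas rather than any single replica, which is the replica-averaging ingredient the abstract refers to.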
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
He, Qiwei; Veldkamp, Bernard P; Glas, Cees A W; de Vries, Theo
2017-03-01
Patients' narratives about traumatic experiences and symptoms are useful in clinical screening and diagnostic procedures. In this study, we presented an automated assessment system to screen patients for posttraumatic stress disorder via a natural language processing and text-mining approach. Four machine-learning algorithms (decision tree, naive Bayes, support vector machine, and an alternative classification approach called the product score model) were used in combination with n-gram representation models to identify patterns between verbal features in self-narratives and psychiatric diagnoses. With our sample, the product score model with unigrams attained the highest prediction accuracy when compared with practitioners' diagnoses. The addition of multigrams contributed most to balancing the metrics of sensitivity and specificity. This article also demonstrates that text mining is a promising approach for analyzing patients' self-expression behavior, thus helping clinicians identify potential patients at an early stage.
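A minimal sketch of the general pipeline (n-gram features feeding a standard classifier; the product score model itself is not reproduced here, and the example narratives and labels are invented) might look as follows in Python with scikit-learn:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # hypothetical self-narratives and screening labels (1 = positive screen)
    texts = ["I still have nightmares about the crash",
             "I slept well and felt calm all week",
             "loud noises make my heart race and I avoid driving",
             "work was ordinary and I enjoyed the weekend"]
    labels = [1, 0, 1, 0]

    # unigrams + bigrams, in the spirit of the n-gram representation models above
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["I keep reliving the accident at night"]))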
From direct-space discrepancy functions to crystallographic least squares.
Giacovazzo, Carmelo
2015-01-01
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those required by alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
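The directional variogram underlying VARS can be illustrated with a toy two-parameter model (the test function, scales, and Monte Carlo estimator below are invented for the sketch; VARS-TOOL itself implements the sampling, IVARS integration, and bootstrapping far more carefully):

    import numpy as np

    def model(x1, x2):                      # toy response surface on [0,1]^2
        return np.sin(3 * x1) + 0.1 * x2

    def directional_variogram(f, dim, h, n=20000, seed=0):
        """0.5 * E[(f(x + h*e_dim) - f(x))^2], estimated by Monte Carlo."""
        rng = np.random.default_rng(seed)
        x = rng.random((n, 2)) * (1 - h)    # keep x + h inside the unit square
        xp = x.copy()
        xp[:, dim] += h
        return 0.5 * np.mean((f(xp[:, 0], xp[:, 1]) - f(x[:, 0], x[:, 1])) ** 2)

    for h in (0.05, 0.1, 0.3):
        g1 = directional_variogram(model, 0, h)
        g2 = directional_variogram(model, 1, h)
        print(f"h={h}: gamma_x1={g1:.4f}  gamma_x2={g2:.4f}")   # x1 dominates at all scales

Integrating such variograms over a range of scales h is what produces the IVARS metrics mentioned above.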
Transformer modeling for low- and mid-frequency electromagnetic transients simulation
NASA Astrophysics Data System (ADS)
Lambert, Mathieu
In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model, and the results show good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have an infinite permeability to calculate short-circuit inductances, a reasonable assumption since the core's magnetomotive force is negligible during a short-circuit, the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this dissertation, which removes the contributions of leakage fluxes to properly calculate the magnetizing branches of the new models. However, in the new analytical method for calculating short-circuit inductances (as with other analytical methods), eddy-current losses are neglected. Similarly, winding losses are omitted in the coupled leakage model and in the new analytical method that removes leakage fluxes to calculate core parameters from no-load tests. These losses will be taken into account in future work. Both transformer models presented in this dissertation are based on the classical hypothesis that flux can be discretized into flux tubes, which is also the assumption used in a category of models called topological models. Even though these models are physically based, there exist many topological models for a given transformer geometry. It is shown in this work that these differences can be explained in part through the concepts of divided and integral fluxes, and it is explained that the divided approach is the result of mathematical manipulations, while the integral approach is more "physically accurate". Furthermore, it is demonstrated, for the special case of a two-winding single-phase transformer, that the divided leakage inductances have to be nonlinear for both approaches to be equivalent. Even among models within the divided or the integral approach, there are differences, which arise from the particular choice of so-called "flux paths" (tubes).
This arbitrariness comes from the fact that with the classical hypothesis that magnetic flux can be confined into predefined flux tubes (leading to classical magnetic circuit theory), it is assumed that flux cannot leak from the sides of flux tubes. Therefore, depending on the transformer's operating conditions (degree of saturation, short-circuit, etc.), this can lead to different choices of flux tubes and different models. In this work, a new theoretical framework is developed to allow flux to leak from the sides of the tube, and it is generalized to include resistances and capacitances in what is called electromagnetic circuit theory. It is also explained that this theory is actually equivalent to what are called finite formulations (such as the finite element method), which bridges the gap between circuit theory and discrete electromagnetism. Therefore, this enables the development not only of topologically correct transformer models, where electric and magnetic circuits are defined on dual meshes, but also of rotating machine and transmission line models (wave propagation can be taken into account).
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.
A Framework for Distributed Problem Solving
NASA Astrophysics Data System (ADS)
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability to make associations among vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.
Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi
2010-07-01
This paper introduces an approach to perform segmentation of regions in computed tomography (CT) images that exhibit intra-region intensity variations and at the same time have similar intensity distributions with surrounding/adjacent regions. In this work, we adapt a feature computed from wavelet transform called wavelet energy to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions demonstrating the characteristics of intra-region intensity variations and having high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts and to provide a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Raksincharoensak, Pongsathorn; Khaisongkram, Wathanyoo; Nagai, Masao; Shimosaka, Masamichi; Mori, Taketoshi; Sato, Tomomasa
2010-12-01
This paper describes the modelling of naturalistic driving behaviour in real-world traffic scenarios, based on driving data collected via an experimental automobile equipped with a continuous sensing drive recorder. This paper focuses on the longitudinal driving situations which are classified into five categories - car following, braking, free following, decelerating and stopping - and are referred to as driving states. Here, the model is assumed to be represented by a state flow diagram. Statistical machine learning of the driver-vehicle-environment system model, based on the driving database, is conducted with a discriminative modelling approach called the boosting sequential labelling method.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community, called Model Driven Engineering (MDE), is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
Word of Mouth : An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, which is a stylized fact in financial markets but difficult to resolve with traditional models. Our model also provides a rigorous examination of the underreaction hypothesis to informational shocks.
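A deliberately stripped-down sketch of such an informational diffusion process is given below (the diffusion rule and price-impact rule are invented for illustration and are not the authors' specification); the gradual spread of the news through word of mouth is what produces the gradual, and hence partly predictable, price adjustment:

    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, p_talk, impact = 200, 0.1, 0.01
    informed = np.zeros(n_agents, dtype=bool)
    informed[0] = True                       # one agent receives a piece of news
    price, prices = 100.0, []

    for t in range(30):
        # word of mouth: chance of hearing the news grows with the informed pool
        newly = rng.random(n_agents) < (1 - (1 - p_talk) ** informed.sum())
        informed |= newly
        # informed agents trade on the news, pushing the price gradually
        price += impact * informed.sum()
        prices.append(price)

    print("fraction informed:", informed.mean(), "final price:", round(prices[-1], 2))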
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of multiple destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier, resulting in a considerable efficiency improvement in its international direct services, as summarized here.
Interpretable Deep Models for ICU Outcome Prediction
Che, Zhengping; Purushotham, Sanjay; Khemani, Robinder; Liu, Yan
2016-01-01
The exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from the intensive care unit (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-the-art performance. However, deep models lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this paper, we introduce a simple yet powerful knowledge-distillation approach called interpretable mimic learning, which uses gradient boosting trees to learn interpretable models while achieving prediction performance as strong as that of deep learning models. Experiment results on a pediatric ICU dataset for acute lung injury (ALI) show that our proposed method not only outperforms state-of-the-art approaches for mortality and ventilator-free-days prediction tasks but can also provide interpretable models to clinicians. PMID:28269832
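A minimal sketch of the knowledge-distillation idea (generic scikit-learn models on synthetic data; this is not the authors' pipeline or the ICU dataset): a neural network is trained first, and gradient boosting trees are then fitted to its soft predictions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    deep.fit(X_tr, y_tr)
    soft = deep.predict_proba(X_tr)[:, 1]              # soft labels from the "deep" model

    mimic = GradientBoostingRegressor(random_state=0)  # interpretable mimic model
    mimic.fit(X_tr, soft)                              # learn to imitate the deep model

    acc_deep = deep.score(X_te, y_te)
    acc_mimic = np.mean((mimic.predict(X_te) > 0.5) == y_te)
    print(f"deep accuracy: {acc_deep:.3f}  mimic accuracy: {acc_mimic:.3f}")
    print("top features by importance:", np.argsort(mimic.feature_importances_)[-5:])

The mimic model's feature importances (and partial dependence) are what give clinicians something inspectable, which is the interpretability argument made in the abstract.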
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Odhner, Dewey; Falcao, Alexandre X.; Ciesielski, Krzysztof C.; Miranda, Paulo A. V.; Vaideeswaran, Pavithra; Mishra, Shipra; Grevera, George J.; Saboury, Babak; Torigian, Drew A.
2011-03-01
To make Quantitative Radiology (QR) a reality in routine clinical practice, computerized automatic anatomy recognition (AAR) becomes essential. As part of this larger goal, we present in this paper a novel fuzzy strategy for building bodywide group-wise anatomic models. They have the potential to handle uncertainties and variability in anatomy naturally and to be integrated with the fuzzy connectedness framework for image segmentation. Our approach is to build a family of models, called the Virtual Quantitative Human, representing normal adult subjects at a chosen resolution of the population variables (gender, age). Models are represented hierarchically, the descendants representing organs contained in parent organs. Based on an index of fuzziness of the models, 32 thorax data sets, and 10 organs defined in them, we found that the hierarchical approach to modeling can effectively handle the non-linear relationships in position, scale, and orientation that exist among organs in different patients.
Social calls provide novel insights into the evolution of vocal learning
Sewall, Kendra B.; Young, Anna M.; Wright, Timothy F.
2016-01-01
Learned song is among the best-studied models of animal communication. In oscine songbirds, where learned song is most prevalent, it is used primarily for intrasexual selection and mate attraction. Learning of a different class of vocal signals, known as contact calls, is found in a diverse array of species, where they are used to mediate social interactions among individuals. We argue that call learning provides a taxonomically rich system for studying testable hypotheses for the evolutionary origins of vocal learning. We describe and critically evaluate four nonmutually exclusive hypotheses for the origin and current function of vocal learning of calls, which propose that call learning (1) improves auditory detection and recognition, (2) signals local knowledge, (3) signals group membership, or (4) allows for the encoding of more complex social information. We propose approaches to testing these four hypotheses but emphasize that all of them share the idea that social living, not sexual selection, is a central driver of vocal learning. Finally, we identify future areas for research on call learning that could provide new perspectives on the origins and mechanisms of vocal learning in both animals and humans. PMID:28163325
Agnihotri, Samira; Sundeep, P. V. D. S.; Seelamantula, Chandra Sekhar; Balakrishnan, Rohini
2014-01-01
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis in combination with various distance metrics to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls and we achieved a maximum accuracy of ninety five per cent only when we combined the results of both the methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude - filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential. PMID:24603717
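A rough sketch of the feature-extraction and distance step (using the librosa library; the file names, parameter choices, and the simple mean-pooled Euclidean distance are placeholders rather than the authors' exact procedure, which also used RASTA-filtered LPC features and several distance metrics):

    import numpy as np
    import librosa

    def call_features(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)            # one vector per call (mean over frames)

    # hypothetical recordings: a mimicked drongo call and two putative model calls
    drongo = call_features("drongo_mimic.wav")
    candidates = {"babbler": call_features("babbler_model.wav"),
                  "treepie": call_features("treepie_model.wav")}

    dists = {sp: np.linalg.norm(drongo - v) for sp, v in candidates.items()}
    print("closest putative model call:", min(dists, key=dists.get), dists)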
Motion control of planar parallel robot using the fuzzy descriptor system approach.
Vermeiren, Laurent; Dequidt, Antoine; Afroun, Mohamed; Guerra, Thierry-Marie
2012-09-01
This work presents the control of a two-degree of freedom parallel robot manipulator. A quasi-LPV approach, through the so-called TS fuzzy model and LMI constraints problems is used. Moreover, in this context a way to derive interesting control laws is to keep the descriptor form of the mechanical system. Therefore, new LMI problems have to be defined that helps to reduce the conservatism of the usual results. Some relaxations are also proposed to leave the pure quadratic stability/stabilization framework. A comparison study between the classical control strategies from robotics and the control design using TS fuzzy descriptor models is carried out to show the interest of the proposed approach. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a document network built from social links. We call this network, constructed from documents and the social information provided with each of them, the Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
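A small sketch of the graph-analysis step (using networkx; the toy edges stand in for social links between books and are invented for illustration):

    import networkx as nx

    # hypothetical directed graph of documents (books) linked by social information
    edges = [("book_A", "book_B"), ("book_A", "book_C"),
             ("book_B", "book_C"), ("book_D", "book_C"),
             ("book_C", "book_A")]
    dgd = nx.DiGraph()
    dgd.add_edges_from(edges)

    scores = nx.pagerank(dgd, alpha=0.85)
    reranked = sorted(scores, key=scores.get, reverse=True)
    print(reranked)   # PageRank scores can then be combined with retrieval-model scores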
Binbing Yu; Tiwari, Ram C; Feuer, Eric J
2011-06-01
Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches have been used to model competing-risk survival data: the cause-specific hazards approach and the mixture model approach. In this article, we first show the connection and the differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. To improve the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models. This is done using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
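A schematic of using a machine-learned surrogate inside the reliability computation (scikit-learn support vector regression on an invented one-dimensional relationship between entry speed and maximum LTR; the threshold, distributions, and coefficients are placeholders, not the paper's vehicle model):

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # invented training data: entry speed (m/s) -> maximum load transfer ratio (LTR)
    speed = rng.uniform(10, 30, 300).reshape(-1, 1)
    max_ltr = 0.03 * speed.ravel() - 0.1 + 0.02 * rng.normal(size=300)

    surrogate = SVR(C=10.0, epsilon=0.01).fit(speed, max_ltr)   # replaces the dynamics model

    # reliability-style estimate: P(max LTR > critical threshold) for an uncertain speed
    threshold = 0.6
    speed_samples = rng.normal(24, 2, 10000).reshape(-1, 1)
    p_rollover = np.mean(surrogate.predict(speed_samples) > threshold)
    print(f"estimated rollover probability before the bend: {p_rollover:.3f}")

Because the surrogate is cheap to evaluate, the exceedance probability (or an equivalent reliability index) can be recomputed in real time before each bend, which is the motivation stated in the abstract.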
ERIC Educational Resources Information Center
Faiola, Anthony; Matei, Sorin Adam
2010-01-01
The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…
ERIC Educational Resources Information Center
Kaufman, Peter A.; Melton, Horace L.; Varner, Iris I.; Hoelscher, Mark; Schmidt, Klaus; Spaulding, Aslihan D.
2011-01-01
Using an experiential learning model as a conceptual background, this article discusses characteristics and learning objectives for well-known foreign study programs such as study tours, study abroad, and internships and compares them with a less common overseas program called the "Global Marketing Program" (GMP). GMP involves…
Designing a Strategic Plan through an Emerging Knowledge Generation Process: The ATM Experience
ERIC Educational Resources Information Center
Zanotti, Francesco
2012-01-01
Purpose: The aim of this contribution is to describe a new methodology for designing strategic plans and how it was implemented by ATM, a public transportation agency based in Milan, Italy. Design/methodology/approach: This methodology is founded on a new system theory, called "quantum systemics". It is based on models and metaphors both…
ERIC Educational Resources Information Center
Kennedy, Michael J.; Thomas, Cathy Newman; Meyer, J. Patrick; Alves, Kat D.; Lloyd, John Wills
2014-01-01
Universal Design for Learning (UDL) is a framework that is commonly used for guiding the construction and delivery of instruction intended to support all students. In this study, we used a related model to guide creation of a multimedia-based instructional tool called content acquisition podcasts (CAPs). CAPs delivered vocabulary instruction…
ERIC Educational Resources Information Center
Dubowitz, Howard
2013-01-01
Child maltreatment affects millions of children each year. Health care providers are increasingly called upon to address such psychosocial problems facing many families. In this article, the authors describe a practical approach to further enhance pediatric primary care and make it more responsive to the needs of children and families. The Safe…
ERIC Educational Resources Information Center
Huang, Zuqing; Qiu, Robin G.
2016-01-01
University ranking or higher education assessment in general has been attracting more and more public attention over the years. However, the subjectivity-based evaluation index and indicator selections and weights that are widely adopted in most existing ranking systems have been called into question. In other words, the objectivity and…
Understanding Achievement Differences between Schools in Ireland--Can Existing Data-Sets Help?
ERIC Educational Resources Information Center
Gilleece, Lorraine
2014-01-01
Recent years have seen an increased focus on school accountability in Ireland and calls for greater use to be made of student achievement data for monitoring student outcomes. In this paper, it is argued that existing data-sets in Ireland offer limited potential for the value-added modelling approaches used for accountability purposes in many…
ERIC Educational Resources Information Center
Bailey, Judy; Taylor, Merilyn
2015-01-01
Learning to teach is a complex matter, and many different models of pre-service teacher education have been used to support novice teachers' preparation for the classroom. More recently there have been calls for a focus on core high-leverage teaching practices and for novice teachers to engage in representations, decompositions, and approximations…
Process Evaluation of an Integrated Health Promotion/Occupational Health Model in WellWorks-2
ERIC Educational Resources Information Center
Hunt, Mary Kay; Lederman, Ruth; Stoddard, Anne M.; LaMontagne, Anthony D.; McLellan, Deborah; Combe, Candace; Barbeau, Elizabeth; Sorensen, Glorian
2005-01-01
Disparities in chronic disease risk by occupation call for new approaches to health promotion. WellWorks-2 was a randomized, controlled study comparing the effectiveness of a health promotion/occupational health program (HP/OHS) with a standard intervention (HP). Interventions in both studies were based on the same theoretical foundations. Results…
The social and political lives of zoonotic disease models: narratives, science and policy.
Leach, Melissa; Scoones, Ian
2013-07-01
Zoonotic diseases currently pose both major health threats and complex scientific and policy challenges, to which modelling is increasingly called to respond. In this article we argue that the challenges are best met by combining multiple models and modelling approaches that elucidate the various epidemiological, ecological and social processes at work. These models should not be understood as neutral science informing policy in a linear manner, but as having social and political lives: social, cultural and political norms and values that shape their development and which they carry and project. We develop and illustrate this argument in relation to the cases of H5N1 avian influenza and Ebola, exploring for each the range of modelling approaches deployed and the ways they have been co-constructed with a particular politics of policy. Addressing the complex, uncertain dynamics of zoonotic disease requires such social and political lives to be made explicit in approaches that aim at triangulation rather than integration, and plural and conditional rather than singular forms of policy advice. Copyright © 2013 Elsevier Ltd. All rights reserved.
An approach to quantify the heat wave strength and price a heat derivative for risk hedging
NASA Astrophysics Data System (ADS)
Shen, Samuel S. P.; Kramps, Benedikt; Sun, Shirley X.; Bailey, Barbara
2012-01-01
Mitigating heat stress via a derivative policy is a vital financial option for agricultural producers and other business sectors to strategically adapt to climate change scenarios. This study provides an approach to identifying heat stress events and pricing a heat stress weather derivative for persistent days of high surface air temperature (SAT). Cooling degree days (CDD) are used as the weather index for trade. In this study, a call-option model was used as an example for calculating the price of the index. Two heat stress indices were developed to describe the severity and physical impact of heat waves. Daily Global Historical Climatology Network (GHCN-D) SAT data from 1901 to 2007 for southern California, USA, were used. A major California heat wave that occurred 20-25 October 1965 was studied. The derivative price was calculated based on the call-option model for both long-term station data and interpolated grid point data on a regular 0.1°×0.1° latitude-longitude grid. The resulting comparison indicates that (a) the interpolated data can be used as a reliable proxy to price the CDD and (b) a normal distribution model cannot always be used to reliably calculate the CDD price. In conclusion, the data, models, and procedures described in this study have potential application in hedging agricultural and other risks.
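The basic payoff arithmetic of a CDD call option can be sketched as follows (the strike, tick size, baseline temperature, and simulated temperature series are invented; the paper's actual pricing relies on the long-term station and gridded observations rather than this toy burn-style Monte Carlo):

    import numpy as np

    rng = np.random.default_rng(0)
    BASE_F = 65.0            # baseline temperature for cooling degree days
    STRIKE = 120.0           # CDD strike level
    TICK = 20.0              # dollars paid per CDD above the strike

    def cdd(daily_mean_temps_f):
        return np.sum(np.maximum(daily_mean_temps_f - BASE_F, 0.0))

    def call_payoff(cdd_index):
        return TICK * max(cdd_index - STRIKE, 0.0)

    # burn-style Monte Carlo: simulate a 30-day contract period many times
    payoffs = []
    for _ in range(20000):
        temps = rng.normal(70, 6, size=30)       # hypothetical daily means (deg F)
        payoffs.append(call_payoff(cdd(temps)))

    print("estimated option price (expected payoff):", round(np.mean(payoffs), 2))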
An application of prospect theory to a SHM-based decision problem
NASA Astrophysics Data System (ADS)
Bolognani, Denise; Verzobio, Andrea; Tonelli, Daniel; Cappello, Carlo; Glisic, Branko; Zonta, Daniele
2017-04-01
Decision making investigates choices that have uncertain consequences and that cannot be completely predicted. Rational behavior may be described by the so-called expected utility theory (EUT), whose aim is to help choose among several solutions so as to maximize the expectation of the consequences. However, Kahneman and Tversky developed an alternative model, called prospect theory (PT), showing that the basic axioms of EUT are violated in several instances. In contrast to EUT, PT takes into account irrational behaviors and heuristic biases. It suggests an alternative approach, in which probabilities are replaced by decision weights, which are strictly related to the decision maker's preferences and may change across individuals. In particular, people underestimate the utility of uncertain scenarios compared to outcomes obtained with certainty, and show inconsistent preferences when the same choice is presented in different forms. The goal of this paper is precisely to analyze a real case study involving a decision problem regarding the Streicker Bridge, a pedestrian bridge on the Princeton University campus. By modelling the manager of the bridge with EUT first, and with PT later, we verify the differences between the two approaches and investigate how sensitive the two models are to the unpacking of probabilities, which represents a common cognitive bias in irrational behaviors.
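For concreteness, the standard PT value and probability-weighting functions can be written down directly (parameter values are the commonly cited Tversky-Kahneman estimates; the gamble below is invented and has nothing to do with the bridge case study, and a single weighting function is used for gains and losses as a simplification):

    def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Value function: concave for gains, convex and steeper for losses."""
        return x**alpha if x >= 0 else -lam * (-x)**beta

    def pt_weight(p, gamma=0.61):
        """Inverse-S probability weighting: small probabilities are overweighted."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    # prospect value of a hypothetical gamble: win 100 with prob 0.1, lose 20 otherwise
    prospect = pt_weight(0.1) * pt_value(100) + pt_weight(0.9) * pt_value(-20)
    expected_value = 0.1 * 100 + 0.9 * (-20)     # risk-neutral benchmark
    print(round(prospect, 2), expected_value)

Replacing probabilities by decision weights and outcomes by this asymmetric value function is exactly the substitution, relative to EUT, that the abstract describes.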
Towards Agent-Oriented Approach to a Call Management System
NASA Astrophysics Data System (ADS)
Ashamalla, Amir Nabil; Beydoun, Ghassan; Low, Graham
There is more chance of a completed sale if the end customers and relationship managers are suitably matched. This in turn can reduce the number of calls made by a call centre reducing operational costs such as working time and phone bills. This chapter is part of ongoing research aimed at helping a CMC to make better use of its personnel and equipment while maximizing the value of the service it offers to its client companies and end customers. This is accomplished by ensuring the optimal use of resources with appropriate real-time scheduling and load balancing and matching the end customers to appropriate relationship managers. In a globalized market, this may mean taking into account the cultural environment of the customer, as well as the appropriate profile and/or skill of the relationship manager to communicate effectively with the end customer. The chapter evaluates the suitability of a MAS to a call management system and illustrates the requirement analysis phase using i* models.
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to a single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
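The model-based (residual) resampling idea can be illustrated with ordinary linear regression (plain numpy; this is a generic sketch with an invented design matrix, not the weighted tensor-fitting code used for DTI):

    import numpy as np

    rng = np.random.default_rng(0)

    # invented design matrix and noisy observations
    X = np.column_stack([np.ones(30), np.linspace(0, 1, 30)])
    beta_true = np.array([1.0, 2.0])
    y = X @ beta_true + 0.1 * rng.normal(size=30)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta_hat

    boot_betas = []
    for _ in range(2000):                        # residual bootstrap replicates
        y_star = X @ beta_hat + rng.choice(residuals, size=len(y), replace=True)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        boot_betas.append(b_star)

    se = np.std(boot_betas, axis=0, ddof=1)      # bootstrap standard errors
    print("slope estimate:", round(beta_hat[1], 3), "bootstrap SE:", round(se[1], 3))

Because resampling acts on the fitted residuals rather than on repeated acquisitions, a single acquisition is enough, which is the practical advantage highlighted in the abstract.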
Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach
NASA Technical Reports Server (NTRS)
Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.
1986-01-01
A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.
A Scalable Approach to Modeling Cascading Risk in the MDAP Network
2014-04-30
Topic-model quality can be measured using a factor called perplexity, a measure of a model's ability to infer the topics in a corpus (Asuncion, Welling, Smyth, & Teh, 2009, "On smoothing and inference for topic models").
Cellular Automata with Anticipation: Examples and Presumable Applications
NASA Astrophysics Data System (ADS)
Krushinsky, Dmitry; Makarenko, Alexander
2010-11-01
One of the most promising new methodologies for modelling is the so-called cellular automata (CA) approach. According to this paradigm, models are built from simple elements connected into regular structures with local interaction between neighbours. The patterns of connections usually have a simple geometry (lattices). As one of the classical examples of CA we mention the game `Life' by J. Conway. This paper presents two examples of CA with the anticipation property. These examples include a modification of the game `Life' and a cellular model of crowd movement.
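For reference, one synchronous update step of Conway's `Life' on a periodic lattice can be written compactly with numpy (this is the classical rule set, not the anticipatory modification proposed in the paper):

    import numpy as np

    def life_step(grid):
        """One update of Conway's Game of Life with periodic boundary conditions."""
        neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    for _ in range(4):
        glider = life_step(glider)
    print(glider)     # the glider has moved one cell diagonally after four steps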
Functional Risk Modeling for Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed
2010-01-01
We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.
ERIC Educational Resources Information Center
Levy, Mike
2015-01-01
The article considers the role of qualitative research methods in CALL through describing a series of examples. These examples are used to highlight the importance and value of qualitative data in relation to a specific research objective in CALL. The use of qualitative methods in conjunction with other approaches as in mixed method research…
Motivation and Self-Regulation in Addiction: A Call for Convergence
Köpetz, Cătălina E.; Lejuez, Carl W.; Wiers, Reinout W.; Kruglanski, Arie W.
2015-01-01
Addiction models have frequently invoked motivational mechanisms to explain the initiation and maintenance of addictive behaviors. However, in doing so, these models have emphasized the unique characteristics of addictive behaviors and overlooked the commonalities that they share with motivated behaviors in general. As a consequence, addiction research has failed to connect with and take advantage of promising and highly relevant advances in motivation and self-regulation research. The present article is a call for a convergence of the previous approaches to addictive behavior and the new advances in basic motivation and self-regulation. The authors emphasize the commonalities that addictive behaviors may share with motivated behavior in general. In addition, it is suggested that the same psychological principles underlying motivated action in general may apply to understand challenging aspects of the etiology and maintenance of addictive behaviors. PMID:26069472
Maruyama, Rika; Echigoya, Yusuke; Caluseriu, Oana; Aoki, Yoshitsugu; Takeda, Shin'ichi; Yokota, Toshifumi
2017-01-01
Exon-skipping therapy is an emerging approach that uses synthetic DNA-like molecules called antisense oligonucleotides (AONs) to splice out frame-disrupting parts of mRNA, restore the reading frame, and produce truncated yet functional proteins. Multiple exon skipping utilizing a cocktail of AONs can theoretically treat 80-90% of patients with Duchenne muscular dystrophy (DMD). The success of multiple exon skipping by the systemic delivery of a cocktail of AONs called phosphorodiamidate morpholino oligomers (PMOs) in a DMD dog model has made a significant impact on the development of therapeutics for DMD, leading to clinical trials of PMO-based drugs. Here, we describe the systemic delivery of a cocktail of PMOs to skip multiple exons in dystrophic dogs and the evaluation of the efficacies and toxicity in vivo.
Karski, Tomasz
2012-01-01
Observations from 1985-1995 and continuing to 2012 indicate that the development of so-called idiopathic scoliosis is connected with "gait" and with habitual, permanent "standing at ease" on the right leg. The scoliosis is "a result" of asymmetry of "function" - "changed" loading during gait and asymmetry in time during 'at ease' standing, more prevalent on the right leg. Every type of scoliosis is connected with an adequate "model of hips movements" [MHM] (Karski et al., 2006 [1]). This new classification clarifies the therapeutic approach to each type of scoliosis and provides the possibility of introducing causative prophylaxis.
The ABLe change framework: a conceptual and methodological tool for promoting systems change.
Foster-Fishman, Pennie G; Watson, Erin R
2012-06-01
This paper presents a new approach to the design and implementation of community change efforts like a System of Care. Called the ABLe Change Framework, the model provides simultaneous attention to the content and process of the work, ensuring effective implementation and the pursuit of systems change. Three key strategies are employed in this model to ensure the integration of content and process efforts and effective mobilization of broad scale systems change: Systemic Action Learning Teams, Simple Rules, and Small Wins. In this paper we describe the ABLe Change Framework and present a case study in which we successfully applied this approach to one system of care effort in Michigan.
Estimating daily climatologies for climate indices derived from climate model data and observations
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof
2015-01-01
Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds, and the method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; a statistical fitting approach; based on a perfect model approach. PMID:26042192
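To make the sampling issue concrete, here is a sketch comparing an empirical day-of-year percentile threshold with a smoothed, harmonic-fit estimate (synthetic temperatures and a simple low-order fit; the fitting approach in the paper is more elaborate and uses hindcast ensembles rather than this toy series):

    import numpy as np

    rng = np.random.default_rng(0)
    years, doy = 30, np.arange(365)

    # synthetic daily temperatures: annual cycle plus noise, 30 years
    clim = 10 + 10 * np.sin(2 * np.pi * (doy - 100) / 365)
    temps = clim + 3 * rng.normal(size=(years, 365))

    # empirical 90th-percentile threshold per calendar day (noisy: only 30 values/day)
    q90_emp = np.percentile(temps, 90, axis=0)

    # smoothed estimate: fit a low-order harmonic regression to the empirical quantiles
    t = 2 * np.pi * doy / 365
    H = np.column_stack([np.ones(365), np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
    coef, *_ = np.linalg.lstsq(H, q90_emp, rcond=None)
    q90_fit = H @ coef

    print("day-to-day std of empirical threshold:", round(np.std(np.diff(q90_emp)), 2))
    print("day-to-day std of fitted threshold:   ", round(np.std(np.diff(q90_fit)), 2))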
Neural network for control of rearrangeable Clos networks.
Park, Y K; Cherkassky, V
1994-09-01
Rapid evolution in the field of communication networks requires high speed switching technologies. This involves a high degree of parallelism in switching control and routing performed at the hardware level. The multistage crossbar networks have always been attractive to switch designers. In this paper a neural network approach to controlling a three-stage Clos network in real time is proposed. This controller provides optimal routing of communication traffic requests on a call-by-call basis by rearranging existing connections, with a minimum length of rearrangement sequence so that a new blocked call request can be accommodated. The proposed neural network controller uses Paull's rearrangement algorithm, along with the special (least used) switch selection rule in order to minimize the length of rearrangement sequences. The functional behavior of our model is verified by simulations and it is shown that the convergence time required for finding an optimal solution is constant, regardless of the switching network size. The performance is evaluated for random traffic with various traffic loads. Simulation results show that applying the least used switch selection rule increases the efficiency in switch rearrangements, reducing the network convergence time. The implementation aspects are also discussed to show the feasibility of the proposed approach.
Digression and Value Concatenation to Enable Privacy-Preserving Regression.
Li, Xiao-Bai; Sarkar, Sumit
2014-09-01
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate for coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than the user-defined generalization schemes commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
Reconstruction-Based Digital Dental Occlusion of the Partially Edentulous Dentition.
Zhang, Jian; Xia, James J; Li, Jianfu; Zhou, Xiaobo
2017-01-01
Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for the partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth existing on the left and right dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is performed by digital dental occlusion reestablishment algorithm with the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with the synthetic and real partially edentulous models.
NASA Astrophysics Data System (ADS)
Astuti Thamrin, Sri; Taufik, Irfan
2018-03-01
Dengue haemorrhagic fever (DHF) is an infectious disease caused by the dengue virus. The increasing number of people with DHF correlates with the neighbourhood, for example the sub-district, and the characteristics of a sub-district are formed by the individuals domiciled within it. Data containing individuals nested within sub-districts have a hierarchical structure, which calls for multilevel analysis. A frequently encountered response variable in such data is the time until an event occurs. Multilevel and spatial models are being increasingly used to obtain substantive information on area-level inequalities in DHF survival. Using a case study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival.
On the equivalence of the RTI and SVM approaches to time correlated analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, S.; Favalli, A.; Henzlova, D.
2014-11-21
Recently two papers on how to perform passive neutron auto-correlation analysis on time-gated histograms formed from pulse train data, generically called time correlation analysis (TCA), have appeared in this journal [1,2]. For those of us working in international nuclear safeguards these treatments are of particular interest because passive neutron multiplicity counting is a widely deployed technique for the quantification of plutonium. The purpose of this letter is to show that the skewness-variance-mean (SVM) approach developed in [1] is equivalent in terms of assay capability to the random trigger interval (RTI) analysis laid out in [2]. Mathematically we could also use other numerical ways to extract the time-correlated information from the histogram data, including for example what we might call the mean, mean square, and mean cube approach. The important feature however, from the perspective of real-world applications, is that the correlated information extracted is the same, and subsequently gets interpreted in the same way based on the same underlying physics model.
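For orientation, the first three central moments that the SVM approach works with can be computed directly from a time-gated multiplicity histogram (toy counts below; the physics interpretation step described in the letter, which turns these moments into assay quantities, is not reproduced here):

    import numpy as np

    # hypothetical histogram: number of gates in which k neutron counts were recorded
    k = np.arange(8)
    gates = np.array([5200, 3100, 1200, 350, 100, 30, 15, 5], dtype=float)

    p = gates / gates.sum()                 # probability of observing k counts in a gate
    mean = np.sum(k * p)
    var = np.sum((k - mean) ** 2 * p)       # variance (second central moment)
    skew = np.sum((k - mean) ** 3 * p)      # third central moment (unnormalised skewness)

    print(f"mean={mean:.3f}  variance={var:.3f}  third central moment={skew:.3f}")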
A paradigm to guide health promotion into the 21st century: the integral idea whose time has come.
Lundy, Tam
2010-09-01
The field of health promotion and education is at a turning point as it steps up to address the interconnected challenges of health, equity and sustainable development. Professionals and policy makers recognize the need for an integrative thinking and practice approach to foster comprehensive and coherent action in each of these complex areas. An integrative approach to policy and practice builds bridges across disciplines and discourses, supporting our efforts to take important next steps to generate sustainability and health for all. Comprehensive and coherent practice requires comprehensive and coherent theory. This article offers a brief introduction to Ken Wilber's influential Integral model, inviting its consideration as a promising paradigmatic framework that can guide thinking, practice, research and evidence as health promotion and education enter a new era. Currently influencing thought and practice leaders in diverse disciplines and sectors, the Integral approach presents a practical response to the current call for cross-disciplinary collaboration to address health, equity and sustainability. In addition, it addresses the disciplinary call for evidence-based practice that is grounded in, and accountable to, robust theoretical foundations.
NASA Astrophysics Data System (ADS)
Wardono; Waluya, S. B.; Mariani, Scolastika; Candra D, S.
2016-02-01
This study aims to determine whether there are differences in mathematical literacy ability in the Change and Relationship content domain among class VII students of Junior High School 19, Semarang, taught with the Problem Based Learning (PBL) model with an Indonesian Realistic Mathematics Education approach (called Pendidikan Matematika Realistik Indonesia, or PMRI, in Indonesia) assisted by E-learning Edmodo, with PBL with a PMRI approach, and with expository teaching; to determine whether the group of students taught with the PBL model with a PMRI approach assisted by E-learning Edmodo improves its mathematics literacy; to determine whether the quality of learning with the PBL model with a PMRI approach assisted by E-learning Edmodo falls in a good category; and to describe the difficulties students experience when working on PISA-oriented mathematical literacy problems. This research is a mixed methods study. The population was seventh grade students of Junior High School 19, Semarang, Indonesia. The sample was selected by random sampling, yielding experimental class 1, experimental class 2, and a control class. Data were collected through documentation, tests and interviews. The results showed that the average mathematics literacy ability of students in the PBL group with a PMRI approach assisted by E-learning Edmodo was better than that of students in the PBL group with a PMRI approach, and better than that of students in the expository group; mathematics literacy ability in the class using the PBL model with a PMRI approach assisted by E-learning Edmodo increased, and its improvement was higher than in the class using the PBL model with a PMRI approach and higher than in the class using the expository model; and the quality of learning using the PBL model with a PMRI approach assisted by E-learning Edmodo was in the very good category.
Functional integral for non-Lagrangian systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
2010-02-01
A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call “stringy quantization,” is based solely on classical equations of motion and is free of any ambiguity arising from Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general A-power friction force -κq̇^A. Results for A=1 are compared with those obtained in the approaches by Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.
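For reference, the class of dissipative systems referred to above can be written in a standard form, assuming a particle of mass m moving in a potential V (the notation here is ours, not the paper's):

```latex
m\,\ddot{q} + \kappa\,\dot{q}^{A} + V'(q) = 0, \qquad A \ge 0,
```

with A = 1 recovering the linear (viscous) damping case that is compared against the Caldirola-Kanai, Bateman, and Kostin approaches.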
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion analysis based moving object detection from UAV aerial image is still an unsolved issue due to inconsideration of proper motion estimation. Existing moving object detection approaches from UAV aerial images did not deal with motion based pixel intensity measurement to detect moving object robustly. Besides current research on moving object detection from UAV aerial images mostly depends on either frame difference or segmentation approach separately. There are two main purposes for this research: firstly to develop a new motion model called DMM (dynamic motion model) and secondly to apply the proposed segmentation approach SUED (segmentation using edge based dilation) using frame difference embedded together with DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity to segment only specific area for moving object rather than searching the whole area of the frame using SUED. At each stage of the proposed scheme, experimental fusion of the DMM and SUED produces extracted moving objects faithfully. Experimental result reveals that the proposed DMM and SUED have successfully demonstrated the validity of the proposed methodology.
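A generic sketch of the frame-difference-plus-edge-based-dilation idea is given below; this is not the authors' DMM/SUED code, and the thresholds, edge detector and interface are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def sued_like_segmentation(prev_frame, frame, diff_thresh=25, dilation_iter=2):
    """Frame-difference segmentation with edge-based dilation on two
    2-D grayscale frames; returns labelled moving regions."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    motion_mask = diff > diff_thresh                 # candidate moving pixels
    sx = ndimage.sobel(frame.astype(float), axis=0)  # edge response of the current frame
    sy = ndimage.sobel(frame.astype(float), axis=1)
    edge_mag = np.hypot(sx, sy)
    edge_mask = edge_mag > edge_mag.mean()
    grown = ndimage.binary_dilation(motion_mask & edge_mask,
                                    iterations=dilation_iter)
    labels, num = ndimage.label(grown)               # connected moving regions
    return labels, num
```

In the paper the search window supplied by the DMM model restricts this kind of segmentation to the image area most likely to contain the moving object, rather than running it over the whole frame.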
The Empathic Operating System (emOS)
2016-06-15
The approach incorporates users' physiology with their contextual data, including phone calls, locations, and events, to help mitigate the effects of stress on daily life. This approach offers novel...
2013-09-30
...accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental... Analysis Update globally for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD ...short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis"...
The Effects of Reading Short Stories in Improving Foreign Language Writing Skills
ERIC Educational Resources Information Center
Bartan, Özgür Sen
2017-01-01
This study is an inquiry into the effects of reading short stories in improving foreign language writing skills through Read for Writing model, which is the adaptation of the approach called Talk for Writing (Corbett, 2013). It is a quasi-experimental 13-week field study which was implemented in a primary school. The purpose of this study is to…
ERIC Educational Resources Information Center
Bush, Michael D.
2010-01-01
The development of online learning materials is a complex and expensive process that can benefit from the application of consistent and organized principles of instructional design. This article discusses the development at Brigham Young University of the online portion of a one-semester course in Swahili using the ADDIE Model (Analysis, Design,…
ERIC Educational Resources Information Center
Somech, Anit; Oplatka, Izhar
2009-01-01
Purpose: The current literature's call for a more ecological approach to violence theory, research, and practice stimulated the current study. This model postulates that teachers' willingness to engage in behaviors intended to tackle violence in school as part of their in-role duties (role breadth) will affect school violence. Specifically, the…
ERIC Educational Resources Information Center
Schmidt, Jennifer A.; Rosenberg, Joshua M.; Beymer, Patrick N.
2018-01-01
Science education reform efforts in the Unites States call for a dramatic shift in the way students are expected to engage with scientific concepts, core ideas, and practices in the classroom. This new vision of science learning demands a more complex conceptual understanding of student engagement and research models that capture both the…
ERIC Educational Resources Information Center
Dameron, Merry Leigh
2016-01-01
Increasing demands upon the time of the professional school counselor combined with the call by the American School Counselor Association to provide direct services to students may lead many in the profession to wonder from what theoretical standpoint(s) they can best meet these lofty goals. I propose a two phase approach combining person-centered…
Motivating Learners in Open and Distance Learning: Do We Need a New Theory of Learner Support?
ERIC Educational Resources Information Center
Simpson, Ormond
2008-01-01
This paper calls for a new theory of learner support in distance learning based on recent findings in the fields of learning and motivational psychology. It surveys some current learning motivation theories and proposes that models drawn from the relatively new field of Positive Psychology, such as the "Strengths Approach", together with…
ERIC Educational Resources Information Center
Ponguta, Liliana Angelica; Rasheed, Muneera Abdul; Reyes, Chin Regina; Yousafzai, Aisha Khizar
2018-01-01
The international community has set forth global targets that include calls for universal access to high-quality early childhood care and education (ECCE), as indicated in the United Nations' Sustainable Development Goals. One major impediment to achieving this target is the lack of a skilled workforce. In this paper, we argue the case for…
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
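The core idea, representing the tumor as a mixture that may contain an extra somatic component on top of the normal sample, can be illustrated with a much-simplified likelihood comparison over a grid of continuous allele frequencies. This is a didactic sketch only; the function name, binomial read model and uniform priors are assumptions, and the actual Strelka model is considerably richer:

```python
import numpy as np
from scipy.stats import binom

def somatic_evidence(alt_n, depth_n, alt_t, depth_t,
                     f_grid=np.linspace(0.0, 1.0, 101)):
    """Log odds that tumor and normal allele frequencies differ,
    given alt/total read counts in each sample."""
    lk_n = binom.pmf(alt_n, depth_n, f_grid)   # likelihood of normal reads at each f
    lk_t = binom.pmf(alt_t, depth_t, f_grid)   # likelihood of tumor reads at each f
    # Shared-frequency (no somatic variant) model: same f in both samples
    lk_shared = np.mean(lk_n * lk_t)
    # Somatic model: tumor frequency allowed to differ from the normal's
    lk_somatic = np.mean(lk_n) * np.mean(lk_t)
    return np.log(lk_somatic + 1e-300) - np.log(lk_shared + 1e-300)
```

Because the candidate frequencies are continuous, no tumor purity estimate is needed: impurity simply shifts where the tumor likelihood mass sits on the grid.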
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second method is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
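Frequency-domain deconvolution of the tissue curve by the arterial input function is the baseline against which the new methods are compared. A generic sketch with a simple Tikhonov-style regularising filter, not the paper's AFF or ASSF filters, might look like this:

```python
import numpy as np

def fourier_deconvolve(tissue_ctc, aif, dt, reg=0.1):
    """Estimate the residue function IRF from a tissue concentration time
    curve and an arterial input function by regularised division in the
    Fourier domain; dt is the sampling interval in seconds."""
    n = len(tissue_ctc)
    C = np.fft.rfft(tissue_ctc, n)
    A = np.fft.rfft(aif, n) * dt            # convolution kernel in Fourier space
    eps = reg * np.max(np.abs(A))           # simple regularisation level
    irf = np.fft.irfft(C * np.conj(A) / (np.abs(A) ** 2 + eps ** 2), n)
    cbf = irf.max()                         # blood flow ~ peak of the residue function
    return irf, cbf
```

The analytical filters proposed in the paper play the role of eps here, but are derived rather than tuned by hand.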
A framework for the identification of reusable processes
NASA Astrophysics Data System (ADS)
de Vries, Marné; Gerber, Aurona; van der Merwe, Alta
2013-11-01
A significant challenge that faces IT management is that of aligning the IT infrastructure of an enterprise with its business goals and practices, also called business-IT alignment. A particular business-IT alignment approach, the foundation for execution approach, was well-accepted by practitioners due to a novel construct, called the operating model (OM). The OM supports business-IT alignment by directing the coherent and consistent design of business and IT components. Even though the OM is a popular construct, our previous research detected the need to enhance the OM, since the OM does not specify methods to identify opportunities for data sharing and process reuse in an enterprise. In this article, we address one of the identified deficiencies in the OM. We present a process reuse identification framework (PRIF) that could be used to enhance the OM in identifying process reuse opportunities in an enterprise. We applied design research to develop PRIF as an artefact, where the development process of PRIF was facilitated by means of the business-IT alignment model (BIAM). We demonstrate the use of the PRIF as well as report on the results of evaluating PRIF in terms of its usefulness and ease-of-use, using experimentation and a questionnaire.
Marine mammals' influence on ecosystem processes affecting fisheries in the Barents Sea is trivial.
Corkeron, Peter J
2009-04-23
Some interpretations of ecosystem-based fishery management include culling marine mammals as an integral component. The current Norwegian policy on marine mammal management is one example. Scientific support for this policy includes the Scenario Barents Sea (SBS) models. These modelled interactions between cod, Gadus morhua, herring, Clupea harengus, capelin, Mallotus villosus and northern minke whales, Balaenoptera acutorostrata. Adding harp seals Phoca groenlandica into this top-down modelling approach resulted in unrealistic model outputs. Another set of models of the Barents Sea fish-fisheries system focused on interactions within and between the three fish populations, fisheries and climate. These model key processes of the system successfully. Continuing calls to support the SBS models despite their failure suggest a belief that marine mammal predation must be a problem for fisheries. The best available scientific evidence provides no justification for marine mammal culls as a primary component of an ecosystem-based approach to managing the fisheries of the Barents Sea.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
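As a reminder of the underlying machinery, a minimal P-spline fit (a B-spline basis with a second-order difference penalty, solved as ridge regression) can be sketched as follows; the knot placement and penalty value are illustrative, and this is not the paper's functional-coefficient estimator:

```python
import numpy as np
from scipy.interpolate import splev

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """Penalized B-spline regression of y on x with smoothing parameter lam."""
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.concatenate(([inner[0]] * degree, inner, [inner[-1]] * degree))
    n_basis = len(t) - degree - 1
    # Evaluate each B-spline basis function on x via unit coefficient vectors
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        c = np.zeros(len(t))
        c[i] = 1.0
        B[:, i] = splev(x, (t, c, degree))
    D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-order difference penalty
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ coef, coef
```

Choosing lam by MCV, GCV, EBBS or REML, as discussed above, is what turns this fixed-lam sketch into a practical estimator.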
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by floods and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of their susceptibility, resulting from the large amount of energy available to form storms, and their high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracy was about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
Spatiotemporal access model based on reputation for the sensing layer of the IoT.
Guo, Yunchuan; Yin, Lihua; Li, Chao; Qian, Junyan
2014-01-01
Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints mean that traditional access control techniques can barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model.
Metainference: A Bayesian inference method for heterogeneous systems
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300
Network inference using informative priors
Mukherjee, Sach; Speed, Terence P.
2008-01-01
Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of “network inference” is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling. PMID:18799736
Network inference using informative priors.
Mukherjee, Sach; Speed, Terence P
2008-09-23
Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of "network inference" is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling.
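One simple way to encode such prior knowledge is to give every potential edge a prior confidence and score a candidate graph by its agreement with those confidences. The sketch below is a generic illustration of this idea, not necessarily the exact prior family used in the article; the strength parameter beta is an assumption:

```python
import numpy as np

def log_graph_prior(adj, edge_prior, beta=1.0):
    """Log prior score of a directed graph.

    adj        : 0/1 adjacency matrix of the candidate graph (adj[i, j] = 1
                 means an edge i -> j is present).
    edge_prior : matrix of prior confidences in [0, 1] that each edge exists.
    beta       : overall strength of the prior relative to the likelihood.
    """
    adj = np.asarray(adj, dtype=float)
    agreement = adj * edge_prior + (1 - adj) * (1 - edge_prior)
    return beta * np.sum(np.log(agreement + 1e-12))
```

Within an MCMC sampler over structures, this term is simply added to the log marginal likelihood of each visited graph, so informative edges, edge classes or sparsity preferences tilt the posterior without forbidding any structure outright.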
Schneider, Nick K; Sebrié, Ernesto M; Fernández, Esteve
2011-12-07
To demonstrate the tobacco industry rationale behind the "Spanish model" on non-smokers' protection in hospitality venues and the impact it had on some European and Latin American countries between 2006 and 2011. Tobacco industry documents research triangulated against news and media reports. As an alternative to the successful implementation of 100% smoke-free policies, several European and Latin American countries introduced partial smoking bans based on the so-called "Spanish model", a legal framework widely advocated by parts of the hospitality industry with striking similarities to "accommodation programmes" promoted by the tobacco industry in the late 1990s. These developments started with the implementation of the Spanish tobacco control law (Ley 28/2005) in 2006 and have increased since then. The Spanish experience demonstrates that partial smoking bans often resemble tobacco industry strategies and are used to spread a failed approach at the international level. Researchers, advocates and policy makers should be aware of this ineffective policy.
2011-01-01
Background To demonstrate the tobacco industry rationale behind the "Spanish model" on non-smokers' protection in hospitality venues and the impact it had on some European and Latin American countries between 2006 and 2011. Methods Tobacco industry documents research triangulated against news and media reports. Results As an alternative to the successful implementation of 100% smoke-free policies, several European and Latin American countries introduced partial smoking bans based on the so-called "Spanish model", a legal framework widely advocated by parts of the hospitality industry with striking similarities to "accommodation programmes" promoted by the tobacco industry in the late 1990s. These developments started with the implementation of the Spanish tobacco control law (Ley 28/2005) in 2006 and have increased since then. Conclusion The Spanish experience demonstrates that partial smoking bans often resemble tobacco industry strategies and are used to spread a failed approach at the international level. Researchers, advocates and policy makers should be aware of this ineffective policy. PMID:22151884
A deterministic compressive sensing model for bat biosonar.
Hague, David A; Buck, John R; Bilik, Igal
2012-12-01
The big brown bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects in clutter and noise. They resolve glints spaced down to 2 μs in time delay, which surpasses what traditional signal processing techniques can achieve using the same echolocation call. The Matched Filter (MF) attains 10-12 μs resolution, while the Inverse Filter (IF) achieves higher resolution at the cost of significantly degraded detection performance. Recent work by Fontaine and Peremans [J. Acoust. Soc. Am. 125, 3052-3059 (2009)] demonstrated that a sparse representation of bat echolocation calls coupled with a decimating sensing method facilitates distinguishing closely spaced objects over realistic SNRs. Their work raises the intriguing question of whether sensing approaches structured more like a mammalian auditory system contain the necessary information for the hyper-resolution observed in behavioral tests. This research estimates sparse echo signatures using a gammatone filterbank decimation sensing method which loosely models the processing of the bat's auditory system. The decimated filterbank outputs are processed with ℓ1 minimization. Simulations demonstrate that this model maintains higher resolution than the MF and significantly better detection performance than the IF for SNRs of 5-45 dB while undersampling the return signal by a factor of six.
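The ℓ1 minimization step applied to the decimated filterbank outputs can be illustrated with a generic iterative soft-thresholding (ISTA) routine; the sensing matrix A, the regularisation weight and the iteration count are placeholders, not the study's actual formulation:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft thresholding; x is the sparse echo signature, y the measurements."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```

Any convex ℓ1 solver would serve; the resolution claimed above comes from the sparsity assumption and the structure of the decimated gammatone measurements, not from the particular optimizer.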
Occupational voice demands and their impact on the call-centre industry.
Hazlett, D E; Duffy, O M; Moorhead, S A
2009-04-20
Within the last decade there has been a growth in the call-centre industry in the UK, with a growing awareness of the voice as an important tool for successful communication. Occupational voice problems such as occupational dysphonia, in a business which relies on healthy, effective voice as the primary professional communication tool, may threaten working ability and occupational health and safety of workers. While previous studies of telephone call-agents have reported a range of voice symptoms and functional vocal health problems, there have been no studies investigating the use and impact of vocal performance in the communication industry within the UK. This study aims to address a significant gap in the evidence-base of occupational health and safety research. The objectives of the study are: 1. to investigate the work context and vocal communication demands for call-agents; 2. to evaluate call-agents' vocal health, awareness and performance; and 3. to identify key risks and training needs for employees and employers within call-centres. This is an occupational epidemiological study, which plans to recruit call-centres throughout the UK and Ireland. Data collection will consist of three components: 1. interviews with managers from each participating call-centre to assess their communication and training needs; 2. an online biopsychosocial questionnaire will be administered to investigate the work environment and vocal demands of call-agents; and 3. voice acoustic measurements of a random sample of participants using the Multi-dimensional Voice Program (MDVP). Qualitative content analysis from the interviews will identify underlying themes and issues. A multivariate analysis approach will be adopted using Structural Equation Modelling (SEM), to develop voice measurement models in determining the construct validity of potential factors contributing to occupational dysphonia. Quantitative data will be analysed using SPSS version 15. Ethical approval is granted for this study from the School of Communication, University of Ulster. The results from this study will provide the missing element of voice-based evidence, by appraising the interactional dimensions of vocal health and communicative performance. This information will be used to inform training for call-agents and to contribute to health policies within the workplace, in order to enhance vocal health.
Beyond positivist ecology: toward an integrated ecological ethics.
Norton, Bryan G
2008-12-01
A post-positivist understanding of ecological science and the call for an "ecological ethic" indicate the need for a radically new approach to evaluating environmental change. The positivist view of science cannot capture the essence of environmental sciences because the recent work of "reflexive" ecological modelers shows that this requires a reconceptualization of the way in which values and ecological models interact in the scientific process. Reflexive modelers are ecological modelers who believe it is appropriate for ecologists to examine the motives for their choices in developing models; this self-reflexive approach opens the door to a new way of integrating values into public discourse and to a more comprehensive approach to evaluating ecological change. This reflexive building of ecological models is introduced through the transformative simile of Aldo Leopold, which shows that learning to "think like a mountain" involves a shift in both ecological modeling and in values and responsibility. An adequate, interdisciplinary approach to ecological valuation requires a re-framing of the evaluation questions in entirely new ways: a review of the current status of interdisciplinary value theory with respect to ecological values reveals that neither of the widely accepted theories of environmental value, economic utilitarianism or intrinsic value theory (environmental ethics), provides a foundation for an ecologically sensitive evaluation process. Thus, a new, ecologically sensitive, and more comprehensive approach to evaluating ecological change would include an examination of the metaphors that motivate the models used to describe environmental change.
Numerical modeling of axi-symmetrical cold forging process by "Pseudo Inverse Approach"
NASA Astrophysics Data System (ADS)
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimation, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach makes maximum use of the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimation because they neglect the loading history. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling, which keeps the IA's advantages but gives good stress estimation by taking the loading history into consideration. Our aim in this paper is to adapt the PIA for cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental algorithm for the plastic integration is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Confidential close call reporting system : preliminary evaluation findings.
DOT National Transportation Integrated Search
2008-12-01
The Federal Railroad Administration (FRA) is implementing a collaborative problem-solving approach to improving safety. The Confidential Close Call Reporting System (C3RS) is a human factors-based approach that is designed to reduce the accident rate...
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop application used to characterize spatial random fields from direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers to which serial or parallel jobs can be submitted, using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
Dagenais, Christian; Plouffe, Laurence; Gagné, Charles; Toulouse, Georges; Breault, Andrée-Anne; Dupont, Didier
2017-03-01
A knowledge transfer (KT) strategy was implemented by the IRSST, an occupational health and safety research institute established in Québec (Canada), to improve the prevention of psychological and musculoskeletal problems among 911 emergency call centre agents. An evaluability assessment was conducted in which each aspect of the KT approach was documented systematically to determine whether the strategy had the potential to be evaluated in terms of its impact on the targeted population. A review of the literature on KT in occupational health and safety and on the evaluation of such KT programmes, along with the development of a logic model based on documentary analysis and semi-structured interviews with key stakeholders, indicated that the KT strategy was likely to have had a positive impact in the 911 emergency call centre sector. Implications for future research are discussed.
The Modellers' Halting Foray into Ecological Theory: Or, What is This Thing Called 'Growth Rate'?
Deveau, Michael; Karsten, Richard; Teismann, Holger
2015-06-01
This discussion paper describes the attempt of an imagined group of non-ecologists ("Modellers") to determine the population growth rate from field data. The Modellers wrestle with the multiple definitions of the growth rate available in the literature and the fact that, in their modelling, it appears to be drastically model-dependent, which seems to throw into question the very concept itself. Specifically, they observe that six representative models used to capture the data produce growth-rate values, which differ significantly. Almost ready to concede that the problem they set for themselves is ill-posed, they arrive at an alternative point of view that not only preserves the identity of the concept of the growth rate, but also helps discriminate between competing models for capturing the data. This is accomplished by assessing how robustly a given model is able to generate growth-rate values from randomized time-series data. This leads to the proposal of an iterative approach to ecological modelling in which the definition of theoretical concepts (such as the growth rate) and model selection complement each other. The paper is based on high-quality field data of mites on apple trees and may be called a "data-driven opinion piece".
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
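A minimal sketch of Latin Hypercube Sampling over a box of parameter bounds, in the spirit of the LHS resampling step described above, is given below; the interface is an assumption, not the author's implementation:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw n_samples points from a Latin hypercube over the given box.
    bounds is a list of (low, high) pairs, one per parameter dimension."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    samples = np.empty((n_samples, d))
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    for j, (lo, hi) in enumerate(bounds):
        # one point per equal-probability stratum, shuffled across samples
        u = rng.uniform(edges[:-1], edges[1:])
        rng.shuffle(u)
        samples[:, j] = lo + u * (hi - lo)
    return samples
```

Because each dimension is stratified independently, the number of sample models stays fixed as parameter dimensions are added, which is the property exploited when GRAPE resamples around the current parameter estimate.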
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user- friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
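The quantization idea, emitting a state update only when the signal has moved by a full quantum since the last transmitted value, can be sketched as follows; the function and the (time, value) trajectory format are illustrative only:

```python
def quantized_updates(trajectory, quantum=0.5):
    """Filter a continuous trajectory, keeping only the samples at which
    the value has crossed a quantum boundary since the last update sent."""
    updates = []
    last_sent = None
    for t, v in trajectory:
        if last_sent is None or abs(v - last_sent) >= quantum:
            updates.append((t, v))      # a state update would be transmitted here
            last_sent = v
    return updates
```

The number of transmitted updates then scales with the range of motion divided by the quantum rather than with the simulation time step, which is the source of the message traffic reduction discussed above.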
Towards a self-consistent dynamical nuclear model
NASA Astrophysics Data System (ADS)
Roca-Maza, X.; Niu, Y. F.; Colò, G.; Bortignon, P. F.
2017-04-01
Density functional theory (DFT) is a powerful and accurate tool, exploited in nuclear physics to investigate the ground-state and some of the collective properties of nuclei along the whole nuclear chart. Models based on DFT are not, however, suitable for the description of single-particle dynamics in nuclei. Following the field theoretical approach by A Bohr and B R Mottelson to describe nuclear interactions between single-particle and vibrational degrees of freedom, we have taken important steps towards the building of a microscopic dynamic nuclear model. In connection with this, one important issue that needs to be better understood is the renormalization of the effective interaction in the particle-vibration approach. One possible way to renormalize the interaction is by the so-called subtraction method. In this contribution, we will implement the subtraction method in our model for the first time and study its consequences.
Modeling structural change in spatial system dynamics: A Daisyworld example.
Neuwirth, C; Peck, A; Simonović, S P
2015-03-01
System dynamics (SD) is an effective approach for helping reveal the temporal behavior of complex systems. Although there have been recent developments in expanding SD to include systems' spatial dependencies, most applications have been restricted to the simulation of diffusion processes; this is especially true for models on structural change (e.g. LULC modeling). To address this shortcoming, a Python program is proposed to tightly couple SD software to a Geographic Information System (GIS). The approach provides the required capacities for handling bidirectional and synchronized interactions of operations between SD and GIS. In order to illustrate the concept and the techniques proposed for simulating structural changes, a fictitious environment called Daisyworld has been recreated in a spatial system dynamics (SSD) environment. The comparison of spatial and non-spatial simulations emphasizes the importance of considering spatio-temporal feedbacks. Finally, practical applications of structural change models in agriculture and disaster management are proposed.
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
NASA Astrophysics Data System (ADS)
Altsybeyev, V. V.; Ponomarev, V. A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for calculating the space-charge-limited current density distribution within this method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations of different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
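Emission models of this kind are commonly checked against the analytic space-charge limit of an ideal planar diode (the Child-Langmuir law). A minimal reference implementation of that limit, not the authors' Gauss's-law algorithm, is:

```python
import numpy as np

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
Q_E = 1.602176634e-19       # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge limited current density of an ideal planar diode
    (Child-Langmuir law): voltage in volts, gap in metres, result in A/m^2."""
    return (4.0 * EPS0 / 9.0) * np.sqrt(2.0 * Q_E / M_E) * voltage ** 1.5 / gap ** 2
```

Agreement with this limit for simple geometries is the kind of analytical comparison the abstract refers to; the curved-emitter cases then rely on the more general Gauss's-law formulation.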
A hybrid modeling approach for option pricing
NASA Astrophysics Data System (ADS)
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
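For reference, the Black-Scholes benchmark against which the hybrid models are compared can be computed directly; the numbers in the usage line are illustrative only:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price.
    S: spot, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: annualised volatility."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Example: index level 1000, strike 1050, 3 months to expiry, 2% rate, 20% vol
price = bs_call(1000.0, 1050.0, 0.25, 0.02, 0.20)
```

In the hybrid schemes, sigma would come from one of the GARCH-type estimates, and the network models learn the residual pricing structure that this closed form misses.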
Data-driven approach to human motion modeling with Lua and gesture description language
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.
2017-03-01
The aim of this paper is to present a novel human motion modelling and recognition approach that enables real-time MoCap signal evaluation. By motion (action) recognition we mean classification. The role of this approach is to propose a syntactic description procedure that can be easily understood, learnt and used in various motion modelling and recognition tasks in all MoCap systems, whether they are vision based or wearable-sensor based. To do so we have prepared an extension of the Gesture Description Language (GDL) methodology that enables movement description and real-time recognition, so that it can use not only positional coordinates of body joints but virtually any type of discretely measured MoCap output signal, such as accelerometer, magnetometer or gyroscope data. We have also prepared and evaluated a cross-platform implementation of this approach using the Lua scripting language and Java technology. This implementation is called Data Driven GDL (DD-GDL). In the tested scenarios the average execution speed is above 100 frames per second, which matches the acquisition rate of many popular MoCap solutions.
Numerical approach to model independently reconstruct f (R ) functions through cosmographic data
NASA Astrophysics Data System (ADS)
Pizza, Liberato
2015-06-01
The challenging issue of determining the correct f(R) among several possibilities is revisited here by means of numerical reconstructions of the modified Friedmann equations over the redshift interval z ∈ [0, 1]. Frequently, a severe degeneracy between f(R) approaches occurs, since different paradigms correctly explain present-time dynamics. To set the initial conditions on the f(R) functions, we employ the so-called cosmography of the Universe, i.e., the technique of fixing constraints on the observable Universe by comparing expanded observables with current data. This powerful approach is essentially model independent, and correspondingly we obtain a model-independent reconstruction of f(R(z)) classes within the interval z ∈ [0, 1]. To allow the Hubble rate to evolve around z ≤ 1, we considered three relevant frameworks of effective cosmological dynamics, i.e., the ΛCDM model, the Chevallier-Polarski-Linder parametrization, and a polynomial approach to dark energy. Finally, cumbersome algebra permits passing from f(z) to f(R), and the general outcome of our work is the determination of a viable f(R) function which effectively describes the observed dynamics of the Universe.
Jiang, Rui ; Yang, Hua ; Zhou, Linqi ; Kuo, C.-C. Jay ; Sun, Fengzhu ; Chen, Ting
2007-01-01
The increasing demand for the identification of genetic variation responsible for common diseases has translated into a need for sophisticated methods for effectively prioritizing mutations occurring in disease-associated genetic regions. In this article, we prioritize candidate nonsynonymous single-nucleotide polymorphisms (nsSNPs) through a bioinformatics approach that takes advantage of a set of improved numeric features derived from protein-sequence information and a new statistical learning model called “multiple selection rule voting” (MSRV). The sequence-based features can maximize the scope of applications of our approach, and the MSRV model can capture subtle characteristics of individual mutations. Systematic validation demonstrates that this approach is capable of prioritizing causal mutations for both simple monogenic diseases and complex polygenic diseases. Further studies of familial Alzheimer disease and diabetes show that the approach can enrich mutations underlying these polygenic diseases among the top candidate mutations. Application of this approach to unclassified mutations suggests that there are 10 suspicious mutations likely to cause disease, and there is strong support for this in the literature. PMID:17668383
Software Certification for Temporal Properties With Affordable Tool Qualification
NASA Technical Reports Server (NTRS)
Xia, Songtao; DiVito, Benedetto L.
2005-01-01
It has been recognized that a framework based on proof-carrying code (also called semantic-based software certification in its community) could be used as a candidate software certification process for the avionics industry. To meet this goal, tools in the "trust base" of a proof-carrying code system must be qualified by regulatory authorities. A family of semantic-based software certification approaches is described, each different in expressive power, level of automation and trust base. Of particular interest is the so-called abstraction-carrying code, which can certify temporal properties. When a pure abstraction-carrying code method is used in the context of industrial software certification, the fact that the trust base includes a model checker would incur a high qualification cost. This position paper proposes a hybrid of abstraction-based and proof-based certification methods so that the model checker used by a client can be significantly simplified, thereby leading to lower cost in tool qualification.
A random walk model to evaluate autism
NASA Astrophysics Data System (ADS)
Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.
2018-02-01
A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition within the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests observed in children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (and fails to do so with probability 1 - f). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, and their changes may indicate a different degree of autism.
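A simplified reading of the two-walker imitation scheme, in which one walker copies a randomly chosen past step of the other with probability f and otherwise relies on its own memory, can be sketched as follows; the memory parameter p and the exact update rule are assumptions and may differ from the paper's prescription:

```python
import numpy as np

def two_walker_erw(n_steps, p=0.75, f=0.5, rng=None):
    """Simulate two coupled elephant-random-walk-style walkers and
    return their positions over time; p is the memory persistence and
    f the probability that walker B imitates walker A's history."""
    rng = np.random.default_rng(rng)
    steps_a = [1 if rng.random() < 0.5 else -1]   # initial steps
    steps_b = [1 if rng.random() < 0.5 else -1]
    for _ in range(1, n_steps):
        # walker A: classical ERW memory rule (repeat a past step with prob. p)
        past = steps_a[rng.integers(len(steps_a))]
        steps_a.append(past if rng.random() < p else -past)
        # walker B: imitate A's history with probability f, else use its own
        source = steps_a if rng.random() < f else steps_b
        past = source[rng.integers(len(source))]
        steps_b.append(past if rng.random() < p else -past)
    return np.cumsum(steps_a), np.cumsum(steps_b)
```

The Hurst exponent is then read off from how the variance of each walker's position scales with time across repeated realizations, which is where the dependence on f enters.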
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
NASA Astrophysics Data System (ADS)
Bretin, Elie; Danescu, Alexandre; Penuelas, José; Masnou, Simon
2018-07-01
The structure of many multiphase systems is governed by an energy that penalizes the area of interfaces between phases weighted by surface tension coefficients. However, interface evolution laws depend also on interface mobility coefficients. Having in mind some applications where highly contrasted or even degenerate mobilities are involved, for which classical phase field models are inapplicable, we propose a new effective phase field approach to approximate multiphase mean curvature flows with mobilities. The key aspect of our model is to incorporate the mobilities not in the phase field energy (which is conventionally the case) but in the metric which determines the gradient flow. We show the consistency of such an approach by a formal analysis of the sharp interface limit. We also propose an efficient numerical scheme which allows us to illustrate the advantages of the model on various examples, such as the wetting of droplets on solid surfaces or the simulation of nanowire growth generated by the so-called vapor-liquid-solid method.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...
2015-07-14
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Galle, J; Hoffmann, M; Aust, G
2009-01-01
Collective phenomena in multi-cellular assemblies can be approached on different levels of complexity. Here, we discuss a number of mathematical models which consider the dynamics of each individual cell, so-called agent-based or individual-based models (IBMs). As a special feature, these models make it possible to account for intracellular decision processes which are triggered by biomechanical cell-cell or cell-matrix interactions. We discuss their impact on the growth and homeostasis of multi-cellular systems as simulated by lattice-free models. Our results demonstrate that cell polarisation subsequent to cell-cell contact formation can be a source of stability in epithelial monolayers. Stroma contact-dependent regulation of tumour cell proliferation and migration is shown to result in invasion dynamics in accordance with the migrating cancer stem cell hypothesis. However, we demonstrate that different regulation mechanisms can comply equally well with present experimental results. Thus, we suggest a panel of experimental studies for the in-depth validation of the model assumptions.
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers verification and protocol validation for distributed computer and communication systems using a computer-aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, including the formulation of new qualitative and quantitative measures and time-dependent behavior; and (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
Health Capability: Conceptualization and Operationalization
2010-01-01
Current theoretical approaches to bioethics and public health ethics propose varied justifications as the basis for health care and public health, yet none captures a fundamental reality: people seek good health and the ability to pursue it. Existing models do not effectively address these twin goals. The approach I espouse captures both of these orientations through a concept here called health capability. Conceptually, health capability illuminates the conditions that affect health and one's ability to make health choices. By respecting the health consequences individuals face and their health agency, health capability offers promise for finding a balance between paternalism and autonomy. I offer a conceptual model of health capability and present a health capability profile to identify and address health capability gaps. PMID:19965570
Reconstruction-based Digital Dental Occlusion of the Partially Edentulous Dentition
Zhang, Jian; Xia, James J.; Li, Jianfu; Zhou, Xiaobo
2016-01-01
Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth present on the left and right sides of the dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is then performed by a digital dental occlusion re-establishment algorithm using the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with synthetic and real partially edentulous models. PMID:26584502
NASA Astrophysics Data System (ADS)
Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel
In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to deal efficiently with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
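For readers who want to reproduce the qualitative behavior without the circuit machinery, the sketch below integrates a dimensionless RCSJ (resistively and capacitively shunted junction) equation directly and inspects the FFT of the voltage, where a broadband spectrum is the usual signature of chaos. This is a plain ODE integration, not the Network Simulation Method / PSpice equivalence used in the paper, and the bias and drive parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless RCSJ model: beta_c * phi'' + phi' + sin(phi) = i_dc + i_ac * sin(omega * t)
beta_c, i_dc, i_ac, omega = 0.7, 0.3, 0.7, 0.6      # illustrative parameters

def rcsj(t, y):
    phi, v = y                                       # v = dphi/dt is the junction voltage
    dv = (i_dc + i_ac * np.sin(omega * t) - v - np.sin(phi)) / beta_c
    return [v, dv]

t_end, n = 2000.0, 2**14
t_eval = np.linspace(1000.0, t_end, n)               # discard the initial transient
sol = solve_ivp(rcsj, (0.0, t_end), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)

v = sol.y[1]
spectrum = np.abs(np.fft.rfft(v - v.mean()))**2
freqs = np.fft.rfftfreq(n, d=t_eval[1] - t_eval[0])
# A few sharp peaks indicate a periodic response; a broadband floor suggests chaos.
```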
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution with the considered method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter in axisymmetric coordinates are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
Abdullah, Fauziah; Su, Tin Tin
2013-01-01
The objective of this study was to evaluate the effect of a call-recall approach in enhancing Pap smear practice through changes of motivation stage among non-compliant women. A cluster randomized controlled trial with a parallel, un-blinded design was conducted between January and November 2010 in 40 public secondary schools in Malaysia among 403 female teachers who had never or infrequently attended for a Pap test. Cluster randomization was applied in assigning schools to the two groups. The intervention group received an invitation and reminder (call-recall program) for a Pap test (20 schools with 201 participants), while the control group received usual care from the existing cervical screening program (20 schools with 202 participants). Multivariate logistic regression was performed to determine the effect of the intervention program on the action stage (Pap smear uptake) at 24 weeks. In both groups, the pre-contemplation stage accounted for the highest proportion of changes in stages. At 24 weeks, the intervention group had roughly twice the odds of reaching the action stage compared with the control group (adjusted odds ratio 2.44, 95% CI 1.29-4.62). The positive effect of a call-recall approach in motivating women to change their screening behavior should be appreciated by policy makers and health care providers in developing countries as an intervention to enhance Pap smear uptake. Copyright © 2013 Elsevier Inc. All rights reserved.
Heget, Jeffrey R; Bagian, James P; Lee, Caryl Z; Gosbee, John W
2002-12-01
In 1998 the Veterans Health Administration (VHA) created the National Center for Patient Safety (NCPS) to lead the effort to reduce adverse events and close calls systemwide. NCPS's aim is to foster a culture of safety in the Department of Veterans Affairs (VA) by developing and providing patient safety programs and delivering standardized tools, methods, and initiatives to the 163 VA facilities. To create a system-oriented approach to patient safety, NCPS looked for models in fields such as aviation, nuclear power, human factors, and safety engineering. Core concepts included a non-punitive approach to patient safety activities that emphasizes systems-based learning, the active seeking out of close calls, which are viewed as opportunities for learning and investigation, and the use of interdisciplinary teams to investigate close calls and adverse events through a root cause analysis (RCA) process. Participation by VA facilities and networks was voluntary. NCPS has always aimed to develop a program that would be applicable both within the VA and beyond. NCPS's full patient safety program was tested and implemented throughout the VA system from November 1999 to August 2000. Program components included an RCA system for use by caregivers at the front line, a system for the aggregate review of RCA results, information systems software, alerts and advisories, and cognitive aids. Following program implementation, NCPS saw a 900-fold increase in reporting of close calls of high-priority events, reflecting the level of commitment to the program by VHA leaders and staff.
Project Photofly: New 3D Modeling Online Web Service (Case Studies and Assessments)
NASA Astrophysics Data System (ADS)
Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.
2011-09-01
During summer 2010, Autodesk released a still ongoing project called Project Photofly, freely downloadable from the Autodesk Labs web site until August 1, 2011. Project Photofly, based on computer-vision and photogrammetric principles and exploiting the power of cloud computing, is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The analysis focuses mainly on the automatic approach.
Schwartz, Carolyn E; Rapkin, Bruce D
2004-01-01
The increasing evidence for response shift phenomena in quality of life (QOL) assessment points to the necessity to reconsider both the measurement model and the application of psychometric analyses. The proposed psychometric model posits that the QOL true score is always contingent upon parameters of the appraisal process. This new model calls into question existing methods for establishing the reliability and validity of QOL assessment tools and suggests several new approaches for describing the psychometric properties of these scales. Recommendations for integrating the assessment of appraisal into QOL research and clinical practice are discussed. PMID:15038830
Berezinskii-Kosterlitz-Thouless transition in the time-reversal-symmetric Hofstadter-Hubbard model
NASA Astrophysics Data System (ADS)
Iskin, M.
2018-01-01
Assuming that two-component Fermi gases with opposite artificial magnetic fields on a square optical lattice are well described by the so-called time-reversal-symmetric Hofstadter-Hubbard model, we explore the thermal superfluid properties along with the critical Berezinskii-Kosterlitz-Thouless (BKT) transition temperature in this model over a wide range of its parameters. In particular, since our self-consistent BCS-BKT approach takes the multiband butterfly spectrum explicitly into account, it unveils how dramatically the interband contribution to the phase stiffness dominates the intraband one with an increasing interaction strength for any given magnetic flux.
Analysis student self efficacy in terms of using Discovery Learning model with SAVI approach
NASA Astrophysics Data System (ADS)
Sahara, Rifki; Mardiyana, S., Dewi Retno Sari
2017-12-01
Students are often unable to demonstrate academic achievement at the level their abilities would allow. One reason is that they often feel unsure that they are capable of completing the tasks assigned to them. Such belief is necessary for students, and it is called self efficacy. Self efficacy is not something brought about by birth or a permanent quality of an individual, but the result of cognitive processes, meaning that one's self efficacy can be stimulated through learning activities. Self efficacy can be developed and enhanced by a learning model that stimulates students to foster confidence in their capabilities. One such model is the Discovery Learning model with the SAVI approach, a learning model that involves the active participation of students in exploring and discovering their own knowledge and using it in problem solving by utilizing all the sensory devices they have. This naturalistic qualitative research aims to analyze student self efficacy in relation to the use of the Discovery Learning model with the SAVI approach. The subjects of this study were 30 students, with a focus on eight students having high, medium, and low self efficacy, selected through a purposive sampling technique. Data analysis proceeded in three stages: data reduction, data display, and conclusion drawing. Based on the results of the data analysis, it was concluded that the dimension of self efficacy that appeared most dominantly in learning with the Discovery Learning model with the SAVI approach was the magnitude dimension.
Connectotyping: Model Based Fingerprinting of the Functional Connectome
Miranda-Dominguez, Oscar; Mills, Brian D.; Carpenter, Samuel D.; Grant, Kathleen A.; Kroenke, Christopher D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
A better characterization of how an individual’s brain is functionally organized will likely bring dramatic advances to many fields of study. Here we show a model-based approach toward characterizing resting state functional connectivity MRI (rs-fcMRI) that is capable of identifying a so-called “connectotype”, or functional fingerprint in individual participants. The approach rests on a simple linear model that proposes the activity of a given brain region can be described by the weighted sum of its functional neighboring regions. The resulting coefficients correspond to a personalized model-based connectivity matrix that is capable of predicting the timeseries of each subject. Importantly, the model itself is subject specific and has the ability to predict an individual at a later date using a limited number of non-sequential frames. While we show that there is a significant amount of shared variance between models across subjects, the model’s ability to discriminate an individual is driven by unique connections in higher order control regions in frontal and parietal cortices. Furthermore, we show that the connectotype is present in non-human primates as well, highlighting the translational potential of the approach. PMID:25386919
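A minimal sketch of the underlying linear model is given below: each region's timeseries is regressed on the timeseries of all other regions, and the stacked coefficients form a subject-specific connectivity matrix. The random data stand in for a regions-by-timepoints rs-fcMRI matrix; the held-out-frame prediction and subject-matching steps described in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_tr = 20, 300
ts = rng.standard_normal((n_regions, n_tr))        # placeholder timeseries (regions x timepoints)

betas = np.zeros((n_regions, n_regions))
for i in range(n_regions):
    others = np.delete(np.arange(n_regions), i)
    X = ts[others].T                               # timepoints x (n_regions - 1)
    y = ts[i]                                      # timeseries of region i
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares weights
    betas[i, others] = coef                        # row i: region i's personalized model

# 'betas' is the model-based connectivity matrix (the "connectotype"); predicting
# new frames with it and matching subjects by prediction accuracy is the
# fingerprinting idea.
```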
NASA Astrophysics Data System (ADS)
Ruiz Pérez, Guiomar; Latron, Jérôme; Llorens, Pilar; Gallart, Francesc; Francés, Félix
2017-04-01
Selecting an adequate hydrological model is the first step to carry out a rainfall-runoff modelling exercise. A hydrological model is a hypothesis of catchment functioning, encompassing a description of dominant hydrological processes and predicting how these processes interact to produce the catchment's response to external forcing. Current research lines emphasize the importance of multiple working hypotheses for hydrological modelling instead of only using a single model. In line with this philosophy, here different hypotheses were considered and analysed to simulate the nonlinear response of a small Mediterranean catchment and to progress in the analysis of its hydrological behaviour. In particular, three hydrological models were considered, representing different potential hypotheses: two lumped models called LU3 and LU4, and one distributed model called TETIS. To determine how well each specific model performed and to assess whether a model was more adequate than another, we devised three complementary tests: one based on the analysis of residual error series, another based on a sensitivity analysis, and the last one based on multiple evaluation criteria associated with the concept of the Pareto frontier. This modelling approach, based on multiple working hypotheses, helped to improve our perceptual model of the catchment behaviour and, furthermore, could be used as guidance to improve the performance of other environmental models.
High call volume at poison control centers: identification and implications for communication
CARAVATI, E. M.; LATIMER, S.; REBLIN, M.; BENNETT, H. K. W.; CUMMINS, M. R.; CROUCH, B. I.; ELLINGTON, L.
2016-01-01
Context: High volume surges in health care are uncommon and unpredictable events. Their impact on health system performance and capacity is difficult to study. Objectives: To identify time periods that exhibited very busy conditions at a poison control center and to determine whether cases and communication during high volume call periods are different from cases during low volume periods. Methods: Call data from a US poison control center over twelve consecutive months was collected via a call logger and an electronic case database (Toxicall®). Variables evaluated for high call volume conditions were: (1) call duration; (2) number of cases; and (3) number of calls per staff member per 30 minute period. Statistical analyses identified peak periods as busier than 99% of all other 30 minute time periods and low volume periods as slower than 70% of all other 30 minute periods. Case and communication characteristics of high volume and low volume calls were compared using logistic regression. Results: A total of 65,364 incoming calls occurred over 12 months. One hundred high call volume and 4885 low call volume 30 minute periods were identified. High volume periods were more common between 1500 and 2300 hours and during the winter months. Coded verbal communication data were evaluated for 42 high volume and 296 low volume calls. The mean (standard deviation) call length of these calls during high volume and low volume periods was 3 minutes 27 seconds (1 minute 46 seconds) and 3 minutes 57 seconds (2 minutes 11 seconds), respectively. Regression analyses revealed a trend for fewer overall verbal statements and fewer staff questions during peak periods, but no other significant differences for staff-caller communication behaviors were found. Conclusion: Peak activity for poison center call volume can be identified by statistical modeling. Calls during high volume periods were similar to low volume calls. Communication was more concise yet staff was able to maintain a good rapport with callers during busy call periods. This approach allows evaluation of poison exposure call characteristics and communication during high volume periods. PMID:22889059
High call volume at poison control centers: identification and implications for communication.
Caravati, E M; Latimer, S; Reblin, M; Bennett, H K W; Cummins, M R; Crouch, B I; Ellington, L
2012-09-01
High volume surges in health care are uncommon and unpredictable events. Their impact on health system performance and capacity is difficult to study. To identify time periods that exhibited very busy conditions at a poison control center and to determine whether cases and communication during high volume call periods are different from cases during low volume periods. Call data from a US poison control center over twelve consecutive months was collected via a call logger and an electronic case database (Toxicall®). Variables evaluated for high call volume conditions were: (1) call duration; (2) number of cases; and (3) number of calls per staff member per 30 minute period. Statistical analyses identified peak periods as busier than 99% of all other 30 minute time periods and low volume periods as slower than 70% of all other 30 minute periods. Case and communication characteristics of high volume and low volume calls were compared using logistic regression. A total of 65,364 incoming calls occurred over 12 months. One hundred high call volume and 4885 low call volume 30 minute periods were identified. High volume periods were more common between 1500 and 2300 hours and during the winter months. Coded verbal communication data were evaluated for 42 high volume and 296 low volume calls. The mean (standard deviation) call length of these calls during high volume and low volume periods was 3 minutes 27 seconds (1 minute 46 seconds) and 3 minutes 57 seconds (2 minutes 11 seconds), respectively. Regression analyses revealed a trend for fewer overall verbal statements and fewer staff questions during peak periods, but no other significant differences for staff-caller communication behaviors were found. Peak activity for poison center call volume can be identified by statistical modeling. Calls during high volume periods were similar to low volume calls. Communication was more concise yet staff was able to maintain a good rapport with callers during busy call periods. This approach allows evaluation of poison exposure call characteristics and communication during high volume periods.
Developing CALL to Meet the Needs of Language Teaching and Learning
ERIC Educational Resources Information Center
Jiang, Zhaofeng
2008-01-01
This paper illustrates the advantages and disadvantages of CALL. It points out that CALL is influenced by traditional language teaching and learning approaches to some extent. It concludes that what is important in our university system is that CALL design and implementation should match the users' needs, since CALL is not always better than…
ERIC Educational Resources Information Center
Gelfuso, Andrea; Dennis, Danielle V.; Parker, Audra
2015-01-01
Recent calls for a shift to clinically-based models of teacher preparation prompt a research focus on the quality of classroom experiences in which pre-service teachers engage and the level to which theory and practice connect to inform those experiences. Developing a theoretical framework to conceptualize an approach to this work is an essential…
Modeling the User for Education, Training, and Performance Aiding
2004-04-01
It is a fact both obvious and frequently neglected that human competence, which is a product of human cognition, is essential to every military...approach has been called an instructional imperative and an economic impossibility. It would be maximally effective but remains unaffordable because...based on reflection) all producing different perceptual, memory, and motor functions. (Sloman, 2001; 2003) (http://www.cs.bham.ac.uk/~axs
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection-dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density-dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
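The following sketch illustrates only the equidistribution principle that moving mesh methods build on: grid points are relocated so that each cell carries an equal share of a monitor function that is large where the solution is steep. The arc-length monitor and the sharp-front profile are illustrative assumptions; the paper solves a moving mesh partial differential equation coupled to the physical equations rather than this one-shot static redistribution.

```python
import numpy as np

def equidistribute(x_uniform, u, alpha=1.0):
    # Monitor M = sqrt(1 + alpha * u_x^2): large where the solution is steep.
    ux = np.gradient(u, x_uniform)
    monitor = np.sqrt(1.0 + alpha * ux**2)
    # Cumulative "mass" of the monitor along the domain (trapezoid rule).
    mass = np.concatenate(([0.0],
                           np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x_uniform))))
    targets = np.linspace(0.0, mass[-1], len(x_uniform))
    # Invert the cumulative mass: new points carry equal monitor mass per cell.
    return np.interp(targets, mass, x_uniform)

x = np.linspace(0.0, 1.0, 101)
u = 0.5 * (1.0 - np.tanh((x - 0.6) / 0.01))          # sharp front near x = 0.6
x_new = equidistribute(x, u, alpha=50.0)             # points cluster around the front
```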
dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms.
Puritz, Jonathan B; Hollenbeck, Christopher M; Gold, John R
2014-01-01
Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both reads of paired-end RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads instead of filtering them and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com.
dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms
Hollenbeck, Christopher M.; Gold, John R.
2014-01-01
Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both reads of paired-end RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads instead of filtering them and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com. PMID:24949246
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
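The sketch below shows the kind of per-mode block-DCT statistics such a detector can be built on: a grayscale patch is split into 8x8 blocks, each block is transformed with a 2-D DCT, and the coefficients of one chosen mode are histogrammed. The discriminative feature and the adaptive mode selection proposed in the paper are not reproduced here; the random patch and the chosen mode are placeholders.

```python
import numpy as np
from scipy.fft import dctn

def mode_histogram(patch, mode=(0, 1), n_bins=41):
    # Collect the DCT coefficient of one mode from every 8x8 block of the patch.
    h, w = (patch.shape[0] // 8) * 8, (patch.shape[1] // 8) * 8
    coeffs = []
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            block = dctn(patch[r:r + 8, c:c + 8].astype(float), norm="ortho")
            coeffs.append(block[mode])
    coeffs = np.round(coeffs)                          # mimic coarse quantisation
    edges = np.arange(-(n_bins // 2) - 0.5, n_bins // 2 + 1.5)
    hist, _ = np.histogram(np.clip(coeffs, edges[0], edges[-1]), bins=edges)
    return hist / hist.sum()                           # normalized per-mode histogram

patch = np.random.default_rng(1).integers(0, 256, size=(64, 64))  # placeholder image patch
h01 = mode_histogram(patch, mode=(0, 1))
```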
Cullis, B R; Smith, A B; Beeck, C P; Cowling, W A
2010-11-01
Exploring and exploiting variety by environment (V × E) interaction is one of the major challenges facing plant breeders. In paper I of this series, we presented an approach to modelling V × E interaction in the analysis of complex multi-environment trials using factor analytic models. In this paper, we develop a range of statistical tools which explore V × E interaction in this context. These tools include graphical displays such as heat-maps of genetic correlation matrices as well as so-called E-scaled uniplots that are a more informative alternative to the classical biplot for large plant breeding multi-environment trials. We also present a new approach to prediction for multi-environment trials that include pedigree information. This approach allows meaningful selection indices to be formed either for potential new varieties or potential parents.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first one, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second one approximates the TT rank of a tensor through a multilinear matrix factorization model and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme that transforms a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
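To make the well-balanced matricization concrete, the sketch below unfolds a d-way tensor along every split between the first k modes and the remaining ones and sums weighted nuclear norms of the unfoldings, which is the kind of TT-rank-based objective SiLRTC-TT minimizes. It only evaluates such an objective; the completion algorithms themselves are not implemented, and the tensor and weights are placeholders.

```python
import numpy as np

def tt_nuclear_norms(tensor):
    # Unfold along each split k: modes 1..k become rows, modes k+1..d become columns.
    dims = tensor.shape
    norms = []
    for k in range(1, len(dims)):
        rows = int(np.prod(dims[:k]))
        unfold = tensor.reshape(rows, -1)
        norms.append(np.linalg.norm(unfold, ord="nuc"))  # nuclear norm of the unfolding
    return norms

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 6, 6, 4))                    # placeholder 4-way tensor
weights = np.ones(T.ndim - 1) / (T.ndim - 1)             # placeholder split weights
objective = float(np.dot(weights, tt_nuclear_norms(T)))  # weighted sum over all splits
```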
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, M.A.; Craig, J.I.
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Focusing on relationships, not information, respects autonomy during antenatal consultations.
Gaucher, Nathalie; Payot, Antoine
2017-01-01
Policy statements regarding antenatal consultations for preterm labour are guided by physicians' concerns for upholding the legal doctrine of informed consent, through the provision of standardised homogeneous medical information. This approach, led by classical in-control conceptions of patient autonomy, conceives moral agents as rational, independent, self-sufficient decision-makers. Recent studies on these antenatal consultations have explored patients' perspectives, and these differ from guidelines' suggestions. Relational autonomy - which understands moral agents as rational, emotional, creative and interdependent - resonates impressively with these new data. A model for antenatal consultations is proposed. This approach encourages clinicians to explore individual patients' lived experiences and engage in trusting empowering relationships. Moreover, it calls on physicians to enhance patients' relational autonomy by becoming advocates for their patients within healthcare institutions and professional organisations, while calling for broadscale policy changes to encourage further funding and support in investigations of the patient's voice. ©2016 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Darbandi, Masoud; Abrar, Bagher
2018-01-01
The spectral-line weighted-sum-of-gray-gases (SLW) model is a modern global model which can be used to predict thermal radiation heat transfer within combustion fields. Past users of the SLW model have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficient. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of the gas at a reference thermodynamic state in the domain. However, this assumption is not reasonable in combustion fields where the gas temperature differs greatly from the reference temperature. Consequently, the results of the SLW model combined with the classical reference approach, here called the classical SLW method, are highly sensitive to the chosen reference temperature in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is one particular form among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide more accurate total emissivity calculations than the classical reference approach when coupled with the SLW method. This is particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To confirm this, we use both the classical and modified SLW methods to calculate the radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the results to the chosen reference temperature in treating highly non-isothermal combustion fields.
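For orientation, a weighted-sum-of-gray-gases style total emissivity has the form eps(pL) = sum_j a_j (1 - exp(-kappa_j p L)), and the sketch below simply evaluates it. The gray-gas weights and absorption coefficients are placeholder numbers, not the SLW coefficients that the paper derives from the absorption-line blackbody distribution function at a reference (or modified reference) state.

```python
import numpy as np

def total_emissivity(weights, kappas, p_atm, path_m):
    # eps = sum_j a_j * (1 - exp(-kappa_j * p * L)); the clear-gas fraction is 1 - sum(a_j).
    weights, kappas = np.asarray(weights), np.asarray(kappas)
    return float(np.sum(weights * (1.0 - np.exp(-kappas * p_atm * path_m))))

a = [0.35, 0.25, 0.15]        # gray-gas weights (illustrative placeholders)
kappa = [0.2, 2.0, 20.0]      # absorption coefficients in 1/(atm*m) (illustrative placeholders)
eps = total_emissivity(a, kappa, p_atm=0.2, path_m=1.0)
```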
Xi, Jianing; Wang, Minghui; Li, Ao
2017-09-26
The accumulating availability of next-generation sequencing data offers an opportunity to pinpoint driver genes that are causally implicated in oncogenesis through computational models. Despite previous efforts made regarding this challenging problem, there is still room for improvement in driver gene identification accuracy. In this paper, we propose a novel integrated approach called IntDriver for prioritizing driver genes. Based on a matrix factorization framework, IntDriver can effectively incorporate functional information from both the interaction network and Gene Ontology similarity, and detect driver genes mutated in different sets of patients at the same time. When evaluated against known benchmark driver genes, the top-ranked genes in our results show highly significant enrichment for the known genes. Meanwhile, IntDriver also detects some known driver genes that are not found by the other competing approaches. When measured by precision, recall and F1 score, the performance of our approach is comparable to or better than that of the competing approaches.
NASA Astrophysics Data System (ADS)
Corni, Federico; Fuchs, Hans U.; Savino, Giovanni
2018-02-01
This is a description of the conceptual foundations used for designing a novel learning environment for mechanics implemented as an Industrial Educational Laboratory - called Fisica in Moto (FiM) - at the Ducati Foundation in Bologna. In this paper, we will describe the motivation for and design of the conceptual approach to mechanics used in the lab - as such, the paper is theoretical in nature. The goal of FiM is to provide an approach to the teaching of mechanics based upon imaginative structures found in continuum physics suitable to engineering and science. We show how continuum physics creates models of mechanical phenomena by using momentum and angular momentum as primitive quantities. We analyse this approach in terms of cognitive linguistic concepts such as conceptual metaphor and narrative framing of macroscopic physical phenomena. The model discussed here has been used in the didactical design of the actual lab and raises questions for an investigation of student learning of mechanics in a narrative setting.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
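A minimal sketch of the probabilistic multivariate idea is given below: sample the uncertain parameters of a tiny MDP, re-solve it by value iteration for each sample, and record how often the base-case optimal policy remains optimal, which is the quantity a policy acceptability curve summarizes against a willingness-to-accept threshold. The two-state, two-action MDP and the parameter perturbations are illustrative assumptions, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(3)

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    # P[a, s, s'] are transition probabilities, R[s, a] are rewards.
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * np.einsum("ast,t->as", P, V).T   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)                      # optimal action per state
        V = V_new

def base_mdp():
    P = np.array([[[0.8, 0.2], [0.8, 0.2]],
                  [[0.5, 0.5], [0.5, 0.5]]])             # base-case transitions
    R = np.array([[1.0, 0.8],
                  [0.2, 0.0]])                           # base-case rewards
    return P, R

def sample_mdp():
    # Perturb the base-case parameters to represent joint parameter uncertainty.
    P, R = base_mdp()
    P = np.clip(P + rng.normal(0.0, 0.05, P.shape), 1e-3, None)
    P /= P.sum(axis=2, keepdims=True)
    return P, R + rng.normal(0.0, 0.1, R.shape)

base_policy = value_iteration(*base_mdp())
agree = np.mean([np.array_equal(value_iteration(*sample_mdp()), base_policy)
                 for _ in range(500)])
# 'agree' estimates the confidence that the base-case policy stays optimal under
# joint parameter uncertainty; sweeping an acceptance threshold over such runs
# traces out a policy acceptability curve.
```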
Improving mapping and SNP-calling performance in multiplexed targeted next-generation sequencing
2012-01-01
Background: Compared to classical genotyping, targeted next-generation sequencing (tNGS) can be custom-designed to interrogate entire genomic regions of interest, in order to detect novel as well as known variants. To bring down the per-sample cost, one approach is to pool barcoded NGS libraries before sample enrichment. Still, we lack a complete understanding of how this multiplexed tNGS approach and the varying performance of the ever-evolving analytical tools can affect the quality of variant discovery. Therefore, we evaluated the impact of different software tools and analytical approaches on the discovery of single nucleotide polymorphisms (SNPs) in multiplexed tNGS data. To generate our own test model, we combined a sequence capture method with NGS in three experimental stages of increasing complexity (E. coli genes, multiplexed E. coli, and multiplexed HapMap BRCA1/2 regions). Results: We successfully enriched barcoded NGS libraries instead of genomic DNA, achieving reproducible coverage profiles (Pearson correlation coefficients of up to 0.99) across multiplexed samples, with <10% strand bias. However, the SNP calling quality was substantially affected by the choice of tools and mapping strategy. With the aim of reducing computational requirements, we compared conventional whole-genome mapping and SNP-calling with a new faster approach: target-region mapping with subsequent ‘read-backmapping’ to the whole genome to reduce the false detection rate. Consequently, we developed a combined mapping pipeline, which includes standard tools (BWA, SAMtools, etc.), and tested it on public HiSeq2000 exome data from the 1000 Genomes Project. Our pipeline saved 12 hours of run time per HiSeq2000 exome sample and detected ~5% more SNPs than the conventional whole genome approach. This suggests that more potential novel SNPs may be discovered using both approaches than with just the conventional approach. Conclusions: We recommend applying our general ‘two-step’ mapping approach for more efficient SNP discovery in tNGS. Our study has also shown the benefit of computing inter-sample SNP-concordances and inspecting read alignments in order to attain more confident results. PMID:22913592
Enhancement Approach of Object Constraint Language Generation
NASA Astrophysics Data System (ADS)
Salemi, Samin; Selamat, Ali
2018-01-01
OCL is the most prevalent language for documenting system constraints annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax. Therefore, an approach to help and assist developers in writing OCL specifications is needed. There are two existing approaches: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach, supported by another tool called NL2OCLviaSBVR, that generates OCL specifications automatically. This study presents another MDA-based approach called En2OCL, and its objective is twofold: (1) to improve the precision of the existing works, and (2) to present a benchmark of these approaches. The benchmark shows that the accuracies of COPACABANA, NL2OCLviaSBVR, and En2OCL are 69.23, 84.64, and 88.40, respectively.
Informed Source Separation: A Bayesian Tutorial
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2005-01-01
Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I will discuss the Bayesian approach to the source separation problem. This approach has a specific advantage in that it requires the designer to explicitly describe the signal model in addition to any other information or assumptions that go into the problem description. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
Maguire, Greg; Friedman, Peter
2015-05-26
The degree to, and the mechanisms through, which stem cells are able to build, maintain, and heal the body have only recently begun to be understood. Much of the stem cell's power resides in the release of a multitude of molecules, called stem cell released molecules (SRM). A fundamentally new type of therapeutic, namely "systems therapeutic", can be realized by reverse engineering the mechanisms of the SRM processes. Recent data demonstrates that the composition of the SRM is different for each type of stem cell, as well as for different states of each cell type. Although systems biology has been successfully used to analyze multiple pathways, the approach is often used to develop a small molecule interacting at only one pathway in the system. A new model is emerging in biology where systems biology is used to develop a new technology acting at multiple pathways called "systems therapeutics". A natural set of healing pathways in the human that uses SRM is instructive and of practical use in developing systems therapeutics. Endogenous SRM processes in the human body use a combination of SRM from two or more stem cell types, designated as S(2)RM, doing so under various state dependent conditions for each cell type. Here we describe our approach in using state-dependent SRM from two or more stem cell types, S(2)RM technology, to develop a new class of therapeutics called "systems therapeutics." Given the ubiquitous and powerful nature of innate S(2)RM-based healing in the human body, this "systems therapeutic" approach using S(2)RM technology will be important for the development of anti-cancer therapeutics, antimicrobials, wound care products and procedures, and a number of other therapeutics for many indications.
Two aspects of black hole entropy in Lanczos-Lovelock models of gravity
NASA Astrophysics Data System (ADS)
Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.
2012-03-01
We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.
Life Extending Control. [mechanical fatigue in reusable rocket engines]
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to life extending control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the implicit approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Life extending control: A concept paper
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to Life Extending Control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the Implicit Approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
NASA Astrophysics Data System (ADS)
Leontidis, Makis; Halatsis, Constantin
The aim of this paper is to present a model in order to integrate the learning style and the personality traits of a learner into an enhanced Affective Style which is stored in the learner’s model. This model which can deal with the cognitive abilities as well as the affective preferences of the learner is called Learner Affective Model (LAM). The LAM is used to retain learner’s knowledge and activities during his interaction with a Web-based learning environment and also to provide him with the appropriate pedagogical guidance. The proposed model makes use of an ontological approach in combination with the Bayesian Network model and contributes to the efficient management of the LAM in an Affective Module.
Assessment of predation risk through referential communication in incubating birds
NASA Astrophysics Data System (ADS)
Suzuki, Toshitaka N.
2015-05-01
Parents of many bird species produce alarm calls when they approach and deter a nest predator in order to defend their offspring. Alarm calls have been shown to warn nestlings about predatory threats, but parents also face a similar risk of predation when incubating eggs in their nests. Here, I show that incubating female Japanese great tits, Parus minor, assess predation risk by conspecific alarm calls given outside the nest cavity. Tits produce acoustically discrete alarm calls for different nest predators: “jar” calls for snakes and “chicka” calls for other predators such as crows and martens. Playback experiments revealed that incubating females responded to “jar” calls by leaving their nest, whereas they responded to “chicka” calls by looking out of the nest entrance. Since snakes invade the nest cavity, escaping from the nest helps females avoid snake predation. In contrast, “chicka” calls are used for a variety of predator types, and therefore, looking out of the nest entrance helps females gather information about the type and location of approaching predators. These results show that incubating females derive information about predator type from different types of alarm calls, providing a novel example of functionally referential communication.
Assessment of predation risk through referential communication in incubating birds.
Suzuki, Toshitaka N
2015-05-18
Parents of many bird species produce alarm calls when they approach and deter a nest predator in order to defend their offspring. Alarm calls have been shown to warn nestlings about predatory threats, but parents also face a similar risk of predation when incubating eggs in their nests. Here, I show that incubating female Japanese great tits, Parus minor, assess predation risk by conspecific alarm calls given outside the nest cavity. Tits produce acoustically discrete alarm calls for different nest predators: "jar" calls for snakes and "chicka" calls for other predators such as crows and martens. Playback experiments revealed that incubating females responded to "jar" calls by leaving their nest, whereas they responded to "chicka" calls by looking out of the nest entrance. Since snakes invade the nest cavity, escaping from the nest helps females avoid snake predation. In contrast, "chicka" calls are used for a variety of predator types, and therefore, looking out of the nest entrance helps females gather information about the type and location of approaching predators. These results show that incubating females derive information about predator type from different types of alarm calls, providing a novel example of functionally referential communication.
Evaluation of an urban vegetative canopy scheme and impact on plume dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Matthew A; Williams, Michael D; Zajic, Dragan
2009-01-01
The Quick Urban and Industrial Complex (QUIC) atmospheric dispersion modeling system attempts to fill an important gap between the fast, but non-building-aware, Gaussian plume models and the building-aware but slow computational fluid dynamics (CFD) models. While Gaussian models have the ability to give answers quickly to emergency responders, they are unlikely to be able to adequately account for the effects of building-induced complex flow patterns on the near-source dispersion of contaminants. QUIC uses a diagnostic mass-consistent empirical wind model called QUIC-URB that is based on the methodology of Rockle (1990; see also Kaplan and Dinar 1996). In this approach, the recirculation zones that form around and between buildings are inserted into the flow using empirical parameterizations and then the wind field is forced to be mass consistent. Although not as accurate as CFD codes, this approach is several orders of magnitude faster and accounts for the bulk effects of buildings.
Modelling the control of interceptive actions.
Beek, P J; Dessing, J C; Peper, C E; Bullock, D
2003-01-01
In recent years, several phenomenological dynamical models have been formulated that describe how perceptual variables are incorporated in the control of motor variables. We call these short-route models as they do not address how perception-action patterns might be constrained by the dynamical properties of the sensory, neural and musculoskeletal subsystems of the human action system. As an alternative, we advocate a long-route modelling approach in which the dynamics of these subsystems are explicitly addressed and integrated to reproduce interceptive actions. The approach is exemplified through a discussion of a recently developed model for interceptive actions consisting of a neural network architecture for the online generation of motor outflow commands, based on time-to-contact information and information about the relative positions and velocities of hand and ball. This network is shown to be consistent with both behavioural and neurophysiological data. Finally, some problems are discussed with regard to the question of how the motor outflow commands (i.e. the intended movement) might be modulated in view of the musculoskeletal dynamics. PMID:14561342
NASA Technical Reports Server (NTRS)
Chaparro, Daniel; Fujiwara, Gustavo E. C.; Ting, Eric; Nguyen, Nhan
2016-01-01
The need to rapidly scan large design spaces during conceptual design calls for computationally inexpensive tools such as the vortex lattice method (VLM). Although some VLM tools, such as Vorview have been extended to model fully-supersonic flow, VLM solutions are typically limited to inviscid, subcritical flow regimes. Many transport aircraft operate at transonic speeds, which limits the applicability of VLM for such applications. This paper presents a novel approach to correct three-dimensional VLM through coupling of two-dimensional transonic small disturbance (TSD) solutions along the span of an aircraft wing in order to accurately predict transonic aerodynamic loading and wave drag for transport aircraft. The approach is extended to predict flow separation and capture the attenuation of aerodynamic forces due to boundary layer viscosity by coupling the TSD solver with an integral boundary layer (IBL) model. The modeling framework is applied to the NASA General Transport Model (GTM) integrated with a novel control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF).
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. As an important state variable, the discretization of reservoir storage volume has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by considering storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having a much finer discretized space. It is believed that this advantage of the fuzzy SDP model is due to the smooth transitions between storage intervals, which benefit from soft boundaries.
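To make the fuzzy-arithmetic step above concrete, the short Python sketch below represents storage intervals as triangular fuzzy numbers with soft boundaries and combines them through fuzzy addition and subtraction in a simple water balance. The class, parameter values, and units are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): triangular fuzzy numbers as a
# stand-in for the soft storage-interval boundaries described in the abstract.
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy number defined by (left, peak, right)."""
    a: float
    b: float
    c: float

    def membership(self, x: float) -> float:
        if self.a < x <= self.b:
            return (x - self.a) / (self.b - self.a)
        if self.b < x < self.c:
            return (self.c - x) / (self.c - self.b)
        return 1.0 if x == self.b else 0.0

    def __add__(self, other: "TriFuzzy") -> "TriFuzzy":
        # Fuzzy addition of triangular numbers (exact for this shape).
        return TriFuzzy(self.a + other.a, self.b + other.b, self.c + other.c)

    def __sub__(self, other: "TriFuzzy") -> "TriFuzzy":
        # Fuzzy subtraction: left bound pairs with the other's right bound.
        return TriFuzzy(self.a - other.c, self.b - other.b, self.c - other.a)

# End-of-period storage = storage + inflow - release (all fuzzy, arbitrary units)
storage = TriFuzzy(40.0, 50.0, 60.0)   # storage interval with soft boundaries
inflow  = TriFuzzy(15.0, 20.0, 25.0)
release = TriFuzzy(18.0, 20.0, 22.0)

end_storage = storage + inflow - release
print(end_storage)                      # TriFuzzy(a=33.0, b=50.0, c=67.0)
print(end_storage.membership(50.0))     # 1.0 at the peak
```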
NASA Astrophysics Data System (ADS)
Su, Tengfei
2018-04-01
In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. The new approach builds on a method called under- and over-segmentation aware (UOA) and improves it by overcoming a defect in its estimation of over-segmentation error (OSE). Two cases of this error-prone defect are described, and edge strength is employed to devise a solution to the issue. Two subsets of high-resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to the improved OSE detection model.
Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression
Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi
2013-01-01
Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents is run in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optima. We apply our approach to a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds. PMID:23984382
Changing culture in the home health setting: strategies for success.
Boan, David
2006-01-01
Organizational culture is generally defined as the internal attributes of the staff, such as their values, beliefs, and attitudes. Although technically accurate as a definition, personal attributes defy direct intervention, leading some to question whether it is possible to change culture. It is proposed that it is possible to change the personal internal attributes that define organizational culture by changing the characteristic structures and behaviors of the organization that shape those attributes. This model, called the Quality Capability Model, creates an approach to culture change that accommodates the unique features of home health.
NASA Astrophysics Data System (ADS)
Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.
2012-04-01
In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical model of the empirical errors of perfect forecasts, using streamflow sub-samples defined by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples defined by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological dynamics and processes, i.e. sample heterogeneity. The same streamflow range can correspond to different processes, such as rising limbs or recessions, where uncertainties are different. The dynamical approach improves the reliability, skill and sharpness of forecasts and globally reduces confidence-interval width. When compared in detail, the dynamical approach allows a noticeable reduction of confidence intervals during recessions, where uncertainty is relatively lower, and a slight increase of confidence intervals during rising limbs or snowmelt, where uncertainty is greater. The dynamical approach, validated by forecasters' experience (the empirical approach was considered not discriminative enough), improved forecasters' confidence and the communication of uncertainties. Montanari, A. and Brath, A., (2004). A stochastic approach for assessing the uncertainty of rainfall-runoff simulations. Water Resources Research, 40, W01106, doi:10.1029/2003WR002540. Schaefli, B., Balin Talamba, D. and Musy, A., (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332, 303-315.
NASA Astrophysics Data System (ADS)
Xu, S.; Uneri, A.; Khanna, A. Jay; Siewerdsen, J. H.; Stayman, J. W.
2017-04-01
Metal artifacts can cause substantial image quality issues in computed tomography. This is particularly true in interventional imaging, where surgical tools or metal implants are in the field-of-view. Moreover, the region-of-interest is often near such devices, which is exactly where image quality degradations are largest. Previous work on known-component reconstruction (KCR) has shown that incorporating a physical model (e.g. shape, material composition, etc.) of the metal component into the reconstruction algorithm can significantly reduce artifacts, even near the edge of a metal component. However, for such approaches to be effective, they must have an accurate model of the component that includes energy-dependent properties of both the metal device and the CT scanner, placing a burden on system characterization and component material knowledge. In this work, we propose a modified KCR approach that adopts a mixed forward model with a polyenergetic model for the component and a monoenergetic model for the background anatomy. This new approach, called Poly-KCR, jointly estimates a spectral transfer function associated with known components in addition to the background attenuation values. Thus, this approach eliminates both the need to know the component's material composition a priori and the requirement for an energy-dependent characterization of the CT scanner. We demonstrate the efficacy of this novel approach and illustrate its improved performance over traditional and model-based iterative reconstruction methods in both simulation studies and physical data, including an implanted cadaver sample.
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend for discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (called the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (called the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph.
GraphTeams: a method for discovering spatial gene clusters in Hi-C sequencing data.
Schulz, Tizian; Stoye, Jens; Doerr, Daniel
2018-05-08
Hi-C sequencing offers novel, cost-effective means to study the spatial conformation of chromosomes. We use data obtained from Hi-C experiments to provide new evidence for the existence of spatial gene clusters. These are sets of genes with associated functionality that exhibit close proximity to each other in the spatial conformation of chromosomes across several related species. We present the first gene cluster model capable of handling spatial data. Our model generalizes a popular computational model for gene cluster prediction, called δ-teams, from sequences to graphs. Following previous lines of research, we subsequently extend our model to allow for several vertices being associated with the same label. The model, called δ-teams with families, is particularly suitable for our application as it enables handling of gene duplicates. We develop algorithmic solutions for both models. We implemented the algorithm for discovering δ-teams with families and integrated it into a fully automated workflow for discovering gene clusters in Hi-C data, called GraphTeams. We applied it to human and mouse data to find intra- and interchromosomal gene cluster candidates. The results include intrachromosomal clusters that seem to exhibit a closer proximity in space than on their chromosomal DNA sequence. We further discovered interchromosomal gene clusters that contain genes from different chromosomes within the human genome, but are located on a single chromosome in mouse. By identifying δ-teams with families, we provide a flexible model to discover gene cluster candidates in Hi-C data. Our analysis of Hi-C data from human and mouse reveals several known gene clusters (thus validating our approach), but also a few sparsely studied or possibly unknown gene cluster candidates that could be the source of further experimental investigations.
Evolutionary Tradeoffs between Economy and Effectiveness in Biological Homeostasis Systems
Szekely, Pablo; Sheftel, Hila; Mayo, Avi; Alon, Uri
2013-01-01
Biological regulatory systems face a fundamental tradeoff: they must be effective but at the same time also economical. For example, regulatory systems that are designed to repair damage must be effective in reducing damage, but economical in not making too many repair proteins because making excessive proteins carries a fitness cost to the cell, called protein burden. In order to see how biological systems compromise between the two tasks of effectiveness and economy, we applied an approach from economics and engineering called Pareto optimality. This approach allows calculating the best-compromise systems that optimally combine the two tasks. We used a simple and general model for regulation, known as integral feedback, and showed that best-compromise systems have particular combinations of biochemical parameters that control the response rate and basal level. We find that the optimal systems fall on a curve in parameter space. Due to this feature, even if one is able to measure only a small fraction of the system's parameters, one can infer the rest. We applied this approach to estimate parameters in three biological systems: response to heat shock and response to DNA damage in bacteria, and calcium homeostasis in mammals. PMID:23950698
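The integral-feedback motif named in this abstract can be illustrated with a minimal Python sketch that simulates a step of damage and reports an effectiveness measure (integrated deviation from the set point) against an economy measure (total repair protein produced, a proxy for protein burden). The gains and other parameter values are illustrative assumptions, not the fitted values from the paper.

```python
# Minimal sketch (assumed parameter names and values, not the authors' code):
# integral feedback returns a regulated variable y to its set point after a
# step of damage d, at the cost of producing repair protein z.
import numpy as np

def simulate(k_gain, setpoint=1.0, d=0.5, dt=0.01, t_end=50.0):
    """Euler integration of dz/dt = k*(setpoint - y), with y = setpoint - d + z."""
    steps = int(t_end / dt)
    z = 0.0
    ys, zs = [], []
    for _ in range(steps):
        y = setpoint - d + z               # damage depresses y, repair z restores it
        z += dt * k_gain * (setpoint - y)  # integral feedback on the error
        ys.append(y)
        zs.append(z)
    return np.array(ys), np.array(zs), dt

for k in (0.1, 1.0):
    y, z, dt = simulate(k)
    deviation = np.sum(np.abs(1.0 - y)) * dt   # effectiveness: error integral
    burden = np.sum(z) * dt                    # economy: total repair protein made
    print(f"gain={k:4.1f}  deviation={deviation:6.2f}  burden={burden:6.2f}")
```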
Expanding metal mixture toxicity models to natural stream and lake invertebrate communities
Balistrieri, Laurie S.; Mebane, Christopher A.; Schmidt, Travis S.; Keller, William (Bill)
2015-01-01
A modeling approach that was used to predict the toxicity of dissolved single and multiple metals to trout is extended to stream benthic macroinvertebrates, freshwater zooplankton, and Daphnia magna. The approach predicts the accumulation of toxicants (H, Al, Cd, Cu, Ni, Pb, and Zn) in organisms using 3 equilibrium accumulation models that define interactions between dissolved cations and biological receptors (biotic ligands). These models differ in the structure of the receptors and include a 2-site biotic ligand model, a bidentate biotic ligand or 2-pKa model, and a humic acid model. The predicted accumulation of toxicants is weighted using toxicant-specific coefficients and incorporated into a toxicity function called Tox, which is then related to observed mortality or invertebrate community richness using a logistic equation. All accumulation models provide reasonable fits to metal concentrations in tissue samples of stream invertebrates. Despite the good fits, distinct differences in the magnitude of toxicant accumulation and biotic ligand speciation exist among the models for a given solution composition. However, predicted biological responses are similar among the models because there are interdependencies among model parameters in the accumulation–Tox models. To illustrate potential applications of the approaches, the 3 accumulation–Tox models for natural stream invertebrates are used in Monte Carlo simulations to predict the probability of adverse impacts in catchments of differing geology in central Colorado (USA); to link geology, water chemistry, and biological response; and to demonstrate how this approach can be used to screen for potential risks associated with resource development.
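As a concrete reading of the accumulation-to-Tox-to-mortality chain described above, here is a minimal Python sketch; the weighting coefficients, accumulations, and logistic parameters are invented illustrative values rather than the fitted ones from the study.

```python
# Hedged sketch (illustrative coefficients, not the fitted values from the paper):
# predicted toxicant accumulations are weighted into a toxicity function Tox,
# which is then related to mortality through a logistic equation.
import math

def tox(accumulation, weights):
    """Weighted sum of predicted toxicant accumulation on the biotic ligand."""
    return sum(weights[m] * accumulation[m] for m in accumulation)

def mortality(tox_value, beta0=-6.0, beta1=2.5):
    """Logistic link between Tox and the probability of mortality."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * tox_value)))

# Example: hypothetical accumulations (arbitrary units) and weighting coefficients.
acc = {"Cd": 0.4, "Cu": 1.2, "Zn": 0.8}
w   = {"Cd": 1.5, "Cu": 1.0, "Zn": 0.3}

t = tox(acc, w)
print(f"Tox = {t:.2f}, predicted mortality = {mortality(t):.2%}")
```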
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.
2006-11-30
This report describes the status of the current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents' material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented that reflects the coupled flow/orientation nature. The decoupled flow/orientation approach is also discussed which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through the fiber orientation predictions for selected LFT samples.
Unified treatment of the luminosity distance in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jaiyul; Scaccabarozzi, Fulvio, E-mail: jyoo@physik.uzh.ch, E-mail: fulvio@physik.uzh.ch
Comparing the luminosity distance measurements to its theoretical predictions is one of the cornerstones in establishing the modern cosmology. However, as shown in Biern and Yoo, its theoretical predictions in literature are often plagued with infrared divergences and gauge-dependences. This trend calls into question the sanity of the methods used to derive the luminosity distance. Here we critically investigate four different methods—the geometric approach, the Sachs approach, the Jacobi mapping approach, and the geodesic light cone (GLC) approach to modeling the luminosity distance, and we present a unified treatment of such methods, facilitating the comparison among the methods and checking their sanity. All of these four methods, if exercised properly, can be used to reproduce the correct description of the luminosity distance.
Enhancing the Design and Analysis of Flipped Learning Strategies
ERIC Educational Resources Information Center
Jenkins, Martin; Bokosmaty, Rena; Brown, Melanie; Browne, Chris; Gao, Qi; Hanson, Julie; Kupatadze, Ketevan
2017-01-01
There are numerous calls in the literature for research into the flipped learning approach to match the flood of popular media articles praising its impact on student learning and educational outcomes. This paper addresses those calls by proposing pedagogical strategies that promote active learning in "flipped" approaches and improved…
ADVANCES IN THE APPLICATION OF REMOTE SENSING TO PLANT INCORPORATED PROTECTANT CROP MONITORING
Current forecasts call for significant increases to the plantings of transgenic corn in the United States for the 2007 growing season and beyond. Transgenic acreage approaching 80% of the total corn plantings could be realized by 2009. These conditions call for a new approach to ...
Conceptualizing Skill within a Participatory Ecological Approach to Outdoor Adventure
ERIC Educational Resources Information Center
Mullins, Philip M.
2014-01-01
To answer calls for an ecological approach to outdoor adventure that can respond to the crisis of sustainability, this paper suggests greater theoretical and empirical attention to skill and skill development as shaping participant interactions with and experiences of environments, landscapes, places, and inhabitants. The paper reviews calls for…
Spatiotemporal Access Model Based on Reputation for the Sensing Layer of the IoT
Guo, Yunchuan; Yin, Lihua; Li, Chao
2014-01-01
Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints make the traditional access control techniques barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model. PMID:25177731
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
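The two global sensitivity measures named in these abstracts, Morris-style elementary effects for screening and first-order Sobol indices from variance decomposition, can be sketched in a few lines of Python. The three-parameter toy surrogate below is a stand-in assumption, not the laser-drilling metamodel.

```python
# Illustrative sketch, not the authors' implementation: Morris elementary
# effects and first-order Sobol indices for a toy surrogate model.
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Toy metamodel: strong effect of x0, weaker x1, negligible x2.
    return 4.0 * x[..., 0] + np.sin(np.pi * x[..., 1]) + 0.1 * x[..., 2]

def elementary_effects(f, dim, n_base=200, delta=0.1):
    """Mean absolute elementary effect per parameter (Morris screening)."""
    base = rng.uniform(0.0, 1.0 - delta, size=(n_base, dim))
    effects = np.empty((n_base, dim))
    for i in range(dim):
        shifted = base.copy()
        shifted[:, i] += delta
        effects[:, i] = (f(shifted) - f(base)) / delta
    return np.abs(effects).mean(axis=0)

def sobol_first_order(f, dim, n=20000):
    """Pick-and-freeze estimator of first-order Sobol indices."""
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # freeze column i from B
        s.append(np.mean(fB * (f(ABi) - fA)) / var)
    return np.array(s)

print("elementary effects:", elementary_effects(surrogate, 3))
print("first-order Sobol :", sobol_first_order(surrogate, 3))
```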
Chen, Yunjin; Pock, Thomas
2017-06-01
Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD (Trainable Nonlinear Reaction Diffusion). The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single-image super-resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and are thus highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.
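For orientation, the rough Python sketch below runs a few diffusion steps of the TNRD form u <- u - step * (sum_k Kk^T phi_k(Kk u) + lambda * (u - f)) on a toy image. The fixed derivative filters, the tanh influence function, and the step size are hand-picked placeholders, not trained parameters.

```python
# Rough sketch with hand-picked (untrained) filters and a tanh influence function;
# the trained TNRD models learn the filters and influence functions per step.
import numpy as np
from scipy.signal import convolve2d

def diffusion_step(u, f, kernels, lam=0.1, step=0.15):
    update = lam * (u - f)                            # reaction (data fidelity) force
    for k in kernels:
        response = convolve2d(u, k, mode="same", boundary="symm")
        influenced = np.tanh(response)                # stand-in influence function
        update += convolve2d(influenced, k[::-1, ::-1],   # adjoint filter
                             mode="same", boundary="symm")
    return u - step * update

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

kernels = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]  # x/y differences
u = noisy.copy()
for _ in range(10):                                   # a small number of steps
    u = diffusion_step(u, noisy, kernels)

print("RMSE noisy :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE output:", np.sqrt(np.mean((u - clean) ** 2)))
```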
Development of an RF-EMF Exposure Surrogate for Epidemiologic Research.
Roser, Katharina; Schoeni, Anna; Bürgi, Alfred; Röösli, Martin
2015-05-22
Exposure assessment is a crucial part in studying potential effects of RF-EMF. Using data from the HERMES study on adolescents, we developed an integrative exposure surrogate combining near-field and far-field RF-EMF exposure in a single brain and whole-body exposure measure. Contributions from far-field sources were modelled by propagation modelling and multivariable regression modelling using personal measurements. Contributions from near-field sources were assessed from both, questionnaires and mobile phone operator records. Mean cumulative brain and whole-body doses were 1559.7 mJ/kg and 339.9 mJ/kg per day, respectively. 98.4% of the brain dose originated from near-field sources, mainly from GSM mobile phone calls (93.1%) and from DECT phone calls (4.8%). Main contributors to the whole-body dose were GSM mobile phone calls (69.0%), use of computer, laptop and tablet connected to WLAN (12.2%) and data traffic on the mobile phone via WLAN (6.5%). The exposure from mobile phone base stations contributed 1.8% to the whole-body dose, while uplink exposure from other people's mobile phones contributed 3.6%. In conclusion, the proposed approach is considered useful to combine near-field and far-field exposure to an integrative exposure surrogate for exposure assessment in epidemiologic studies. However, substantial uncertainties remain about exposure contributions from various near-field and far-field sources.
Rio De Janeiro and Medellin: Similar Challenges, Different Approaches
2016-03-01
philosophy influenced their respective programs. Examining other countries, such as Argentina or Chile, could help show whether this thesis's...struggled with public insecurity caused by illegal armed groups. Both have developed new programs to address areas of violence and parts of the... program. In Colombia, the program is called the Medellín Model and originated out of the mayor's office. This thesis uses a comparative analysis to
ERIC Educational Resources Information Center
Cooper, Melanie; Klymkowsky, Michael
2013-01-01
The history of general chemistry is one of almost constant calls for reform, yet over the past 60 years little of substance has changed. Those reforms that have been implemented are almost entirely concerned with how the course is taught, rather than what is to be learned. Here we briefly discuss the history of the general chemistry curriculum and…
Digitizing for Computer-Aided Finite Element Model Generation.
1979-10-10
this approach is a collection of programs developed over the last eight years at the University of Arizona, and called the GIFTS system. This paper...briefly describes the latest version of the system, GIFTS-5, and demonstrates its suitability in a design environment by simple examples. The programs...constituting the GIFTS system were used as a tool for research in many areas, including mesh generation, finite element data base design, interactive
Crypto-Unitary Forms of Quantum Evolution Operators
NASA Astrophysics Data System (ADS)
Znojil, Miloslav
2013-06-01
The description of quantum evolution using the unitary operator u(t) = exp(-iht) requires that the underlying self-adjoint quantum Hamiltonian h remain time-independent. In a way that extends so-called PT-symmetric quantum mechanics to models with a manifestly time-dependent "charge" C(t), we propose and describe an extension of this exponential-operator approach to evolution to manifestly time-dependent self-adjoint quantum Hamiltonians h(t).
Fission properties of Po isotopes in different macroscopic-microscopic models
NASA Astrophysics Data System (ADS)
Bartel, J.; Pomorski, K.; Nerlo-Pomorska, B.; Schmitt, Ch
2015-11-01
Fission-barrier heights of nuclei in the Po isotopic chain are investigated in several macroscopic-microscopic models. Using the Yukawa-folded single-particle potential, the Lublin-Strasbourg drop (LSD) model, the Strutinsky shell-correction method to yield the shell corrections, and the BCS theory for the pairing contributions, fission-barrier heights are calculated and found to be in quite good agreement with the experimental data. This, however, turns out to be the case only when the underlying macroscopic, liquid-drop (LD) type theory is well chosen. Together with the LSD approach, different LD parametrizations proposed by Moretto et al. are tested. Four deformation parameters, describing respectively the elongation, neck formation, reflection asymmetry, and non-axiality of the nuclear shape, and thus defining the so-called modified Funny Hills shape parametrization, are used in the calculations. The present study clearly demonstrates that nuclear fission-barrier heights constitute a challenging and selective tool to discern between such different macroscopic approaches.
Overlapping community detection in weighted networks via a Bayesian approach
NASA Astrophysics Data System (ADS)
Chen, Yi; Wang, Xiaolong; Xiang, Xin; Tang, Buzhou; Chen, Qingcai; Fan, Shixi; Bu, Junzhao
2017-02-01
Complex networks as a powerful way to represent complex systems have been widely studied during the past several years. One of the most important tasks of complex network analysis is to detect communities embedded in networks. In the real world, weighted networks are very common and may contain overlapping communities where a node is allowed to belong to multiple communities. In this paper, we propose a novel Bayesian approach, called the Bayesian mixture network (BMN) model, to detect overlapping communities in weighted networks. The advantages of our method are (i) providing soft-partition solutions in weighted networks; (ii) providing soft memberships, which quantify 'how strongly' a node belongs to a community. Experiments on a large number of real and synthetic networks show that our model has the ability to detect overlapping communities in weighted networks and is competitive with other state-of-the-art models at shedding light on community partition.
A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses
Zhang, Chao; Li, Deyu; Yan, Yan
2015-01-01
In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a lot of uncertain medical information. Moreover, different medical experts might express their own views on the medical knowledge base, which may differ slightly from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model called the dual hesitant fuzzy multigranulation rough set over two universes by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772
A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems
NASA Astrophysics Data System (ADS)
Abdul-Hussin, Mowafak Hassan
2015-05-01
This paper presents an approach to constructing a class of S3PR nets for the modeling, simulation and control of processes occurring in flexible manufacturing systems (FMS), based on the elementary siphons of a Petri net. Siphons are central to the analysis and control of deadlocks in FMS. Petri net models support efficient structural analysis and utilization of FMSs, where different control policies can be implemented to achieve deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Structural analysis and reachability graph analysis are used for the analysis and control of the Petri net models. Petri nets have been used successfully as one of the most powerful tools for modelling FMS; using structural analysis, we show that liveness of such systems can be attributed to the absence of under-marked siphons.
Structural Identifiability of Dynamic Systems Biology Models
Villaverde, Alejandro F.
2016-01-01
A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
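The core idea, a rank test on an observability-identifiability matrix built from Lie derivatives after appending the parameters as constant states, can be sketched in Python with SymPy for a toy model. This is not the STRIKE-GOLDD toolbox itself (which is a MATLAB implementation); the model, input handling, and symbols below are illustrative assumptions.

```python
# Hedged sketch of the idea (not the STRIKE-GOLDD MATLAB toolbox): rank test on
# an observability-identifiability matrix built from Lie derivatives.
import sympy as sp

# Toy model with a constant input u: dx/dt = -p1*x + p2*u, output y = x.
x, p1, p2, u = sp.symbols("x p1 p2 u")
states = sp.Matrix([x, p1, p2])            # unknown parameters appended as states
f = sp.Matrix([-p1 * x + p2 * u, 0, 0])    # augmented dynamics (dp/dt = 0)

rows = [sp.Matrix([x])]                    # y = h(x) = x
for _ in range(len(states) - 1):           # successive Lie derivatives along f
    rows.append(rows[-1].jacobian(states) * f)

obs_ident = sp.Matrix.vstack(*rows).jacobian(states)
print("rank:", obs_ident.rank(), "of", states.shape[0])
# Full rank indicates local structural identifiability of p1 and p2 in this toy
# model; a rank deficiency would flag unidentifiable parameters.
```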
Modeling Coherent Strategies for the Sustainable Development Goals
NASA Astrophysics Data System (ADS)
Walsh, B.; Obersteiner, M.; Herrero, M.; Riahi, K.; Fritz, S.; van Vuuren, D.; Havlik, P.
2016-12-01
The Sustainable Development Goals (SDGs) call for a comprehensive new approach to development rooted in planetary boundaries, equity and inclusivity. Societies have largely responded to this call with siloed strategies capable of making progress on selected subsets of these goals. However, agendas crafted specifically to alleviate poverty, hunger, deforestation, biodiversity loss, or other ills may doom the SDG agenda, as policies and strategies designed to accomplish one or several goals can impede and in some cases reverse progress toward others at national, regional, and global levels. We adopt a comprehensive modeling approach to understand the basis for tradeoffs among environmental conservation initiatives (goals 13-15) and food prices (goal 2). We show that such tradeoffs are manifestations of policy-driven pressure in land (i.e. agricultural and environmental) systems. By reducing total land system pressure, Sustainable Consumption and Production (SCP, goal 12) policies minimize tradeoffs and should therefore be regarded as necessary conditions for achieving multiple SDGs. SDG strategies constructed around SCP policies escape problem-shifting, which has long placed global development and conservation agendas at odds. We expect that this and future systems analyses will allow policymakers to negotiate tradeoffs and exploit synergies as they assemble sustainable development strategies equal in scope to the ambition of the SDGs.
NASA Astrophysics Data System (ADS)
Zhao, Mi; Hou, Yifan; Liu, Ding
2010-10-01
In this article we deal with deadlock prevention problems for S4PR, a class of generalised Petri nets that can model a large class of flexible manufacturing systems in which deadlocks are caused by insufficiently marked siphons. We present a deadlock prevention methodology that is an iterative approach consisting of two stages. The first one is called siphon control, which adds, for each insufficiently marked minimal siphon, a control place to the original net. Its objective is to prevent a minimal siphon from being insufficiently marked. The second one, called control-induced siphon control, adds a control place to the augmented net with its output arcs connecting to the source transitions, which ensures that no new insufficiently marked siphons are generated. At each iteration, a mixed integer programming approach is adopted for generalised Petri nets to obtain an insufficiently marked minimal siphon from the maximal deadly siphon. In this way, complete siphon enumeration, which is much more time-consuming for a sizeable plant model than the proposed method, is avoided. The relation between the proposed method and the liveness and reversibility of the controlled net is established. Examples are presented to demonstrate the presented method.
NASA Astrophysics Data System (ADS)
Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph
2016-05-01
Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another; either companies from different sectors run different types of applications or, although deploying the same functionality, they do differ in the extent of their complexity. In any case, MTA leaves one major concern regarding the companies' data, their privacy and security, which requires special attention to the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.
Karras, Elizabeth; Lu, Naiji; Zuo, Guoxin; Tu, Xin M; Stephens, Brady; Draper, John; Thompson, Caitlin; Bossarte, Robert M
2016-08-01
Campaigns have become popular in public health approaches to suicide prevention; however, limited empirical investigation of their impact on behavior has been conducted. To address this gap, utilization patterns of crisis support services associated with the Department of Veterans Affairs' Veterans Crisis Line (VCL) suicide prevention campaign were examined. Daily call data for the National Suicide Prevention Lifeline, VCL, and 1-800-SUICIDE were modeled using a novel semi-varying coefficient method. Analyses reveal significant increases in call volume to both targeted and broad resources during the campaign. Findings underscore the need for further research to refine measurement of the effects of these suicide prevention efforts.
A real-time computer model to assess resident work-hours scenarios.
McDonald, Furman S; Ramakrishna, Gautam; Schultz, Henry J
2002-07-01
To accurately model residents' work hours and assess options to forthrightly meet Residency Review Committee-Internal Medicine (RRC-IM) requirements. The requirements limiting residents' work hours are clearly defined by the Accreditation Council for Graduate Medical Education (ACGME) and the RRC-IM: "When averaged over any four-week rotation or assignment, residents must not spend more than 80 hours per week in patient care duties."(1) The call for the profession to realistically address work-hours violations is of paramount importance.(2) Unfortunately, work hours are hard to calculate. We developed an electronic model of residents' work-hours scenarios using Microsoft Excel 97. This model allows the input of multiple parameters (i.e., call frequency, call position, days off, short-call, weeks per rotation, outpatient weeks, clinic day of the week, additional time due to clinic) and start and stop times for post-call, non-call, short-call, and weekend days. For each resident on a rotation, the model graphically demonstrates call schedules, plots clinic days, and portrays all possible and preferred days off. We tested the model for accuracy in several scenarios. For example, the model predicted average work hours of 85.1 hours per week for fourth-night-call rotations. This was compared with logs of actual work hours of 84.6 hours per week. Model accuracy for this scenario was 99.4% (95% CI 96.2%-100%). The model prospectively predicted work hours of 89.9 hours/week in the cardiac intensive care unit (CCU). Subsequent surveys found mean CCU work hours of 88.1 hours per week. Model accuracy for this scenario was 98% (95% CI 93.2-100%). Thus validated, we then used the model to test proposed scenarios for complying with RRC-IM limits. The flexibility of the model allowed demonstration of the full range of work-hours scenarios in every rotation of our 36-month program. Demonstrations of status-quo work-hours scenarios were presented to faculty as well as real-time demonstrations of the feasibility, or unfeasibility, of their proposed solutions. The model clearly demonstrated that non-call (i.e., short-call) admissions without concomitant decreases in overnight call frequency resulted in substantial increases in total work hours. Attempts to "get the resident out" an hour or two earlier each day had negligible effects on total hours and were unrealistic paper solutions. For fourth-night-call rotations, the addition of a "golden weekend" (i.e., a fifth day off per month) was found to significantly reduce work hours. The electronic model allowed the development of creative schedules for previously third-night-call rotations that limit resident work hours without decreasing continuity of care by scheduling overnight call every sixth night alternating with sixth-night-short-call rotations. Our electronic model is sufficiently robust to accurately estimate work hours on multiple and varied rotations. This model clearly demonstrates that it is very difficult to meet the RRC-IM work-hours limitations under standard fourth-night-call schedules with only four days off per month. We are successfully using our model to test proposed alternative scenarios, to overcome faculty misconceptions about resident work-hours "solutions," and to make changes to our call schedules that both are realistic for residents to accomplish and truly diminish total resident work hours toward the requirements of the RRC-IM.
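A stripped-down version of such a work-hours calculation can be sketched in Python; the shift lengths, call-block duration, and scheduling assumptions below are illustrative stand-ins, not the parameters of the Excel model described in the abstract.

```python
# Hedged sketch with assumed shift times (not the Excel model from the paper):
# estimate average weekly duty hours for an every-Nth-night overnight call rotation.

def average_weekly_hours(call_frequency,            # overnight call every Nth night
                         regular_day_hours=11.0,    # e.g. 07:00-18:00 on non-call days
                         call_block_hours=30.0,     # 07:00 through ~13:00 post-call
                         days_off_per_month=4,
                         days_per_month=28):
    on_duty_days = days_per_month - days_off_per_month
    call_nights = on_duty_days / call_frequency
    # Each call night replaces two regular days (the call day and the post-call
    # day) with one continuous call block.
    regular_days = on_duty_days - 2 * call_nights
    total_hours = call_nights * call_block_hours + regular_days * regular_day_hours
    return total_hours / (days_per_month / 7.0)

for n in (3, 4, 5):
    print(f"every-{n}th-night call: ~{average_weekly_hours(n):.1f} h/week")
```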
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient of 0.9 between predicted quality score and GDT_TS averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models, which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm
NASA Technical Reports Server (NTRS)
Baskaran, Subbiah; Noever, D.
1999-01-01
Tracking optima in real time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.
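A minimal Python sketch of the general mechanism, a binary-string genetic algorithm re-tracking an optimum that moves during the run, is given below; it is not the harvesting model from the report, and the encoding, operators, and schedule of the moving target are illustrative assumptions.

```python
# Minimal sketch (not the harvesting variant from the report): a binary-string
# genetic algorithm re-tracking an optimum that jumps partway through the run.
import random
random.seed(0)

BITS, POP, GENS = 16, 40, 120

def decode(bits):                        # map a bit string to a value in [0, 1]
    return int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

def fitness(bits, target):
    return -abs(decode(bits) - target)   # closer to the moving target is better

def evolve(pop, target, p_mut=0.02):
    scored = sorted(pop, key=lambda b: fitness(b, target), reverse=True)
    parents = scored[: POP // 2]         # truncation selection
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, BITS)                  # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
        children.append(child)
    return children

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for gen in range(GENS):
    target = 0.2 if gen < GENS // 2 else 0.8             # optimum jumps mid-run
    pop = evolve(pop, target)
    if gen % 30 == 29:
        best = max(pop, key=lambda b: fitness(b, target))
        print(f"gen {gen:3d}  target {target:.1f}  best {decode(best):.3f}")
```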
On the accuracy of personality judgment: a realistic approach.
Funder, D C
1995-10-01
The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.
Leistritz, L; Suesse, T; Haueisen, J; Hilgenfeld, B; Witte, H
2006-01-01
Directed information transfer in the human brain occurs presumably by oscillations. As of yet, most approaches for the analysis of these oscillations are based on time-frequency or coherence analysis. The present work concerns the modeling of cortical 600 Hz oscillations, localized within the Brodmann Areas 3b and 1 after stimulation of the nervus medianus, by means of coupled differential equations. This approach leads to the so-called parameter identification problem, where based on a given data set, a set of unknown parameters of a system of ordinary differential equations is determined by special optimization procedures. Some suitable algorithms for this task are presented in this paper. Finally an oscillatory network model is optimally fitted to the data taken from ten volunteers.
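The parameter identification problem described above can be illustrated with a much simpler stand-in: fitting the parameters of a single damped oscillator to synthetic data by least squares. The model, noise level, and initial guess below are assumptions for illustration, far simpler than the coupled cortical model in the paper.

```python
# Illustrative sketch of the parameter-identification step on synthetic data,
# using a single damped oscillator rather than the coupled cortical network.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def oscillator(t, y, omega, gamma):
    # y = [position, velocity] of a damped harmonic oscillator
    return [y[1], -omega**2 * y[0] - 2.0 * gamma * y[1]]

def simulate(params, t_eval):
    omega, gamma = params
    sol = solve_ivp(oscillator, (t_eval[0], t_eval[-1]), [1.0, 0.0],
                    t_eval=t_eval, args=(omega, gamma), rtol=1e-8)
    return sol.y[0]

t = np.linspace(0.0, 0.005, 200)                  # 5 ms window
true_params = (2 * np.pi * 600.0, 150.0)          # 600 Hz oscillation, mild damping
rng = np.random.default_rng(0)
data = simulate(true_params, t) + 0.02 * rng.standard_normal(t.size)

def residuals(params):
    return simulate(params, t) - data

fit = least_squares(residuals, x0=(2 * np.pi * 550.0, 50.0))  # rough initial guess
print("estimated frequency [Hz]:", fit.x[0] / (2 * np.pi))
print("estimated damping       :", fit.x[1])
```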
Acoustic multipath arrivals in the horizontal plane due to approaching nonlinear internal waves.
Badiey, Mohsen; Katsnelson, Boris G; Lin, Ying-Tsong; Lynch, James F
2011-04-01
Simultaneous measurements of acoustic wave transmissions and a nonlinear internal wave packet approaching an along-shelf acoustic path during the Shallow Water 2006 experiment are reported. The incoming internal wave packet acts as a moving frontal layer reflecting (or refracting) sound in the horizontal plane. Received acoustic signals are filtered into acoustic normal mode arrivals. It is shown that a horizontal multipath interference is produced. This has previously been called a horizontal Lloyd's mirror. The interference between the direct path and the refracted path depends on the mode number and frequency of the acoustic signal. A mechanism for the multipath interference is shown. Preliminary modeling results of this dynamic interaction using vertical modes and horizontal parabolic equation models are in good agreement with the observed data.
Phenomenological approach to mechanical damage growth analysis.
Pugno, Nicola; Bosia, Federico; Gliozzi, Antonio S; Delsanto, Pier Paolo; Carpinteri, Alberto
2008-10-01
The problem of characterizing damage evolution in a generic material is addressed with the aim of tracing it back to existing growth models in other fields of research. Based on energetic considerations, a system evolution equation is derived for a generic damage indicator describing a material system subjected to an increasing external stress. The latter is found to fit into the framework of a recently developed phenomenological universality (PUN) approach and, more specifically, the so-called U2 class. Analytical results are confirmed by numerical simulations based on a fiber-bundle model and statistically assigned local strengths at the microscale. The fits with numerical data prove, with an excellent degree of reliability, that the typical evolution of the damage indicator belongs to the aforementioned PUN class. Applications of this result are briefly discussed and suggested.
Probabilistic models of eukaryotic evolution: time for integration
Lartillot, Nicolas
2015-01-01
In spite of substantial work and recent progress, a global and fully resolved picture of the macroevolutionary history of eukaryotes is still under construction. This concerns not only the phylogenetic relations among major groups, but also the general characteristics of the underlying macroevolutionary processes, including the patterns of gene family evolution associated with endosymbioses, as well as their impact on the sequence evolutionary process. All these questions raise formidable methodological challenges, calling for a more powerful statistical paradigm. In this direction, model-based probabilistic approaches have played an increasingly important role. In particular, improved models of sequence evolution accounting for heterogeneities across sites and across lineages have led to significant, although insufficient, improvement in phylogenetic accuracy. More recently, one main trend has been to move away from simple parametric models and stepwise approaches, towards integrative models explicitly considering the intricate interplay between multiple levels of macroevolutionary processes. Such integrative models are in their infancy, and their application to the phylogeny of eukaryotes still requires substantial improvement of the underlying models, as well as additional computational developments. PMID:26323768
Zhou, Hongyi; Skolnick, Jeffrey
2010-01-01
In this work, we develop a method called FTCOM for assessing the global quality of protein structural models for targets of medium and hard difficulty (remote homology) produced by structure prediction approaches such as threading or ab initio structure prediction. FTCOM requires the Cα coordinates of full-length models and assesses model quality based on fragment comparison and a score derived from comparison of the model to top threading templates. On a set of 361 medium/hard targets, FTCOM was applied to the results of the SP3, SPARKS, PROSPECTOR_3, and PRO-SP3-TASSER threading algorithms and assessed for its ability to improve upon them. The average TM-score improves by 5%–10% for the first model selected by the new method over the models obtained by the original selection procedure in the respective threading methods. Moreover, the number of foldable targets (TM-score ≥0.4) increases by at least 7.6% for SP3 and by up to 54% for SPARKS. Thus, FTCOM is a promising approach to template selection. PMID:20455261
Characterization of photomultiplier tubes with a realistic model through GPU-boosted simulation
NASA Astrophysics Data System (ADS)
Anthony, M.; Aprile, E.; Grandi, L.; Lin, Q.; Saldanha, R.
2018-02-01
The accurate characterization of a photomultiplier tube (PMT) is crucial in a wide variety of applications. However, current methods do not give fully accurate representations of the response of a PMT, especially at very low light levels. In this work, we present a new and more realistic model of the response of a PMT, called the cascade model, and use it to characterize two different PMTs at various voltages and light levels. The cascade model is shown to outperform the more common Gaussian model in almost all circumstances and to agree well with a newly introduced model-independent approach. The technical and computational challenges of this model are also presented, along with the employed solution of developing a robust GPU-based analysis framework for this and other non-analytical models.
“And Yet It Was a Blessing”: The Case for Existential Maturity
Reddy, Neha; Hauser, Joshua; Sonnenfeld, Sarah B.
2017-01-01
We are interested in the kind of well-being that can occur as a person approaches death; we call it “existential maturity.” We describe a conceptual model of this state that we felt was realized in an individual case, illustrating the state by describing the case. Our goal is to articulate a generalizable, working model of existential maturity in concepts and terms taken from fundamentals of psychodynamic theory. We hope that a recognizable case and a model-based way of thinking about what was going on can both help guide care that fosters existential maturity and stimulate more theoretical modeling of the state. PMID:28128674
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patnaik, P. C.
The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET modelling contains all of the physics required to simulate the time dependent meteorology of a region with description of both the planetary boundary layer and upper level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.
Abnormal pressures as hydrodynamic phenomena
Neuzil, C.E.
1995-01-01
So-called abnormal pressures, subsurface fluid pressures significantly higher or lower than hydrostatic, have excited speculation about their origin since subsurface exploration first encountered them. Two distinct conceptual models for abnormal pressures have gained currency among earth scientists. The static model sees abnormal pressures generally as relict features preserved by a virtual absence of fluid flow over geologic time. The hydrodynamic model instead envisions abnormal pressures as phenomena in which flow usually plays an important role. This paper develops the theoretical framework for abnormal pressures as hydrodynamic phenomena, shows that it explains the manifold occurrences of abnormal pressures, and examines the implications of this approach.
Systems Engineering and Application of System Performance Modeling in SIM Lite Mission
NASA Technical Reports Server (NTRS)
Moshir, Mehrdad; Murphy, David W.; Milman, Mark H.; Meier, David L.
2010-01-01
The SIM Lite Astrometric Observatory will be the first space-based Michelson interferometer operating in the visible wavelength, with the ability to perform ultra-high precision astrometric measurements on distant celestial objects. SIM Lite data will address in a fundamental way questions such as characterization of Earth-mass planets around nearby stars. To accomplish these goals it is necessary to rely on a model-based systems engineering approach - much more so than most other space missions. This paper will describe in further detail the components of this end-to-end performance model, called "SIM-sim", and show how it has helped the systems engineering process.
FRaC: a feature-modeling approach for semi-supervised and unsupervised anomaly detection.
Noto, Keith; Brodley, Carla; Slonim, Donna
2012-01-01
Anomaly detection involves identifying rare data instances (anomalies) that come from a different class or distribution than the majority (which are simply called "normal" instances). Given a training set of only normal data, the semi-supervised anomaly detection task is to identify anomalies in the future. Good solutions to this task have applications in fraud and intrusion detection. The unsupervised anomaly detection task is different: Given unlabeled, mostly-normal data, identify the anomalies among them. Many real-world machine learning tasks, including many fraud and intrusion detection tasks, are unsupervised because it is impractical (or impossible) to verify all of the training data. We recently presented FRaC, a new approach for semi-supervised anomaly detection. FRaC is based on using normal instances to build an ensemble of feature models, and then identifying instances that disagree with those models as anomalous. In this paper, we investigate the behavior of FRaC experimentally and explain why FRaC is so successful. We also show that FRaC is a superior approach for the unsupervised as well as the semi-supervised anomaly detection task, compared to well-known state-of-the-art anomaly detection methods, LOF and one-class support vector machines, and to an existing feature-modeling approach.
Knies, David; Wittmüß, Philipp; Appel, Sebastian; Sawodny, Oliver; Ederer, Michael; Feuer, Ronny
2015-10-28
The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subjected to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically-changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable, based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but is the result of optimal resource management adapted to the diurnally-changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in close agreement with literature data.
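The classical FBA structure referred to here is a linear program: maximize a biomass flux subject to steady-state mass balance and flux bounds. The toy stoichiometric matrix below is purely illustrative; diuFBA additionally links several such problems across diurnal time slices through storage metabolites, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: rows = metabolites A, B, C; columns = reactions v1..v4
# v1: uptake -> A, v2: A -> B, v3: B -> C, v4: B + C -> biomass
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1, -1],
              [ 0,  0,  1, -1]])
bounds = [(0, 10)] * 4                 # lower/upper flux bounds
c = np.array([0, 0, 0, -1])            # maximize v4 (biomass) -> minimize -v4

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print(res.x)                           # optimal steady-state flux distribution
```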
Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants
NASA Astrophysics Data System (ADS)
Bernhoff, Niclas
2018-05-01
An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs there can appear extra collision invariants, so-called spurious collision invariants, in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, as well as for binary mixtures and, more recently, for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by assigning each molecule an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then be added as well.
Topographic Spreading Analysis of an Empirical Sex Workers' Network
NASA Astrophysics Data System (ADS)
Bjell, Johannes; Canright, Geoffrey; Engø-Monsen, Kenth; Remple, Valencia P.
The problem of epidemic spreading over networks has received considerable attention in recent years, due both to its intrinsic intellectual challenge and to its practical importance. A good recent summary of such work may be found in Newman (8), while (9) gives an outstanding example of a non-trivial prediction which is obtained from explicitly modeling the network in the epidemic spreading. In the language of mathematicians and computer scientists, a network of nodes connected by edges is called a graph. Most work on epidemic spreading over networks focuses on whole-graph properties, such as the percentage of infected nodes at long time. Two of us have, in contrast, focused on understanding the spread of an infection over time and space (the network) (61; 63; 62). This work involves decomposing any given network into subgraphs called regions (61). Regions are precisely defined as disjoint subgraphs which may be viewed as coarse-grained units of infection—in that, once one node in a region is infected, the progress of the infection over the remainder of the region is relatively fast and predictable (63). We note that this approach is based on the ‘Susceptible-Infected’ (SI) model of infection, in which nodes, once infected, are never cured. This model is reasonable for some infections, such as HIV—which is one of the diseases studied here. We also study gonorrhea and chlamydia, for which a more appropriate model is Susceptible-Infected-Susceptible (SIS) (67) (since nodes can be cured); we discuss the limitations of our approach for these cases below.
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
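For reference, a one-stage Latin hypercube sampler, the baseline that PLHS generalizes, can be written in a few lines of NumPy. The progressive, sliced construction described above is more involved and is not attempted here.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """One-stage LHS: each 1-D projection gets exactly one point per stratum."""
    rng = np.random.default_rng(seed)
    # one random permutation of the strata per dimension, plus jitter inside each stratum
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)], axis=1)
    jitter = rng.random((n_samples, n_dims))
    return (perms + jitter) / n_samples          # points in [0, 1)^d

sample = latin_hypercube(20, 3, seed=42)
```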
NASA Astrophysics Data System (ADS)
Bonzom, Valentin
2016-07-01
We review an approach which aims at studying discrete (pseudo-)manifolds in dimension d≥ 2 and called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum field theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée
2015-04-01
In numerical dosimetry, the recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of the LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performance than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
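The trend-plus-Kriging idea can be roughly imitated with scikit-learn: least-angle regression (LARS) selects a sparse set of polynomial regressors, and a Gaussian process then models the remaining residual. This is only a rough approximation under assumed data; the paper uses polynomial chaos bases and a true universal Kriging formulation rather than the residual-fitting shortcut shown here.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lars
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))                        # design of experiments
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(40)

# Sparse polynomial trend selected by least-angle regression
poly = PolynomialFeatures(degree=4, include_bias=False)
Z = poly.fit_transform(X)
trend = Lars(n_nonzero_coefs=6).fit(Z, y)

# Gaussian process (Kriging-like) model of the trend residuals
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X, y - trend.predict(Z))

def surrogate(X_new):
    return trend.predict(poly.transform(X_new)) + gp.predict(X_new)

print(surrogate(np.array([[0.1, -0.2]])))
```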
On the use of L-band microwave and multi-mission EO data for high resolution soil moisture
NASA Astrophysics Data System (ADS)
Bitar, Ahmad Al; Merlin, Olivier; Malbeteau, Yoann; Molero-Rodenas, Beatriz; Zribi, Mehrez; Sekhar, Muddu; Tomer, Sat Kumar; José Escorihuela, Maria; Stefan, Vivien; Suere, Christophe; Mialon, Arnaud; Kerr, Yann
2017-04-01
Sub-kilometric soil moisture maps have been increasingly mentioned as a need in the scientific community for many applications, ranging from agronomy to hydrology (Wood et al. 2011). For example, this type of dataset will become essential to support the current evolution of the land surface and hydrologic modelling communities towards high-resolution global modelling. However, the different sensors differ in their ability to monitor soil moisture. L-band microwave EO provides, at a coarse resolution, the information most sensitive to surface soil moisture when compared to C-band microwave, optical sensors or C-band SAR. On the other hand, optical and radar sensors provide the spatial distribution of associated variables like surface soil moisture, surface temperature or vegetation leaf area index. This paper describes two complementary fusion approaches to obtain such data from optical or SAR sensors in combination with microwave EO, and more precisely L-band microwave from the SMOS mission. The first approach, called MAPSM, is based on the use of high-resolution soil moisture from SAR and microwave. The two types of sensors have all-weather capabilities. The approach uses the new concept of water change capacity (Tomer et al. 2015, 2016). It has been applied to the Berambadi watershed in South India, which is characterised by high cloud coverage. The second approach, called Dispatch, is based on the use of optical sensors in a physical disaggregation approach. It is a well-established approach (Merlin et al. 2012, Malbeteau et al. 2015) that has been implemented operationally in the CATDS (Centre Aval de Traitement des Données SMOS) processing centre (Molero et al. 2016). An analysis of the complementarity of the two approaches is presented. The results show the performance of the methods when compared to existing soil moisture monitoring networks in arid, sub-tropical and humid environments. They emphasize the need for large inter-comparison studies to qualify such products over different climatic zones, and the need for an adaptive multi-sensor approach. The availability of the recent Sentinel-1, -2 and -3 missions from ESA provides an exceptional environment to apply such algorithms at larger scales.
Mobbing calls signal predator category in a kin group-living bird species
Griesser, Michael
2009-01-01
Many prey species gather together to approach and harass their predators despite the associated risks. While mobbing, prey usually utter calls and previous experiments have demonstrated that mobbing calls can convey information about risk to conspecifics. However, the risk posed by predators also differs between predator categories. The ability to communicate predator category would be adaptive because it would allow other mobbers to adjust their risk taking. I tested this idea in Siberian jays Perisoreus infaustus, a group-living bird species, by exposing jay groups to mounts of three hawk and three owl species of varying risks. Groups immediately approached to mob the mount and uttered up to 14 different call types. Jays gave more calls when mobbing a more dangerous predator and when in the presence of kin. Five call types were predator-category-specific and jays uttered two hawk-specific and three owl-specific call types. Thus, this is one of the first studies to demonstrate that mobbing calls can simultaneously encode information about both predator category and the risk posed by a predator. Since antipredator calls of Siberian jays are known to specifically aim at reducing the risk to relatives, kin-based sociality could be an important factor in facilitating the evolution of predator-category-specific mobbing calls. PMID:19474047
Berzosa, Álvaro; Barandica, Jesús M; Fernández-Sánchez, Gonzalo
2014-01-01
In recent years, several methodologies have been developed for the quantification of greenhouse gas (GHG) emissions. However, determining who is responsible for these emissions is also quite challenging. The most common approach is to assign emissions to the producer (based on the Kyoto Protocol), but proposals also exist for its allocation to the consumer (based on an ecological footprint perspective) and for a hybrid approach called shared responsibility. In this study, the existing proposals and standards regarding the allocation of GHG emissions responsibilities are analyzed, focusing on their main advantages and problems. A new model of shared responsibility that overcomes some of the existing problems is also proposed. This model is based on applying the best available technologies (BATs). This new approach allocates the responsibility between the producers and the final consumers based on the real capacity of each agent to reduce emissions. The proposed approach is demonstrated using a simple case study of a 4-step life cycle of ammonia nitrate (AN) fertilizer production. The proposed model has the characteristics that the standards and publications for assignment of GHG emissions responsibilities demand. This study presents a new way to assign responsibilities that pushes all the actors in the production chain, including consumers, to reduce pollution. © 2013 SETAC.
COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...
This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).
CALL: Past, Present and Future--A Bibliometric Approach
ERIC Educational Resources Information Center
Jung, Udo O. H.
2005-01-01
A bibliometric approach is used not only to sketch out the development of CALL during the last 25 years, but also to assess the contribution of educational technology to 21st century foreign-language teaching and learning. This study is based on the six instalments of the author's International (and multilingual) Bibliography of Computer Assisted…
Phylogenetic signal in the acoustic parameters of the advertisement calls of four clades of anurans.
Gingras, Bruno; Mohandesan, Elmira; Boko, Drasko; Fitch, W Tecumseh
2013-07-01
Anuran vocalizations, especially their advertisement calls, are largely species-specific and can be used to identify taxonomic affiliations. Because anurans are not vocal learners, their vocalizations are generally assumed to have a strong genetic component. This suggests that the degree of similarity between advertisement calls may be related to large-scale phylogenetic relationships. To test this hypothesis, advertisement calls from 90 species belonging to four large clades (Bufo, Hylinae, Leptodactylus, and Rana) were analyzed. Phylogenetic distances were estimated based on the DNA sequences of the 12S mitochondrial ribosomal RNA gene, and, for a subset of 49 species, on the rhodopsin gene. Mean values for five acoustic parameters (coefficient of variation of root-mean-square amplitude, dominant frequency, spectral flux, spectral irregularity, and spectral flatness) were computed for each species. We then tested for phylogenetic signal on the body-size-corrected residuals of these five parameters, using three statistical tests (Moran's I, Mantel, and Blomberg's K) and three models of genetic distance (pairwise distances, Abouheif's proximities, and the variance-covariance matrix derived from the phylogenetic tree). A significant phylogenetic signal was detected for most acoustic parameters on the 12S dataset, across statistical tests and genetic distance models, both for the entire sample of 90 species and within clades in several cases. A further analysis on a subset of 49 species using genetic distances derived from rhodopsin and from 12S broadly confirmed the results obtained on the larger sample, indicating that the phylogenetic signals observed in these acoustic parameters can be detected using a variety of genetic distance models derived either from a variable mitochondrial sequence or from a conserved nuclear gene. We found a robust relationship, in a large number of species, between anuran phylogenetic relatedness and acoustic similarity in the advertisement calls in a taxon with no evidence for vocal learning, even after correcting for the effect of body size. This finding, covering a broad sample of species whose vocalizations are fairly diverse, indicates that the intense selection on certain call characteristics observed in many anurans does not eliminate all acoustic indicators of relatedness. Our approach could potentially be applied to other vocal taxa.
Heat stress incident prevalence and tennis matchplay performance at the Australian Open.
Smith, Matthew T; Reid, Machar; Kovalchik, Stephanie; Woods, Tim O; Duffield, Rob
2018-05-01
To examine the association of wet bulb globe temperature (WBGT) with the occurrence of heat-related incidents and changes in behavioural and matchplay characteristics in men's Grand Slam tennis. On-court calls for trainers, doctors, cooling devices and water, post-match medical consults and matchplay characteristic data were collected from 360 Australian Open matches (first 4 rounds, 2014-2016). Data were referenced against estimated WBGT and categorised into standard zones. Generalised linear models assessed the association of WBGT zone with heat-related medical incidences and matchplay variables. On-court calls for the doctor (47% increase per zone, p=0.001), heat-related events (41%, p=0.019), cooling devices (53%, p<0.001), and post-match heat-related consults (87%, p=0.014) increased with each rise in estimated WBGT zone. At WBGTs >32°C and >28°C, significant increases in heat-related calls (p=0.019) and calls for cooling devices (p<0.001), respectively, were evident. The number of winners (-2.5±0.006% per zone, p<0.001) and net approaches (-7.1±0.008%, p<0.001) reduced as the estimated WBGT zone increased, while return points won increased (1.75±0.46, p<0.001). When matches were adjusted for the quality of the opponent (Elo rating), the number of aces (5±0.02%, p=0.003) increased with estimated WBGT zone, whilst net approaches decreased (7.6±0.013%, p<0.001). Increased estimated WBGT increased total match doctor and trainer consults for heat-related incidents, post-match heat-related consults (>32°C) and cooling device callouts (>28°C). However, few matchplay characteristics were noticeably affected, with only reduced net approaches and increased aces evident in higher estimated WBGT environments. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
GridLAB-D: An Agent-Based Simulation Framework for Smart Grids
Chassin, David P.; Fuller, Jason C.; Djilali, Ned
2014-01-01
Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.
Berry, Roberta M; Borenstein, Jason; Butera, Robert J
2013-06-01
This manuscript describes a pilot study in ethics education employing a problem-based learning approach to the study of novel, complex, ethically fraught, unavoidably public, and unavoidably divisive policy problems, called "fractious problems," in bioscience and biotechnology. Diverse graduate and professional students from four US institutions and disciplines spanning science, engineering, humanities, social science, law, and medicine analyzed fractious problems employing "navigational skills" tailored to the distinctive features of these problems. The students presented their results to policymakers, stakeholders, experts, and members of the public. This approach may provide a model for educating future bioscientists and bioengineers so that they can meaningfully contribute to the social understanding and resolution of challenging policy problems generated by their work.
Verbist, Bie; Clement, Lieven; Reumers, Joke; Thys, Kim; Vapirev, Alexander; Talloen, Willem; Wetzels, Yves; Meys, Joris; Aerssens, Jeroen; Bijnens, Luc; Thas, Olivier
2015-02-22
Deep-sequencing allows for an in-depth characterization of sequence variation in complex populations. However, technology associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores which are derived from a quadruplet of intensities, one channel for each nucleotide type for Illumina sequencing. The highest intensity of the four channels determines the base that is called. Mismatch bases can often be corrected by the second best base, i.e. the base with the second highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that explores quality scores and second best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets) which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Using mixtures of HCV plasmids we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP-callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has a superb sensitivity and specificity for variants with frequencies above 0.4%. Unlike the competitors, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4% which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation step. ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second best base calls appeared very promising in our data exploration phase, their utility was limited. They provided a slight increase in sensitivity, which however does not warrant the additional computational cost of running the offline base caller. Apparently a lot of information is already contained in the quality scores enabling the model based clustering procedure to adjust the majority of the sequencing errors. Overall the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low frequency variant detection.
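The quality scores mentioned above are Phred scores, which encode the base-call error probability as p = 10^(-Q/10); this is the quantity a model-based clustering method such as ViVaMBC can build its error model on. A small sketch of the conversion from a FASTQ-style ASCII quality string (offset 33 assumed) is shown below; the full codon-level mixture model is of course considerably more involved.

```python
def phred_to_error_prob(qual_string, offset=33):
    """Convert an ASCII-encoded Phred quality string (e.g. from a FASTQ record)
    into per-base error probabilities: p_error = 10 ** (-Q / 10)."""
    return [10 ** (-(ord(ch) - offset) / 10) for ch in qual_string]

# 'I' encodes Q=40 -> p=0.0001; '#' encodes Q=2 -> p~0.63
print(phred_to_error_prob("II#"))
```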
Metamodeling and the Critic-based approach to multi-level optimization.
Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J
2012-08-01
Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress both in hardware (multiprocessor systems and specialized processors) and software (Gurobi) we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for the daily operation. In this paper, we present the theoretical basis and related experiments for solving the multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries for the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results, compared to the iterative MIP approach. Copyright © 2012 Elsevier Ltd. All rights reserved.
Shahamiri, Seyed Reza; Salim, Siti Salwah Binti
2014-09-01
Automatic speech recognition (ASR) can be very helpful for speakers who suffer from dysarthria, a neurological disability that damages the control of motor speech articulators. Although a few attempts have been made to apply ASR technologies to sufferers of dysarthria, previous studies show that such ASR systems have not attained an adequate level of performance. In this study, a dysarthric multi-networks speech recognizer (DM-NSR) model is provided using a realization of the multi-views multi-learners approach called multi-nets artificial neural networks, which tolerates the variability of dysarthric speech. In particular, the DM-NSR model employs several ANNs (as learners) to approximate the likelihood of ASR vocabulary words and to deal with the complexity of dysarthric speech. The proposed DM-NSR approach was presented in both speaker-dependent and speaker-independent paradigms. In order to highlight the performance of the proposed model over legacy models, multi-views single-learner models of the DM-NSRs were also provided and their efficiencies were compared in detail. Moreover, a comparison among the prominent dysarthric ASR methods and the proposed one is provided. The results show that the DM-NSR improved the recognition rate by up to 24.67% and reduced the error rate by up to 8.63% over the reference model.
Airfoil Shape Optimization based on Surrogate Model
NASA Astrophysics Data System (ADS)
Mukesh, R.; Lingadurai, K.; Selvakumar, U.
2018-02-01
Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problems subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases like sensitivity analysis, design optimisation etc., where thousands or millions of simulations have to be carried out, this becomes prohibitively burdensome for designers. Nowadays approximation models, otherwise called surrogate models (SM), are widely employed in order to reduce the computational resources and time needed to analyse various engineering systems. Various approaches such as Kriging, neural networks, polynomials, Gaussian processes etc. are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
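The k-fold cross-validation step for choosing among candidate correlation (variogram-like) models can be sketched with scikit-learn; the kernels below are stand-ins for theoretical variogram models, and the cheap analytic test function replaces the panel/viscous flow solvers approximated in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 2))               # stand-in design of experiments
y = np.sin(6 * X[:, 0]) * np.cos(4 * X[:, 1])     # stand-in for an aerodynamic response

candidates = {
    "squared-exponential": RBF(length_scale=0.2),
    "matern-3/2": Matern(length_scale=0.2, nu=1.5),
    "matern-5/2": Matern(length_scale=0.2, nu=2.5),
}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    score = cross_val_score(gp, X, y, cv=5, scoring="r2").mean()   # 5-fold CV
    print(f"{name}: mean R^2 = {score:.3f}")
```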
Multilayer Markov Random Field models for change detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane
2015-09-01
In this paper, we give a comparative study on three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On one hand, we highlight the significance of the focused model family and we set them against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class comparison vs. direct approaches, usage of training data, various targeted application fields and different ways of Ground Truth generation, meantime informing the Reader in which roles the Multilayer MRFs can be efficiently applied. On the other hand we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions, if one uses them as pre-processing filters in multitemporal optical image analysis. In addition, they cover together a large range of applications, considering the different usage options of the three approaches.
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A method to specify and fit structural equation models in the Redundancy Analysis framework, based on so-called Extended Redundancy Analysis (ERA), has recently been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we present a simulation study with small samples. Moreover, we present an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Statistical physics of vehicular traffic and some related systems
NASA Astrophysics Data System (ADS)
Chowdhury, Debashish; Santen, Ludger; Schadschneider, Andreas
2000-05-01
In the so-called “microscopic” models of vehicular traffic, attention is paid explicitly to each individual vehicle, each of which is represented by a “particle”; the nature of the “interactions” among these particles is determined by the way the vehicles influence each others’ movement. Therefore, vehicular traffic, modeled as a system of interacting “particles” driven far from equilibrium, offers the possibility to study various fundamental aspects of truly nonequilibrium systems which are of current interest in statistical physics. Analytical as well as numerical techniques of statistical physics are being used to study these models in order to understand the rich variety of physical phenomena exhibited by vehicular traffic. Some of these phenomena, observed in vehicular traffic under different circumstances, include transitions from one dynamical phase to another, criticality and self-organized criticality, metastability and hysteresis, phase segregation, etc. In this critical review, written from the perspective of statistical physics, we explain the guiding principles behind all the main theoretical approaches. But we present detailed discussions on the results obtained mainly from the so-called “particle-hopping” models, particularly emphasizing those which have been formulated in recent years using the language of cellular automata.
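One of the best-known particle-hopping cellular automata of the kind reviewed here is the Nagel-Schreckenberg model; a minimal sketch follows. The road length, car density, speed limit and slowdown probability are illustrative values only.

```python
import random

def nagel_schreckenberg(length=100, n_cars=30, v_max=5, p_slow=0.3, steps=100):
    """Single-lane CA on a ring: each cell holds a car's velocity, or None if empty."""
    road = [None] * length
    for pos in random.sample(range(length), n_cars):
        road[pos] = 0
    for _ in range(steps):
        new_road = [None] * length
        for pos, v in enumerate(road):
            if v is None:
                continue
            gap = 1
            while road[(pos + gap) % length] is None:     # distance to the car ahead
                gap += 1
            v = min(v + 1, v_max, gap - 1)                # accelerate, then brake
            if v > 0 and random.random() < p_slow:        # random slowdown
                v -= 1
            new_road[(pos + v) % length] = v
        road = new_road
    return road

final_state = nagel_schreckenberg()
```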
Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok
2015-12-07
Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
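A toy version of linear interpolation smoothing over amino-acid n-grams conveys the idea: a sequence is scored under a per-location model that mixes trigram, bigram and unigram estimates. The interpolation weights, training sequences and alphabet handling below are illustrative placeholders, not the features or parameters used in the paper.

```python
import math
from collections import Counter

def interpolated_model(sequences, weights=(0.6, 0.3, 0.1)):
    """Return a smoothed trigram probability function P(c | a, b) built from
    linearly interpolated trigram, bigram and unigram estimates."""
    uni, bi, tri = Counter(), Counter(), Counter()
    total = 0
    for seq in sequences:
        total += len(seq)
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
        tri.update(zip(seq, seq[1:], seq[2:]))

    w_tri, w_bi, w_uni = weights

    def prob(a, b, c):
        p_uni = uni[c] / total if total else 0.0
        p_bi = bi[(b, c)] / uni[b] if uni[b] else 0.0
        p_tri = tri[(a, b, c)] / bi[(a, b)] if bi[(a, b)] else 0.0
        return w_tri * p_tri + w_bi * p_bi + w_uni * p_uni

    return prob

def log_likelihood(seq, prob):
    return sum(math.log(max(prob(a, b, c), 1e-12))
               for a, b, c in zip(seq, seq[1:], seq[2:]))

# Score a query against a model trained per subcellular location; the location
# whose model yields the highest likelihood would be the predicted one.
cytoplasm_model = interpolated_model(["MKTAYIAKQR", "MKKLLPTAAA"])
print(log_likelihood("MKTAAQR", cytoplasm_model))
```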
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which result in high computational costs. Variable-fidelity approximation-based design optimization approaches can achieve effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
Including Magnetostriction in Micromagnetic Models
NASA Astrophysics Data System (ADS)
Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis
2016-04-01
The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium oxides (Fe3-xTixO4), so-called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949), describing the magnetostrictive energy as an effective increase of the anisotropy constant, can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach based on work by Brown (1966) and by Kroner (1958) for including magnetostriction in micromagnetic codes which is suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest the more rigorously defined micromagnetic models exhibit higher coercivities and extended single-domain ranges when compared to more simplistic approaches.
Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks
Bennett, Kristin P.
2014-01-01
We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
Modeling Sustainable Food Systems.
Allen, Thomas; Prosperi, Paolo
2016-05-01
The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a central endeavor aimed at redirecting our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that ensure its essential outcomes are maintained or enhanced over time and across generations will help organizations and governmental institutions track progress towards sustainability and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.
Fournier, Véronique; Spranzi, Marta; Foureur, Nicolas; Brunet, Laurence
2015-01-01
Several approaches to clinical ethics consultation (CEC) exist in medical practice and are widely discussed in the clinical ethics literature; different models of CEC are classified according to their methods, goals, and the consultant's attitude. Although the "facilitation" model has been endorsed by the American Society for Bioethics and Humanities (ASBH) and is described in an influential manual, alternative approaches, such as advocacy, moral expertise, mediation, and engagement, are practiced and defended in the clinical ethics field. Our Clinical Ethics Center in Paris was founded in 2002 in the wake of the Patients' Rights Act, and to date it is the largest center providing consultation services in France. In this article we shall describe and defend our own approach to clinical ethics consultation, which we call the "Commitment Model," in comparison with other existing models. Indeed, commitment implies, among other meanings, continuity through time, a series of coherent actions, and the realization of important social goals. By drawing on a recent consultation case, we shall describe the main steps of our consultation procedure: interviews with major stakeholders, including patients and proxies; case conferences; and follow-up. We shall show why we have chosen the term "commitment" to represent our approach at three different but interrelated levels: commitment towards patients, within the case conference group, and towards society as a whole. Copyright 2015 The Journal of Clinical Ethics. All rights reserved.
Duda, Piotr; Jaworski, Maciej; Rutkowski, Leszek
2018-03-01
One of the greatest challenges in data mining is the processing and analysis of massive data streams. Contrary to traditional static data mining problems, data streams require that each element be processed only once, that the amount of allocated memory remain constant, and that the models incorporate changes in the investigated streams. The vast majority of available methods have been developed for data stream classification, and only a few attempt to solve regression problems, using various heuristic approaches. In this paper, we develop mathematically justified regression models working in a time-varying environment. More specifically, we study incremental versions of generalized regression neural networks, called IGRNNs, and we prove their tracking properties: weak (in probability) and strong (with probability one) convergence under various concept drift scenarios. First, we present the IGRNNs, based on Parzen kernels, for modeling stationary systems under nonstationary noise. Next, we extend our approach to modeling time-varying systems under nonstationary noise. We present several types of concept drift that are handled by our approach in such a way that weak and strong convergence hold under certain conditions. In a series of simulations, we compare our method with commonly used heuristic approaches based on forgetting mechanisms or sliding windows for dealing with concept drift. Finally, we apply our approach in a real-life scenario, solving the problem of currency exchange rate prediction.
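To make the incremental flavor concrete, here is a minimal sketch of a recursive kernel regression estimator of the Nadaraya-Watson/GRNN type: kernel-weighted sums are accumulated on a fixed grid of query points, so each stream element is processed once and memory stays constant, while the bandwidth shrinks as data arrive. This illustrates only the general mechanism, not the IGRNN construction or its convergence analysis; the class name, grid, and bandwidth schedule are assumptions.

```python
import numpy as np

class IncrementalKernelRegressor:
    """Recursive Nadaraya-Watson / GRNN-style estimator.
    Keeps constant memory by accumulating kernel-weighted sums on a fixed
    grid of query points; the bandwidth shrinks as more data arrive."""

    def __init__(self, grid, h0=0.5, decay=0.25):
        self.grid = np.asarray(grid, dtype=float)    # query points
        self.num = np.zeros_like(self.grid)          # running sum of K * y
        self.den = np.zeros_like(self.grid)          # running sum of K
        self.h0, self.decay, self.n = h0, decay, 0

    def update(self, x, y):
        self.n += 1
        h = self.h0 * self.n ** (-self.decay)        # decreasing bandwidth
        k = np.exp(-0.5 * ((self.grid - x) / h) ** 2)  # Gaussian (Parzen) kernel
        self.num += k * y
        self.den += k

    def predict(self):
        return self.num / np.maximum(self.den, 1e-12)

# Stream a noisy sine and track the regression function on the grid.
rng = np.random.default_rng(1)
model = IncrementalKernelRegressor(grid=np.linspace(0, 1, 50))
for _ in range(5000):
    x = rng.random()
    model.update(x, np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal())
print(np.round(model.predict()[:5], 3))
```

A forgetting-based heuristic of the kind the paper compares against would instead multiply the running sums by a factor slightly below one at each step, trading the convergence guarantees for faster adaptation to drift.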
Adaptive hidden Markov model with anomaly states for price manipulation detection.
Cao, Yi; Li, Yuhua; Coleman, Sonya; Belatreche, Ammar; McGinnity, Thomas Martin
2015-02-01
Price manipulation refers to the activities of traders who use carefully designed trading behaviors to push the underlying equity prices up or down in order to make profits. With increasing volumes and frequency of trading, price manipulation can be extremely damaging to the proper functioning and integrity of capital markets. The existing literature focuses either on empirical studies of market abuse cases or on analysis of particular manipulation types under certain assumptions. Effective approaches for analyzing and detecting price manipulation in real time are yet to be developed. This paper proposes a novel approach, called the adaptive hidden Markov model with anomaly states (AHMMAS), for modeling and detecting price manipulation activities. Together with wavelet transformations and gradients as feature extraction methods, the AHMMAS model supports price manipulation detection and basic manipulation type recognition. Evaluation experiments conducted on tick data for seven stocks from NASDAQ and the London Stock Exchange, and on 10 stock price series simulated with stochastic differential equations, show that the proposed AHMMAS model can effectively detect price manipulation patterns and outperforms the selected benchmark models.
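As a loose illustration of the general idea (not the adaptive anomaly-state construction, wavelet features, or training procedure from the paper), the sketch below runs a fixed two-state Gaussian HMM forward filter over price gradients and reports the filtered probability of the wide-variance "anomaly" state. All parameter values and the injected manipulation-like ramp are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * (np.log(2.0 * np.pi * sigma**2) + ((x - mu) / sigma) ** 2)

def hmm_anomaly_filter(obs, trans, mus, sigmas):
    """Forward filtering for a small Gaussian HMM. State 0 models normal
    price-change behavior, state 1 an 'anomaly' regime with a wider emission
    distribution. Returns the filtered probability of the anomaly state."""
    n_states = len(mus)
    log_alpha = np.full(n_states, -np.log(n_states))    # uniform initial state
    log_trans = np.log(trans)
    p_anomaly = np.empty(len(obs))
    for t, x in enumerate(obs):
        log_emit = np.array([gaussian_logpdf(x, mus[k], sigmas[k])
                             for k in range(n_states)])
        # alpha_t(j) proportional to emit_j(x_t) * sum_i alpha_{t-1}(i) * A_ij
        log_alpha = log_emit + logsumexp(log_alpha[:, None] + log_trans, axis=0)
        log_alpha -= logsumexp(log_alpha)                # normalize for stability
        p_anomaly[t] = np.exp(log_alpha[1])
    return p_anomaly

# Toy example: gradients (first differences) of a price series with an injected ramp.
rng = np.random.default_rng(2)
price = np.cumsum(0.01 * rng.standard_normal(500))
price[300:305] += np.linspace(0.0, 0.5, 5)               # manipulation-like push-up
grads = np.diff(price)                                    # gradient features
trans = np.array([[0.99, 0.01],
                  [0.10, 0.90]])                          # sticky normal state
p = hmm_anomaly_filter(grads, trans, mus=[0.0, 0.0], sigmas=[0.01, 0.1])
print("max anomaly probability:", round(float(p.max()), 3))
```

In the paper's setting the emission models and the set of anomaly states are learned and adapted from data rather than fixed by hand as they are here.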
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBC holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, uses only the FCN algorithm to carry out RBC prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBC extraction. Both models achieve good segmentation accuracy. In addition, the second model performs much better in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBC phase images are first numerically reconstructed from RBC holograms recorded with off-axis digital holographic microscopy. Then, some RBC phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBC phase images is predicted as either foreground or background using the trained FCN models. The RBC prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled watershed transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBC phase images and that much better RBC separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
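The combination of a pixel-wise foreground prediction with marker-controlled watershed can be sketched as follows. This is a generic recipe using SciPy and scikit-image (distance-transform maxima as internal markers), not the authors' exact pipeline; the threshold, marker spacing, and toy probability map are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_cells(foreground_prob, threshold=0.5, min_marker_distance=5):
    """Marker-controlled watershed on top of a per-pixel foreground
    probability map (e.g. the output of a segmentation network).
    Touching cells in the binary mask are split along watershed lines."""
    mask = foreground_prob > threshold                       # binary cell mask
    distance = ndi.distance_transform_edt(mask)              # distance to background
    # Internal markers: one seed per local distance maximum (cell center).
    peaks = peak_local_max(distance, min_distance=min_marker_distance, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted distance map from the markers, restricted to the mask.
    return watershed(-distance, markers, mask=mask)

# Toy usage with a synthetic probability map of two nearby blobs.
yy, xx = np.mgrid[0:64, 0:64]
prob = np.maximum(np.exp(-(((yy - 30) ** 2 + (xx - 25) ** 2) / 60.0)),
                  np.exp(-(((yy - 32) ** 2 + (xx - 40) ** 2) / 60.0)))
labels = separate_cells(prob)
print("number of cells found:", labels.max())
```

Using network predictions as internal markers, as described in the abstract, serves the same purpose as the distance-transform maxima here: each marker seeds one catchment basin, so adjacent cells that merge in the raw foreground mask are separated by the watershed lines between basins.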